Optimal sampling strategies for detecting zoonotic disease epidemics.
Directory of Open Access Journals (Sweden)
Jake M Ferguson; Jessica B Langebrake; Vincent L Cannataro; Andres J Garcia; Elizabeth A Hamman; Maia Martcheva; Craig W Osenberg
2014-06-01
Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
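The switching behaviour described above can be illustrated with a toy sketch (all rates, costs and the detection model below are invented for illustration, not the paper's West Nile virus parameter estimates): prevalence grows in a linearized host-vector system after a vector-side introduction, and a fixed budget is allocated entirely to whichever population gives the higher detection probability in each survey round.

```python
# Toy linearized host-vector model (all rates, costs and initial conditions
# below are invented for illustration, not the paper's WNV parameters).
a, b, c = 0.5, 0.1, -0.2            # host infection by vectors; vector gain/loss
cost_host, cost_vector = 1.5, 1.0   # cost per serological sample
budget = 30.0                       # sampling budget per survey round

def prevalences(dt=0.01, t_max=20.0):
    # Disease introduced in the vector population at trace prevalence;
    # simple Euler integration of dH/dt = a*V, dV/dt = b*H + c*V.
    H, V, t, traj = 0.0, 1e-6, 0.0, []
    while t <= t_max:
        traj.append((t, H, V))
        H, V = H + dt * a * V, V + dt * (b * H + c * V)
        t += dt
    return traj

def detect_prob(prev, n):
    # Chance that at least one of n sampled individuals tests positive.
    return 1.0 - (1.0 - min(prev, 1.0)) ** n

switch = None
for t, H, V in prevalences():
    p_host = detect_prob(H, int(budget / cost_host))    # 20 host samples
    p_vec = detect_prob(V, int(budget / cost_vector))   # 30 vector samples
    if switch is None and p_host > p_vec:
        switch = t  # first time sampling hosts beats sampling vectors
print(f"switch from vector to host sampling near t = {switch:.2f}")
```

With these invented rates the best single-population allocation flips from vectors to hosts a few time units after introduction, mirroring the abrupt switch the authors describe for the known-introduction-time case.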
A proposal of optimal sampling design using a modularity strategy
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has historically been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
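The paper's sampling-oriented modularity index is not reproduced in the abstract, but the classical weighted (Newman-Girvan) modularity that the WDN-oriented variants build on can be sketched for a toy pipe network; the network topology, pipe weights and district partition below are hypothetical.

```python
# Weighted Newman-Girvan modularity for a node partition of a toy pipe
# network (nodes, weights and districts invented for illustration).
edges = {  # (node_a, node_b): pipe weight (e.g. length or importance)
    (0, 1): 2.0, (1, 2): 1.5, (2, 0): 1.0,   # district A
    (3, 4): 2.0, (4, 5): 1.5, (5, 3): 1.0,   # district B
    (2, 3): 0.2,                             # single inter-district link
}
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

def modularity(edges, partition):
    W = sum(edges.values())            # total edge weight
    strength = {}                      # weighted degree per node
    for (u, v), w in edges.items():
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
    q = 0.0
    for (u, v), w in edges.items():
        if partition[u] == partition[v]:
            q += w / W                 # observed intra-module weight fraction
    for module in set(partition.values()):
        s = sum(strength[n] for n in partition if partition[n] == module)
        q -= (s / (2.0 * W)) ** 2      # expected intra-module fraction
    return q

q = modularity(edges, partition)
print(f"modularity Q = {q:.3f}")
```

Maximizing such an index over candidate partitions, jointly with a meter-cost objective, is the multiobjective step the abstract describes.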
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using the full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
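As a rough illustration of determinant (D-) optimal sampling, the sketch below searches candidate time points for a deliberately simplified one-compartment bolus model with invented parameters; the study itself used a two-compartment model and PopED, so this only shows the principle of maximizing the Fisher information determinant.

```python
import itertools, math

# Toy D-optimal design for a one-compartment IV bolus model
# C(t) = (Dose/V) * exp(-(CL/V) * t). Parameters below are invented,
# not the melphalan population estimates from the paper.
dose, CL, V, cv = 100.0, 25.0, 40.0, 0.15   # mg, L/h, L, proportional error
candidates = [0.1, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0]  # hours

def conc(t):
    return dose / V * math.exp(-CL / V * t)

def sensitivities(t):
    c = conc(t)
    dc_dCL = -c * t / V                       # dC/dCL
    dc_dV = c * (-1.0 / V + CL * t / V ** 2)  # dC/dV
    return dc_dCL, dc_dV

def det_fim(times):
    # 2x2 Fisher information for (CL, V) with proportional residual error.
    f11 = f12 = f22 = 0.0
    for t in times:
        s1, s2 = sensitivities(t)
        w = 1.0 / (cv * conc(t)) ** 2
        f11 += w * s1 * s1
        f12 += w * s1 * s2
        f22 += w * s2 * s2
    return f11 * f22 - f12 * f12

best = max(itertools.combinations(candidates, 2), key=det_fim)
print("D-optimal 2-point design:", best)
```

For this model and error structure the determinant grows with the spread of the sampling times, so the search picks the earliest and latest candidates.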
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
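A minimal sketch of the two initial-design choices, without the subsequent optimization step; the maximin (smallest pairwise distance) score is just one of several space-filling criteria used in this literature.

```python
import random, itertools, math

random.seed(7)
n, d = 10, 2  # 10 samples in 2 dimensions

def lhs(n, d, midpoint=False):
    # One stratum per sample per dimension; midpoint LHS fixes each
    # coordinate at the stratum centre, random LHS jitters inside it.
    cols = []
    for _ in range(d):
        perm = list(range(n))
        random.shuffle(perm)
        cols.append(perm)
    design = []
    for i in range(n):
        pt = []
        for j in range(d):
            cell = cols[j][i]
            u = 0.5 if midpoint else random.random()
            pt.append((cell + u) / n)
        design.append(pt)
    return design

def maximin(design):
    # Space-filling score: smallest pairwise distance (bigger is better).
    return min(math.dist(a, b) for a, b in itertools.combinations(design, 2))

rand_score = maximin(lhs(n, d))
mid_score = maximin(lhs(n, d, midpoint=True))
print(f"maximin distance: random LHS {rand_score:.3f}, midpoint LHS {mid_score:.3f}")
```

An OLHS code would now permute the column assignments to maximize the chosen criterion; the paper's point is that the outcome depends on which of these two designs the optimizer starts from.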
Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A
2013-05-01
Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model, AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.
Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh
2012-02-28
Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
International Nuclear Information System (INIS)
Hampton, Jerrad; Doostan, Alireza
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ 1 -minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
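The coherence parameter can be estimated empirically for the Legendre basis; a minimal sketch, assuming uniform sampling versus Chebyshev sampling with the corresponding density-ratio weight (a standard preconditioning choice in this literature, not the paper's MCMC scheme).

```python
import math, random

random.seed(1)
order = 8  # maximum polynomial order

def legendre(j, x):
    # Three-term recurrence for P_j, normalised in L2([-1, 1], dx/2).
    p0, p1 = 1.0, x
    if j == 0:
        p = p0
    elif j == 1:
        p = p1
    else:
        for k in range(2, j + 1):
            p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
        p = p1
    return math.sqrt(2 * j + 1) * p

def basis_sum(x, weight=1.0):
    # (Weighted) squared basis sum B(x); coherence is its sup over samples.
    return weight * sum(legendre(j, x) ** 2 for j in range(order + 1))

uni = max(basis_sum(random.uniform(-1, 1)) for _ in range(2000))
che = 0.0
for _ in range(2000):
    x = math.cos(math.pi * random.random())     # Chebyshev-distributed point
    w = (math.pi / 2) * math.sqrt(1 - x * x)    # density ratio uniform/Chebyshev
    che = max(che, basis_sum(x, w))
print(f"empirical coherence: uniform {uni:.1f}, Chebyshev-weighted {che:.1f}")
```

The weighted scheme tames the growth of the Legendre basis near the endpoints, which is the mechanism behind the weaker order dependence the abstract mentions.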
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using the acidic extraction common in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
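The pKa-versus-extraction-pH effect follows from the ionization state of the analyte; a minimal Henderson-Hasselbalch sketch for monoprotic acids (the pKa values below are hypothetical examples, and basic compounds behave in the opposite sense).

```python
def neutral_fraction_acid(pka, ph):
    # Henderson-Hasselbalch: fraction of a monoprotic acid that remains
    # neutral (protonated) at a given pH. Neutral species are generally
    # retained better on reversed-phase SPE sorbents.
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Hypothetical pKa values for illustration (not measured values).
compounds = {"acidic antibiotic (pKa 3.0)": 3.0,
             "weakly acidic antibiotic (pKa 7.5)": 7.5}
for name, pka in compounds.items():
    for ph in (3.0, 7.0):
        f = neutral_fraction_acid(pka, ph)
        print(f"{name} at pH {ph}: {f:.2%} neutral")
```

A low-pKa acid is almost fully ionized at neutral pH but half neutral at pH 3, which is consistent with the abstract's finding that such compounds extract better under acidic conditions.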
Directory of Open Access Journals (Sweden)
Khosro Mehdi Khanlou
2011-01-01
Full Text Available Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 plants per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-plant sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.
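The diversity measures mentioned, expected heterozygosity (He) and Shannon's index (I), can be sketched for dominant markers; the simulated dataset below is invented and much smaller than the study's 283 AFLP loci, so it only illustrates how the estimates depend on sample size.

```python
import math, random

random.seed(42)

# Simulated dominant-marker dataset: 50 individuals x 20 loci with
# made-up band frequencies (the real study used 283 AFLP loci).
n_ind, n_loci = 50, 20
freqs = [random.uniform(0.1, 0.9) for _ in range(n_loci)]
pop = [[1 if random.random() < f else 0 for f in freqs] for _ in range(n_ind)]

def diversity(sample):
    he, shannon = 0.0, 0.0
    for locus in range(n_loci):
        p = sum(ind[locus] for ind in sample) / len(sample)
        he += 2.0 * p * (1.0 - p)           # biallelic expected heterozygosity
        for q in (p, 1.0 - p):
            if q > 0:
                shannon -= q * math.log(q)  # Shannon diversity per locus
    return he / n_loci, shannon / n_loci

for n in (5, 15, 20, 50):
    sub = random.sample(pop, n)
    he, sh = diversity(sub)
    print(f"n = {n:2d}: He = {he:.3f}, I = {sh:.3f}")
```

Rerunning the subsampling many times, as the paper's simulations do, shows how small samples tend to understate both indices.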
Optimal experiment design in a filtering context with application to sampled network data
Singhal, Harsh; Michailidis, George
2010-01-01
We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...
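The filtering side of the problem can be illustrated with a scalar stand-in. The paper solves a multivariate E-optimal design as a second-order cone program; this sketch only shows the underlying trade-off, namely how the steady-state filter variance for a random walk responds to the packet-sampling rate (all constants invented).

```python
def steady_state_error(q, r):
    # Fixed point of the scalar Kalman Riccati recursion for a random walk:
    # predict P <- P + q, then update P <- P * r / (P + r).
    p = 1.0
    for _ in range(200):
        p_pred = p + q
        p = p_pred * r / (p_pred + r)
    return p

q = 0.5           # random-walk drift variance per step (invented)
base_noise = 4.0  # measurement variance at unit sampling rate (invented)
errors = {}
for rate in (0.25, 1.0, 4.0):
    # Higher sampling rates mean lower effective measurement noise
    # (variance taken as roughly inversely proportional to the rate).
    errors[rate] = steady_state_error(q, base_noise / rate)
for rate, p in sorted(errors.items()):
    print(f"sampling rate {rate:>4}: steady-state filter variance {p:.3f}")
```

The design problem in the paper is then to allocate sampling rates across many such flows, subject to a budget, so as to optimize the worst-case (E-optimal) steady-state error.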
Optimal strategies for pricing general insurance
Emms, P.; Haberman, S.; Savoulli, I.
2006-01-01
Optimal premium pricing policies in a competitive insurance environment are investigated using approximation methods and simulation of sample paths. The market average premium is modelled as a diffusion process, with the premium as the control function and the maximization of the expected total utility of wealth, over a finite time horizon, as the objective. In order to simplify the optimisation problem, a linear utility function is considered and two particular premium strategies are adopted...
Multi-infill strategy for kriging models used in variable fidelity optimization
Directory of Open Access Journals (Sweden)
Chao SONG
2018-03-01
Full Text Available In this paper, a computationally efficient optimization method for aerodynamic design has been developed. The approach combines a low-fidelity model with a multi-infill strategy. Low-fidelity data are employed to provide a good global trend for model prediction, and multiple sample points chosen by different infill criteria in each updating cycle are used to enhance the exploitation and exploration ability of the optimization approach. By taking advantage of the low-fidelity model and the multi-infill strategy, no initial samples for the high-fidelity model are needed. This approach is applied to an airfoil design case and a high-dimensional wing design case. It saves a large number of high-fidelity function evaluations for initial model construction. Moreover, a faster reduction of the aerodynamic objective function is achieved when compared to ordinary kriging using the multi-infill strategy and to a variable-fidelity model using a single infill criterion. The results indicate that the developed approach has a promising application to efficient aerodynamic design when high-fidelity analyses are involved. Keywords: Aerodynamics, Infill criteria, Kriging models, Multi-infill, Optimization
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
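The underlying machinery, a finite-state, finite-action, infinite-horizon discounted Markov decision process solved by value iteration, can be sketched as follows; the transition probabilities and yields below are invented for illustration, not outputs of the shrub production model.

```python
# Toy finite MDP in the spirit of the defoliation problem: shrub vigor
# states, defoliation actions, discounted infinite horizon.
states = ["low", "medium", "high"]
actions = ["rest", "harvest"]
P = {  # P[state][action] = {next_state: probability}   (invented)
    "low":    {"rest": {"low": 0.4, "medium": 0.6},
               "harvest": {"low": 0.9, "medium": 0.1}},
    "medium": {"rest": {"medium": 0.4, "high": 0.6},
               "harvest": {"low": 0.5, "medium": 0.5}},
    "high":   {"rest": {"high": 1.0},
               "harvest": {"medium": 0.6, "high": 0.4}},
}
R = {  # immediate yield of harvesting depends on vigor   (invented)
    "low": {"rest": 0.0, "harvest": 1.0},
    "medium": {"rest": 0.0, "harvest": 3.0},
    "high": {"rest": 0.0, "harvest": 6.0},
}
gamma = 0.9  # discount factor for future yields

V = {s: 0.0 for s in states}
for _ in range(500):  # value iteration to near-convergence
    V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items())
                for a in actions)
         for s in states}
policy = {s: max(actions,
                 key=lambda a: R[s][a]
                 + gamma * sum(p * V[t] for t, p in P[s][a].items()))
          for s in states}
print("optimal policy:", policy)
```

With these made-up numbers the optimal policy rests low-vigor stands and harvests only vigorous ones, the kind of vigor-dependent rule the species comparisons above describe.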
Optimal sampling strategy for data mining
International Nuclear Information System (INIS)
Ghaffar, A.; Shahbaz, M.; Mahmood, W.
2013-01-01
Modern technologies such as the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such massive amounts of data that it is becoming very difficult to analyze and understand them, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of the computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care because many factors are involved in determining the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size, called the 'sufficient sample size', which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
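The abstract does not state which statistical formula the method uses; as a plausible stand-in, Cochran's sample-size formula with finite-population correction shows the general shape of such a calculation (confidence level, margin of error and assumed proportion are parameters the user sets).

```python
import math

def sufficient_sample_size(population, confidence_z=1.96, margin=0.05, p=0.5):
    # Cochran's formula with finite-population correction; used here as a
    # hypothetical stand-in for the paper's unspecified statistical formula.
    n0 = confidence_z ** 2 * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

for N in (1_000, 100_000, 10_000_000):
    print(f"dataset of {N:>10,} records -> sample {sufficient_sample_size(N):>4}")
```

Note how the required sample size saturates: beyond roughly a hundred thousand records the sample barely grows, which is why sampling scales so well for huge datasets.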
Determining an optimal supply chain strategy
Directory of Open Access Journals (Sweden)
Intaher M. Ambe
2012-11-01
Full Text Available In today’s business environment, many companies want to become efficient and flexible, but have struggled, in part, because they have not been able to formulate optimal supply chain strategies. Often this is a result of insufficient knowledge about the costs involved in maintaining supply chains and about the impact of the supply chain on their operations. Hence, these companies find it difficult to manufacture at a competitive cost and to respond quickly and reliably to market demand. Mismatched strategies are the root cause of the problems that plague supply chains, and supply chain strategies based on a one-size-fits-all approach often fail. The purpose of this article is to suggest instruments for determining an optimal supply chain strategy. This article, which is conceptual in nature, provides a review of current supply chain strategies and suggests a framework for determining an optimal strategy.
Optimal energy management strategy for self-reconfigurable batteries
International Nuclear Information System (INIS)
Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter
2017-01-01
This paper proposes a novel energy management strategy for multi-cell high voltage batteries where the current through each cell can be controlled, called self-reconfigurable batteries. An optimized control strategy further enhances the energy efficiency gained by the hardware architecture of those batteries. Currently, achieving cell equalization by using active balancing circuits is considered the best way to optimize the energy efficiency of the battery pack. This study demonstrates that optimizing the energy efficiency of self-reconfigurable batteries is no longer strongly correlated with cell balancing. According to the features of this novel battery architecture, the energy management strategy is formulated as a nonlinear dynamic optimization problem. To solve this optimal control problem, an optimization algorithm that generates the optimal discharge policy for a given driving cycle is developed based on dynamic programming and code vectorization. The simulation results show that the designed energy management strategy maximizes the system efficiency across the battery lifetime over conventional approaches. Furthermore, the present energy management strategy can be implemented online due to the reduced complexity of the optimization algorithm. - Highlights: • The energy efficiency of self-reconfigurable batteries is maximized. • The energy management strategy for the battery is formulated as an optimal control problem. • An optimization algorithm is developed using dynamic programming techniques and code vectorization. • Simulation studies are conducted to validate the proposed optimal strategy.
Synthesis of Optimal Strategies Using HyTech
DEFF Research Database (Denmark)
Bouyer, Patricia; Cassez, Franck; Larsen, Kim Guldstrand
2005-01-01
Priced timed (game) automata extend timed (game) automata with costs on both locations and transitions. The problem of synthesizing an optimal winning strategy for a priced timed game under some hypotheses has been shown decidable in [P. Bouyer, F. Cassez, E. Fleury, and K.G. Larsen. Optimal strategies in priced timed game automata. Research Report BRICS RS-04-4, Denmark, Feb. 2004. Available at http://www.brics.dk/RS/04/4/]. In this paper, we present an algorithm for computing the optimal cost and for synthesizing an optimal strategy in case one exists. We also describe the implementation...
International Nuclear Information System (INIS)
Oliver, Mike; Jensen, Michael; Chen, Jeff; Wong, Eugene
2009-01-01
Intensity-modulated arc therapy (IMAT) is a rotational variant of intensity-modulated radiation therapy (IMRT) that can be implemented with or without angular dose rate variation. The purpose of this study is to assess optimization strategies and initial conditions using a leaf position optimization (LPO) algorithm altered for variable dose rate IMAT. A concave planning target volume (PTV) with a central cylindrical organ at risk (OAR) was used in this study. The initial IMAT arcs were approximated by multiple static beams at 5 deg. angular increments where multi-leaf collimator (MLC) leaf positions were determined from the beam's eye view to irradiate the PTV but avoid the OAR. For the optimization strategy, two arcs with arc ranges of 280 deg. and 150 deg. were employed and plans were created using LPO alone, variable dose rate optimization (VDRO) alone, simultaneous LPO and VDRO, and sequential combinations of these strategies. To assess the MLC initialization effect, three single 360 deg. arc plans with different initial MLC configurations were generated using simultaneous LPO and VDRO. The effect of changing optimization degrees of freedom was investigated by employing 3 deg., 5 deg. and 10 deg. angular sampling intervals for the two 280 deg., two 150 deg. and single arc plans using LPO and VDRO. The objective function value, a conformity index, a dose homogeneity index, and the mean doses to the OAR and normal tissues were computed and used to evaluate the treatment plans. This study shows that the best optimization strategy for a concave target is to use simultaneous MLC LPO and VDRO. We found that the optimization result is sensitive to the choice of initial MLC aperture shapes, suggesting that an LPO-based IMAT plan may not be able to overcome local minima for this geometry. In conclusion, simultaneous MLC leaf position optimization and VDRO are needed, with the most appropriate initial conditions (MLC positions, arc ranges and number of arcs), for IMAT.
The optimal sampling of outsourcing product
International Nuclear Information System (INIS)
Yang Chao; Pei Jiacheng
2014-01-01
In order to improve quality and reduce cost, c = 0 sampling has been introduced into the inspection of outsourced products. According to the current quality level (p = 0.4%), we determined the optimal sampling plan: Ac = 0; n = 55 for N ≤ 3000; n = 86 for 3001 ≤ N ≤ 10000; and n = 108 for N ≥ 10001. Through analysis of the OC curve, we came to the conclusion that when N ≤ 3000, the protective ability of the optimal sampling plan for product quality is stronger than that of the current sampling plan. Corresponding to the same 'consumer risk', the product quality under the optimal sampling plan is superior to that under the current sampling plan. (authors)
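For a c = 0 plan the operating characteristic (OC) curve has a closed form, which makes the comparison at the stated quality level easy to reproduce.

```python
def accept_prob(n, p):
    # For a zero-acceptance-number plan (Ac = c = 0) the lot is accepted
    # only if all n sampled items pass: P(accept) = (1 - p)^n.
    return (1.0 - p) ** n

p = 0.004  # current quality level from the abstract (0.4%)
for n in (55, 86, 108):
    print(f"n = {n:3d}: P(accept at p = 0.4%) = {accept_prob(n, p):.3f}")
```

Evaluating the same expression over a range of p values traces the full OC curve used in the abstract's protection-level comparison.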
Zakoucka, Eva
2013-01-01
During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well-established in solid-state physics, and it is just recently being introduced for applications in biochemistry and the life sciences. The β-NMR collaboration will be applying for beam time to the INTC committee in September for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme; therefore, sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques for studying Cu and Zn complexes in native conditions, to search for relevant binding candidates for Cu and Zn applicable to β-NMR, and eventually to evaluate selected binding candidates using UV-VIS spectrometry.
Optimal GENCO bidding strategy
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example: Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on Microeconomics analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple period situation. The equilibrium condition using discrete time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed
Directory of Open Access Journals (Sweden)
Yingxin Gu
2016-11-01
Full Text Available Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, with values from 0 to 200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4) and was therefore suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
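The replication experiment described above can be sketched in a few lines. The snippet below is a hypothetical stand-in: it uses a crude quantile-bin "rule" model on synthetic data rather than the study's actual regression tree software, and the parameter grids, data, and names are all our own.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 1000)                    # synthetic predictor (e.g. Landsat-derived)
y = 120 * X + rng.normal(0, 5, 1000)           # synthetic scaled-NDVI target (0-200 range)

def fit_rules(x, t, n_rules):
    """Crude stand-in for a rule-based regression tree: split the predictor
    into n_rules quantile bins and predict the training mean of each bin."""
    edges = np.quantile(x, np.linspace(0, 1, n_rules + 1))
    means = [t[(x >= edges[i]) & (x <= edges[i + 1])].mean() for i in range(n_rules)]
    def predict(xq):
        idx = np.clip(np.searchsorted(edges, xq, side="right") - 1, 0, n_rules - 1)
        return np.array(means)[idx]
    return predict

def mad(a, b):
    """Mean absolute difference between predicted and observed values."""
    return float(np.mean(np.abs(a - b)))

results = {}
for train_frac in (0.6, 0.7, 0.8):             # model development data sizes
    for n_rules in (2, 4, 6, 8):               # rule number constraints
        reps = []
        for seed in range(10):                 # randomized replications
            r = np.random.default_rng(seed)
            idx = r.permutation(X.size)
            cut = int(train_frac * X.size)
            tr, te = idx[:cut], idx[cut:]
            model = fit_rules(X[tr], y[tr], n_rules)
            reps.append(mad(model(X[te]), y[te]))
        results[(train_frac, n_rules)] = (np.mean(reps), np.std(reps))

best = min(results, key=lambda k: results[k][0])   # lowest mean testing MAD
```

Ranking the (data size, rule count) combinations by mean MAD while also inspecting the standard deviation across replications mirrors the accuracy-plus-stability assessment in the abstract.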
spsann - optimization of sample patterns using spatial simulated annealing
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
computationally intensive method. As such, many strategies were used to reduce the computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under the GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimization criteria, and d) allow adding or deleting points to/from an existing point pattern.
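A minimal sketch of spatial simulated annealing in the spirit of spsann: one sample point at a time is jittered to a candidate location, and worsening moves are accepted with a temperature-dependent probability. The optimization criterion used here (mean squared shortest distance from prediction nodes to the sample) is only one of several that spsann supports, and all names and parameters below are our own.

```python
import math, random

random.seed(1)
candidates = [(x / 19, y / 19) for x in range(20) for y in range(20)]  # finite candidate set
grid = [(x / 9, y / 9) for x in range(10) for y in range(10)]          # prediction nodes

def criterion(sample):
    # mean squared shortest distance from each grid node to the sample (MSSD-like)
    return sum(min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in sample)
               for gx, gy in grid) / len(grid)

sample = random.sample(candidates, 8)          # initial sample pattern
energy = criterion(sample)
T = 0.1                                        # initial temperature
for step in range(2000):
    trial = sample.copy()
    trial[random.randrange(len(trial))] = random.choice(candidates)  # perturb one point
    e = criterion(trial)
    # accept improvements always; accept worsenings with probability exp(-dE/T)
    if e < energy or random.random() < math.exp((energy - e) / T):
        sample, energy = trial, e
    T *= 0.998                                 # geometric cooling schedule
```

Restricting perturbations to a finite candidate set, as in point b) above, keeps each move cheap; a full implementation would also update the criterion incrementally rather than recomputing it.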
Long-run savings and investment strategy optimization.
Gerrard, Russell; Guillén, Montserrat; Nielsen, Jens Perch; Pérez-Marín, Ana M
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant absolute rather than constant relative risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on the downside risk-adjusted equivalence used in our illustration.
Long-Run Savings and Investment Strategy Optimization
Directory of Open Access Journals (Sweden)
Russell Gerrard
2014-01-01
Full Text Available We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant absolute rather than constant relative risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on the downside risk-adjusted equivalence used in our illustration.
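The contrast between the two rules can be illustrated with a small Monte Carlo sketch: keeping a constant *amount* in the risky asset (constant absolute risk aversion) versus a constant *proportion* of wealth (constant relative risk aversion). All parameters, the saving schedule, and the return model below are invented for illustration and are not the paper's calibration.

```python
import random

random.seed(42)

def simulate(strategy, years=30, paths=2000, mu=0.05, sigma=0.15, rf=0.01):
    """Simulate terminal wealth; returns (mean, 5th percentile as downside measure)."""
    finals = []
    for _ in range(paths):
        wealth = 100.0
        for _ in range(years):
            wealth += 10.0                          # fixed annual contribution
            # constant amount in the risky asset vs constant proportion of wealth
            risky = min(50.0, wealth) if strategy == "amount" else 0.4 * wealth
            safe = wealth - risky
            wealth = safe * (1 + rf) + risky * (1 + random.gauss(mu, sigma))
        finals.append(wealth)
    finals.sort()
    return sum(finals) / len(finals), finals[len(finals) // 20]

mean_amt, p5_amt = simulate("amount")
mean_prop, p5_prop = simulate("proportion")
```

Comparing the 5th-percentile outcomes alongside the means is a simple proxy for the downside risk-adjusted comparison the abstract refers to.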
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of snail surveys. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method (SOPA) proposed in this study had the minimal absolute error, 0.2138, whereas the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study achieves higher estimation accuracy than the other four methods.
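The idea of allocating more sampling points to high-abundance strata can be sketched with classical Neyman allocation (points proportional to stratum size times within-stratum variability). This is an illustrative stand-in for the Hammond-McCullagh calculation in the abstract, and the stratum sizes and standard deviations below are invented.

```python
N_h = [400, 300, 150, 100, 50]      # hypothetical cells per plant-abundance stratum
S_h = [0.2, 0.5, 0.9, 1.4, 2.0]     # hypothetical within-stratum SD of snail density
n_total = 60                        # total sampling points available

# Neyman allocation: n_h proportional to N_h * S_h
weights = [N * S for N, S in zip(N_h, S_h)]
alloc = [round(n_total * w / sum(weights)) for w in weights]
# denser, more variable strata receive more points per cell than uniform allocation
```

Under this rule the smallest but most variable stratum gets more points per cell than the large homogeneous one, which is the intuition behind using plant abundance as an auxiliary variable.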
Aggregators’ Optimal Bidding Strategy in Sequential Day-Ahead and Intraday Electricity Spot Markets
Directory of Open Access Journals (Sweden)
Xiaolin Ayón
2017-04-01
Full Text Available This paper proposes a probabilistic optimization method that produces optimal bidding curves to be submitted by an aggregator to the day-ahead electricity market and the intraday market, considering the flexible demand of his customers (based on time-dependent resources such as batteries and shiftable demand) and taking into account the possible imbalance costs as well as the uncertainty of forecasts (market prices, demand, and renewable energy sources (RES) generation). The optimization strategy aims to minimize the total cost of the traded energy over a whole day, taking into account the intertemporal constraints. The proposed formulation leads to the solution of different linear optimization problems, following the natural temporal sequence of electricity spot markets. Intertemporal constraints regarding time-dependent resources are fulfilled through a scheduling process performed after the day-ahead market clearing. Each of the different problems is of moderate dimension and requires short computation times. The benefits of the proposed strategy are assessed by comparing the payments made by an aggregator over a sample period of one year following different deterministic and probabilistic strategies. Results show that the probabilistic strategy yields better benefits for aggregators participating in power markets.
Optimized Power Dispatch Strategy for Offshore Wind Farms
DEFF Research Database (Denmark)
Hou, Peng; Hu, Weihao; Zhang, Baohua
2016-01-01
Maximizing the power production of offshore wind farms using a proper control strategy has become an important issue for wind farm operators. However, the power transmitted to the onshore substation (OS) is not only related to the power production of each wind turbine (WT) but also to the power losses, which are related to electrical system topology. This paper proposes an optimized power dispatch strategy (OPD) for minimizing the levelized production cost (LPC) of a wind farm. Particle swarm optimization (PSO) is employed to obtain the final solution for the optimization problem. Both regular-shape and irregular-shape wind farms are chosen for the case study. The proposed dispatch strategy is compared with two other control strategies. The simulation results show the effectiveness of the proposed strategy.
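A bare-bones particle swarm optimization loop of the kind the OPD strategy relies on can be sketched as follows. The quadratic "dispatch cost" and every parameter below are placeholders, not the paper's LPC model; the snippet only shows the PSO velocity/position update mechanics.

```python
import random

random.seed(0)
DIM, N, ITERS = 4, 20, 200                     # decision variables, particles, iterations

def cost(x):
    # hypothetical dispatch cost to minimize; optimum at x_i = 0.7
    return sum((xi - 0.7) ** 2 for xi in x)

pos = [[random.uniform(0, 1) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                    # personal best positions
gbest = min(pbest, key=cost)                   # global best position

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]                       # inertia
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
            pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))  # box constraints
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)
```

In a dispatch setting the box constraints would encode turbine power limits, and `cost` would evaluate the LPC including electrical losses over the collection-system topology.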
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that depend only on i-band absolute magnitude (M_i), or, for a small subset of our sample, on M_i and color (NUV − i). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_i and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M_* ≤ 3 × 10^11 M_⊙ h^−2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an ¹⁸O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the ¹⁸O-reference strategy with a quantitation-and-identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the ¹⁸O-reference strategy had greater accuracy and reliability than other previously used comparison methods based on transferring comparison or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimal Spatial Harvesting Strategy and Symmetry-Breaking
International Nuclear Information System (INIS)
Kurata, Kazuhiro; Shi Junping
2008-01-01
A reaction-diffusion model with logistic growth and constant effort harvesting is considered. By minimizing an intrinsic biological energy function, we obtain an optimal spatial harvesting strategy which will benefit the population the most. The symmetry properties of the optimal strategy are also discussed, and related symmetry preserving and symmetry breaking phenomena are shown with several typical examples of habitats
Optimal reactor strategy for commercializing fast breeder reactors
International Nuclear Information System (INIS)
Yamaji, Kenji; Nagano, Koji
1988-01-01
In this paper, a fuel cycle optimization model developed for analyzing the condition of selecting fast breeder reactors in the optimal reactor strategy is described. By dividing the period of planning, 1966-2055, into nine ten-year periods, the model was formulated as a compact linear programming model. With the model, the best mix of reactor types as well as the optimal timing of reprocessing spent fuel from LWRs to minimize the total cost were found. The results of the analysis are summarized as follows. Fast breeder reactors could be introduced in the optimal strategy when they can economically compete with LWRs with 30 year storage of spent fuel. In order that fast breeder reactors monopolize the new reactor market after the achievement of their technical availability, their capital cost should be less than 0.9 times as much as that of LWRs. When a certain amount of reprocessing commitment is assumed, the condition of employing fast breeder reactors in the optimal strategy is mitigated. In the optimal strategy, reprocessing is done just to meet plutonium demand, and the storage of spent fuel is selected to adjust the mismatch of plutonium production and utilization. The price hike of uranium ore facilitates the commercial adoption of fast breeder reactors. (Kako, I.)
Resolution optimization with irregularly sampled Fourier data
International Nuclear Information System (INIS)
Ferrara, Matthew; Parker, Jason T; Cheney, Margaret
2013-01-01
Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)
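The PSF/weight relationship described above can be illustrated numerically: for irregularly spaced 1-D Fourier samples at frequencies f_k with weights w_k, the point spread function is PSF(x) = Σ_k w_k exp(i 2π f_k x). The sketch below compares uniform weights with simple density-compensation weights (inverse local sample density), a common baseline rather than either of the paper's optimized weightings; all names and data are our own.

```python
import numpy as np

rng = np.random.default_rng(3)
freqs = np.sort(rng.uniform(-10, 10, 64))      # irregular Fourier sample locations
x = np.linspace(-1, 1, 401)                    # image-domain evaluation grid

def psf(weights):
    """Backprojection point spread function for the given data weights."""
    w = weights / weights.sum()                # normalize so that PSF(0) = 1
    return (w[:, None] * np.exp(2j * np.pi * freqs[:, None] * x[None, :])).sum(axis=0)

uniform = np.abs(psf(np.ones_like(freqs)))
gaps = np.gradient(freqs)                      # local spacing ~ inverse sample density
density_comp = np.abs(psf(gaps))
# mainlobe width and sidelobe levels of the two PSFs can now be compared
```

Resolution metrics of the kind optimized in the paper (e.g. derived from the PSF mainlobe, or the Frobenius distance to an ideal delta) are functionals of arrays like `uniform` and `density_comp`.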
Optimal control of anthracnose using mixed strategies.
Fotsa Mbogne, David Jaures; Thron, Christopher
2015-11-01
In this paper we propose and study a spatial diffusion model for the control of anthracnose disease in a bounded domain. The model is a generalization of the one previously developed in [15]. We use the model to simulate two different types of control strategies against anthracnose disease. Strategies that employ chemical fungicides are modeled using a continuous control function; while strategies that rely on cultivational practices (such as pruning and removal of mummified fruits) are modeled with a control function which is discrete in time (though not in space). For comparative purposes, we perform our analyses for a spatially-averaged model as well as the space-dependent diffusion model. Under weak smoothness conditions on parameters we demonstrate the well-posedness of both models by verifying existence and uniqueness of the solution for the growth inhibition rate for given initial conditions. We also show that the set [0, 1] is positively invariant. We first study control by impulsive strategies, then analyze the simultaneous use of mixed continuous and pulse strategies. In each case we specify a cost functional to be minimized, and we demonstrate the existence of optimal control strategies. In the case of pulse-only strategies, we provide explicit algorithms for finding the optimal control strategies for both the spatially-averaged model and the space-dependent model. We verify the algorithms for both models via simulation, and discuss properties of the optimal solutions. Copyright © 2015 Elsevier Inc. All rights reserved.
Noise-dependent optimal strategies for quantum metrology
Huang, Zixin; Macchiavello, Chiara; Maccone, Lorenzo
2018-03-01
For phase estimation using qubits, we show that for some noise channels, the optimal entanglement-assisted strategy depends on the noise level. We note that there is a nontrivial crossover between the parallel-entangled strategy and the ancilla-assisted strategy: in the former the probes are all entangled; in the latter the probes are entangled with a noiseless ancilla but not among themselves. The transition can be explained by the fact that separable states are more robust against noise and therefore are optimal in the high-noise limit, but they are in turn outperformed by ancilla-assisted ones.
Optimal Inspection and Maintenance Strategies for Structural Systems
DEFF Research Database (Denmark)
Sommer, A. M.
The aim of this thesis is to give an overview of conventional and optimal reliability-based inspection and maintenance strategies and to examine, for specific structures, how the cost can be reduced and/or the safety can be improved by using optimal reliability-based inspection strategies. For structures with several almost similar components, it is suggested that individual inspection strategies should be determined for each component or group of components based on the reliability of the actual component. The benefit of this procedure is assessed in connection with the structures considered. Furthermore, in relation to the calculations performed, the intention is to modify an existing program for determination of optimal inspection strategies. The main purpose of inspection and maintenance of structural systems is to prevent or delay damage or deterioration to protect people, environment
Soil sampling strategies: Evaluation of different approaches
Energy Technology Data Exchange (ETDEWEB)
De Zorzi, Paolo [Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy)], E-mail: paolo.dezorzi@apat.it; Barbizzi, Sabrina; Belli, Maria [Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy); Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia [Agenzia Regionale per la Prevenzione e Protezione dell'Ambiente del Veneto, ARPA Veneto, U.O. Centro Qualità Dati, Via Spalato, 14-36045 Vicenza (Italy)
2008-11-15
The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.
Soil sampling strategies: Evaluation of different approaches
International Nuclear Information System (INIS)
De Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia
2008-01-01
The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies
Soil sampling strategies: evaluation of different approaches.
de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia
2008-11-01
The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.
Limited-sampling strategies for anti-infective agents: systematic review.
Sprague, Denise A; Ensom, Mary H H
2009-09-01
Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or
Sampling optimization for printer characterization by direct search.
Bianco, Simone; Schettini, Raimondo
2012-12-01
Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.
Impact of sampling strategy on stream load estimates in till landscape of the Midwest
Vidon, P.; Hubbard, L.E.; Soyeux, E.
2009-01-01
Accurately estimating various solute loads in streams during storms is critical to accurately determining total maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment, so that error on solute load calculations can be taken into account by landscape managers and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
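The sampling-frequency effect discussed above can be illustrated with a toy experiment: a synthetic storm chemograph is subsampled at different intervals, a nearest-sample load estimator is applied, and the results are compared with the "true" load. The hydrograph and concentration shapes below are invented and stand in for any of the three solutes; this is not the study's estimator or data.

```python
import math

dt = 0.1                                            # time step, hours
t = [i * dt for i in range(1000)]                   # ~4-day record
flow = [1 + 5 * math.exp(-((ti - 20) / 6) ** 2) for ti in t]   # storm hydrograph
conc = [2 + 8 * math.exp(-((ti - 18) / 5) ** 2) for ti in t]   # solute chemograph (mg/L)

true_load = sum(f * c * dt for f, c in zip(flow, conc))        # "continuous" load

def estimated_load(sample_every):
    """Load estimate using the nearest sparse concentration sample at each
    flow measurement (a crude but common style of load-estimation scheme)."""
    idx = list(range(0, len(t), sample_every))
    est = 0.0
    for i, fi in enumerate(flow):
        j = min(range(len(idx)), key=lambda k: abs(idx[k] - i))
        est += fi * conc[idx[j]] * dt
    return est

errors = {h: abs(estimated_load(h) - true_load) / true_load
          for h in (10, 50, 200)}                   # 1 h, 5 h, and 20 h intervals
# denser concentration sampling gives a smaller relative error on the storm load
```

Because the concentration peak leads the flow peak in this toy example, sparse sampling also biases the estimate, echoing the systematic over- and underestimation reported in the abstract.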
Optimal Advance Selling Strategy under Price Commitment
Chenhang Zeng
2012-01-01
This paper considers a two-period model with experienced consumers and inexperienced consumers. The retailer determines both the advance selling price and the regular selling price at the beginning of the first period. I show that advance selling weakly dominates no advance selling, and the optimal advance selling price may be at a discount, at a premium, or at the regular selling price. To help the retailer choose the optimal pricing strategy, conditions for each possible advance selling strategy to ...
Adaptive sampling strategies with high-throughput molecular dynamics
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
Tank Waste Remediation System optimized processing strategy
International Nuclear Information System (INIS)
Slaathaug, E.J.; Boldt, A.L.; Boomer, K.D.; Galbraith, J.D.; Leach, C.E.; Waldo, T.L.
1996-03-01
This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility
Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population's mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) 0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of
Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B
2018-04-01
Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%, whereas only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance in Medicine.
Optimal Strategy and Business Models
DEFF Research Database (Denmark)
Johnson, Peter; Foss, Nicolai Juul
2016-01-01
This study picks up on earlier suggestions that control theory may further the study of strategy. Strategy can be formally interpreted as an idealized path optimizing heterogeneous resource deployment to produce maximum financial gain. Using standard matrix methods to describe the firm Hamiltonian ... variable of firm path, suggesting in turn that the firm's business model is the codification of the application of investment resources used to control the strategic path of value realization.
Optimal Deterministic Investment Strategies for Insurers
Directory of Open Access Journals (Sweden)
Ulrich Rieder
2013-11-01
Full Text Available We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.
Optimization Under Uncertainty for Wake Steering Strategies: Preprint
Energy Technology Data Exchange (ETDEWEB)
Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Annoni, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fleming, Paul A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ning, Andrew [Brigham Young University
2017-05-01
Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for the performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
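The OUU idea above can be sketched numerically: instead of maximizing nominal power at a commanded yaw angle, maximize the expected power over a Gaussian yaw-realization error. The two-turbine power model below is an invented toy, not the NREL model from the preprint; only the optimize-the-expectation structure is the point.

```python
import numpy as np

def farm_power(yaw_deg):
    """Toy total power: upstream cosine loss plus downstream wake-deflection gain."""
    g = np.deg2rad(yaw_deg)
    upstream = np.cos(g) ** 3
    downstream = 0.5 + 0.4 * (1.0 - np.exp(-(yaw_deg / 15.0) ** 2))
    return upstream + downstream

def expected_power(yaw_deg, sigma=5.0):
    """Average power over realized yaw = commanded yaw + Gaussian error (quadrature)."""
    errs = np.linspace(-3 * sigma, 3 * sigma, 61)
    w = np.exp(-0.5 * (errs / sigma) ** 2)
    w /= w.sum()
    return float(np.sum(w * farm_power(yaw_deg + errs)))

grid = np.linspace(0.0, 40.0, 401)
det_yaw = grid[np.argmax([farm_power(y) for y in grid])]       # deterministic optimum
ouu_yaw = grid[np.argmax([expected_power(y) for y in grid])]   # OUU optimum
```

By construction, the OUU yaw angle can only do better than the deterministic one once uncertainty is averaged in, which mirrors the preprint's qualitative finding.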
Switching strategies to optimize search
International Nuclear Information System (INIS)
Shlesinger, Michael F
2016-01-01
Search strategies are explored when the search time is fixed, success is probabilistic and the estimate for success can diminish with time if there is not a successful result. Under the time constraint the problem is to find the optimal time to switch a search strategy or search location. Several variables are taken into account, including cost, gain, rate of success if a target is present and the probability that a target is present. (paper: interdisciplinary statistical mechanics)
Optimal sampling designs for large-scale fishery sample surveys in Greece
Directory of Open Access Journals (Sweden)
G. BAZIGOS
2007-12-01
The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).
Developing an Integrated Design Strategy for Chip Layout Optimization
Wits, Wessel Willems; Jauregui Becker, Juan Manuel; van Vliet, Frank Edward; te Riele, G.J.
2011-01-01
This paper presents an integrated design strategy for chip layout optimization. The strategy couples both electric and thermal aspects during the conceptual design phase to improve chip performance, thermal management being one of the major topics. The layout of the chip circuitry is optimized
Energy Technology Data Exchange (ETDEWEB)
Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)
2015-03-01
Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.
Spent nuclear fuel sampling strategy
International Nuclear Information System (INIS)
Bergmann, D.W.
1995-01-01
This report proposes a strategy for sampling the spent nuclear fuel (SNF) stored in the 105-K Basins (105-K East and 105-K West). This strategy will support decisions concerning the path forward for SNF disposition efforts in the following areas: (1) SNF isolation activities such as repackaging/overpacking to a newly constructed staging facility; (2) conditioning processes for fuel stabilization; and (3) interim storage options. This strategy was developed without following the Data Quality Objective (DQO) methodology. It is, however, intended to augment the SNF project DQOs. The SNF sampling is derived by evaluating the current storage condition of the SNF and the factors that affected SNF corrosion/degradation
Optimal Stochastic Advertising Strategies for the U.S. Beef Industry
Kun C. Lee; Stanley Schraufnagel; Earl O. Heady
1982-01-01
An important decision variable in the promotional strategy for the beef sector is the optimal level of advertising expenditures over time. Optimal stochastic and deterministic advertising expenditures are derived for the U.S. beef industry for the period 1966 through 1980. They are compared with historical levels, and gains realized by optimal advertising strategies are measured. Finally, the optimal advertising expenditures in the future are forecasted.
Emergency strategy optimization for the environmental control system in manned spacecraft
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for a manned environmental control system (ECS) to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS in an insufficient power supply condition. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Some adjustable key variables are chosen as the optimization variables, which finally represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization processes are conducted at four different carbon dioxide partial pressure control levels. The study results show that the Pareto-optimal frontiers obtained from this multi-objective optimization can represent the relationship between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations when there is suddenly insufficient power.
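The non-dominated sorting at the heart of NSGA-II, as applied above to the (lifetime, power) trade-off, reduces to a Pareto filter. A minimal sketch follows; the candidate strategy scores are made up for illustration.

```python
def pareto_front(points):
    """Return the non-dominated points among (lifetime, power) pairs.

    A point dominates another if its lifetime is >= and its power consumption
    is <=, with at least one strict inequality (maximize lifetime, minimize power).
    """
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical candidate strategies: (lifetime in hours, power in kW).
candidates = [(100, 5.0), (120, 6.0), (90, 4.0), (110, 6.5), (120, 5.5)]
front = pareto_front(candidates)  # the Pareto-optimal frontier
```

A decision-making step such as the relative objective proximity mentioned above would then rank the members of `front`.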
Optimization strategies for complex engineering applications
Energy Technology Data Exchange (ETDEWEB)
Eldred, M.S.
1998-02-01
LDRD research activities have focused on increasing the robustness and efficiency of optimization studies for computationally complex engineering problems. Engineering applications can be characterized by extreme computational expense, lack of gradient information, discrete parameters, non-converging simulations, and nonsmooth, multimodal, and discontinuous response variations. Guided by these challenges, the LDRD research activities have developed application-specific techniques, fundamental optimization algorithms, multilevel hybrid and sequential approximate optimization strategies, parallel processing approaches, and automatic differentiation and adjoint augmentation methods. This report surveys these activities and summarizes the key findings and recommendations.
Energy Technology Data Exchange (ETDEWEB)
Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
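A single EnKF analysis step, the building block that the SEOD method wraps with an information-theoretic design loop, can be sketched for a scalar parameter with a direct observation. This perturbed-observation form is a generic textbook variant, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_var):
    """Perturbed-observation EnKF analysis for a scalar, directly observed state."""
    prior_var = np.var(ensemble, ddof=1)
    gain = prior_var / (prior_var + obs_var)   # Kalman gain, lies in (0, 1)
    # Each member assimilates an independently perturbed copy of the observation.
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.size)
    return ensemble + gain * (perturbed - ensemble), gain

prior = rng.normal(1.0, 0.5, size=200)                 # prior parameter ensemble
posterior, gain = enkf_update(prior, obs=2.0, obs_var=0.1)
```

The sampling-design question the abstract addresses is *which* observations to collect so that updates like this one are maximally informative.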
Multi-Objective Optimization of Start-up Strategy for Pumped Storage Units
Directory of Open Access Journals (Sweden)
Jinjiao Hou
2018-05-01
Full Text Available This paper proposes a multi-objective optimization method for the start-up strategy of pumped storage units (PSU for the first time. In the multi-objective optimization method, the speed rise time and the overshoot during the process of the start-up are taken as the objectives. A precise simulation platform is built for simulating the transient process of start-up, and for calculating the objectives based on the process. The Multi-objective Particle Swarm Optimization algorithm (MOPSO is adopted to optimize the widely applied start-up strategies based on one-stage direct guide vane control (DGVC, and two-stage DGVC. Based on the Pareto Front obtained, a multi-objective decision-making method based on the relative objective proximity is used to sort the solutions in the Pareto Front. Start-up strategy optimization for a PSU of a pumped storage power station in Jiangxi Province in China is conducted in experiments. The results show that: (1) compared with the single-objective optimization, the proposed multi-objective optimization of the start-up strategy not only greatly reduces the speed rise time and the speed overshoot, but also makes the speed curve stabilize quickly; (2) multi-objective optimization of the strategy based on two-stage DGVC achieves a better solution for a quick and smooth start-up of the PSU than that of the strategy based on one-stage DGVC.
Optimal sampling schemes applied in geology
CSIR Research Space (South Africa)
Debba, Pravesh
2010-05-01
Full Text Available Presentation outline: introduction to hyperspectral remote sensing; objective of study 1; study area; data used; methodology; results; background and research question for study 2; study area and data; methodology; results; conclusions. (Debba, CSIR: Optimal Sampling Schemes Applied in Geology, UP 2010.)
Mixed integer evolution strategies for parameter optimization.
Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C
2013-01-01
Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms gleaned from biological evolution theory. They have been successfully applied to a wide range of real world applications. The modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also with nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems.
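The mixed-type mutation operators described above can be sketched per variable class: a bounded Gaussian step for continuous variables, a symmetric integer step built from the difference of two geometric draws, and a random reset for nominal variables. This is a simplified illustration of the MIES operator design, not the article's exact operators, and all parameter names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutate_continuous(x, sigma, lo, hi):
    """Gaussian mutation, clipped to the feasible interval."""
    return float(np.clip(x + rng.normal(0.0, sigma), lo, hi))

def mutate_integer(z, p, lo, hi):
    """Difference of two geometric draws gives a symmetric integer step."""
    step = int(rng.geometric(p) - rng.geometric(p))
    return int(np.clip(z + step, lo, hi))

def mutate_nominal(v, choices, rate):
    """With probability `rate`, reset to a different admissible value."""
    if rng.random() < rate:
        others = [c for c in choices if c != v]
        return others[int(rng.integers(len(others)))]
    return v

# One mixed-integer individual: a continuous, an integer, and a nominal gene.
mutated = {
    "c": mutate_continuous(0.5, sigma=0.1, lo=0.0, hi=1.0),
    "z": mutate_integer(3, p=0.5, lo=0, hi=10),
    "n": mutate_nominal("relu", ["relu", "tanh", "sigmoid"], rate=0.3),
}
```

In a full MIES, `sigma`, `p`, and `rate` would themselves be mutated (self-adaptation) rather than fixed.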
Provencher, Véronique; Desrosiers, Johanne; Demers, Louise; Carmichael, Pierre-Hugues
2016-01-01
This study aimed to (1) determine the categories of behavioral coping strategies most strongly correlated with optimal seniors' social participation in different activity and role domains and (2) identify the demographic, health and environmental factors associated with the use of these coping strategies optimizing social participation. The sample consisted of 350 randomly recruited community-dwelling older adults (≥65 years). Coping strategies and social participation were measured, respectively, using the Inventory of Coping Strategies Used by the Elderly and Assessment of Life Habits questionnaires. Information about demographic, health and environmental factors was also collected during the interview. Regression analyses showed a strong relationship between the use of cooking- and transportation-related coping strategies and optimal participation in the domains of nutrition and community life, respectively. Older age and living alone were associated with increased use of cooking-related strategies, while good self-rated health and not living in a seniors' residence were correlated with greater use of transportation-related strategies. Our study helped to identify useful behavioral coping strategies that should be incorporated in disability prevention programs designed to promote community-dwelling seniors' social participation. However, the appropriateness of these strategies depends on whether they are used in relevant contexts and tailored to specific needs. Our results support the relevance of including behavioral coping strategies related to cooking and transportation in disability prevention programs designed to promote community-dwelling seniors' social participation in the domains of nutrition and community life, respectively. Older age and living alone were associated with increased use of cooking-related strategies, while good self-rated health and not living in a seniors' residence were correlated with greater use of transportation-related strategies.
A strategy for optimizing item-pool management
Ariel, A.; van der Linden, Willem J.; Veldkamp, Bernard P.
2006-01-01
Item-pool management requires a balancing act between the input of new items into the pool and the output of tests assembled from it. A strategy for optimizing item-pool management is presented that is based on the idea of a periodic update of an optimal blueprint for the item pool to tune item
Optimal knockout strategies in genome-scale metabolic networks using particle swarm optimization.
Nair, Govind; Jungreuthmayer, Christian; Zanghellini, Jürgen
2017-02-01
Knockout strategies, particularly the concept of constrained minimal cut sets (cMCSs), are an important part of the arsenal of tools used in manipulating metabolic networks. Given a specific design, cMCSs can be calculated even in genome-scale networks. We would however like to find not only the optimal intervention strategy for a given design but the best possible design too. Our solution (PSOMCS) is to use particle swarm optimization (PSO) along with the direct calculation of cMCSs from the stoichiometric matrix to obtain optimal designs satisfying multiple objectives. To illustrate the working of PSOMCS, we apply it to a toy network. Next we show its superiority by comparing its performance against other comparable methods on a medium sized E. coli core metabolic network. PSOMCS not only finds solutions comparable to previously published results but also it is orders of magnitude faster. Finally, we use PSOMCS to predict knockouts satisfying multiple objectives in a genome-scale metabolic model of E. coli and compare it with OptKnock and RobustKnock. PSOMCS finds competitive knockout strategies and designs compared to other current methods and is in some cases significantly faster. It can be used in identifying knockouts which will force optimal desired behaviors in large and genome scale metabolic networks. It will be even more useful as larger metabolic models of industrially relevant organisms become available.
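The PSO layer of an approach like PSOMCS can be illustrated with a bare-bones swarm minimizing a test function; in PSOMCS the fitness evaluation would instead score a candidate design via cMCS calculation. The inertia and acceleration constants below are generic textbook values, not those of the published implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def sphere(x):
    """Stand-in fitness function (minimize)."""
    return float(np.sum(x ** 2))

def pso(n_particles=20, n_iter=50, dim=2, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
    g = int(np.argmin(pbest_val))
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    history = [gbest_val]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity: inertia + cognitive pull (pbest) + social pull (gbest).
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([sphere(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = int(np.argmin(pbest_val))
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
        history.append(gbest_val)
    return gbest, gbest_val, history

best, best_val, history = pso()
```

Because the global best is only ever replaced by a strictly better value, the best-so-far trace is non-increasing.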
Gradient Material Strategies for Hydrogel Optimization in Tissue Engineering Applications
2018-01-01
Although a number of combinatorial/high-throughput approaches have been developed for biomaterial hydrogel optimization, a gradient sample approach is particularly well suited to identify hydrogel property thresholds that alter cellular behavior in response to interacting with the hydrogel due to reduced variation in material preparation and the ability to screen biological response over a range instead of discrete samples each containing only one condition. This review highlights recent work on cell–hydrogel interactions using a gradient material sample approach. Fabrication strategies for composition, material and mechanical property, and bioactive signaling gradient hydrogels that can be used to examine cell–hydrogel interactions will be discussed. The effects of gradients in hydrogel samples on cellular adhesion, migration, proliferation, and differentiation will then be examined, providing an assessment of the current state of the field and the potential of wider use of the gradient sample approach to accelerate our understanding of matrices on cellular behavior. PMID:29485612
Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time
Daheng Peng; Fang Zhang
2017-01-01
In this paper, Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit forms.
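The role the Lagrange multiplier plays above has a familiar one-period analogue: minimize portfolio variance w'Σw subject to a target mean w'μ = m and the budget w'1 = 1, solved in closed form via two multipliers. The asset numbers below are purely illustrative, and this static sketch is not the paper's continuous-time BSDE solution.

```python
import numpy as np

mu = np.array([0.05, 0.08, 0.12])            # expected returns (illustrative)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])       # covariance matrix (illustrative)
m = 0.09                                     # target mean return

ones = np.ones(3)
Si = np.linalg.inv(Sigma)
A = ones @ Si @ ones                          # scalar frontier constants
B = ones @ Si @ mu
C = mu @ Si @ mu
D = A * C - B * B
lam = (C - B * m) / D                         # multiplier on the budget constraint
gam = (A * m - B) / D                         # multiplier on the mean constraint
w = Si @ (lam * ones + gam * mu)              # optimal (frontier) weights
```

Sweeping `m` traces out the efficient frontier, the static counterpart of the "effective frontier in explicit forms" mentioned in the abstract.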
Optimal generator bidding strategies for power and ancillary services
Morinec, Allen G.
As the electric power industry transitions to a deregulated market, power transactions are made based on price rather than cost. Generator companies are interested in maximizing their profits rather than overall system efficiency. A method to equitably compensate generation providers for real power, and ancillary services such as reactive power and spinning reserve, will ensure a competitive market with an adequate number of suppliers. Optimizing the generation product mix during bidding is necessary to maximize a generator company's profits. The objective of this research work is to determine and formulate appropriate optimal bidding strategies for a generation company in both the energy and ancillary services markets. These strategies should incorporate the capability curves of their generators as constraints to define the optimal product mix and price offered in the day-ahead and real-time spot markets. In order to achieve such a goal, a two-player model was composed to simulate market auctions for power generation. A dynamic game methodology was developed to identify Nash Equilibria and Mixed-Strategy Nash Equilibria solutions as optimal generation bidding strategies for two-player non-cooperative variable-sum matrix games with incomplete information. These games integrated the generation product mix of real power, reactive power, and spinning reserve with the generators' capability curves as constraints. The research includes simulations of market auctions, where strategies were tested for generators with different unit constraints, costs, types of competitors, strategies, and demand levels. Studies on the capability of large hydrogen-cooled synchronous generators were utilized to derive useful equations that define the exact shape of the capability curve from the intersections of the arcs defined by the centers and radial vectors of the rotor, stator, and steady-state stability limits. The available reactive reserve and spinning reserve were calculated given a
User-driven sampling strategies in image exploitation
Harvey, Neal; Porter, Reid
2013-12-01
Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.
Intelligent fault recognition strategy based on adaptive optimized multiple centers
Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong
2018-06-01
For recognition methods based on a single optimized center, one important issue is that data with a nonlinear separatrix cannot be recognized accurately. To solve this problem, a novel recognition strategy based on adaptively optimized multiple centers is proposed in this paper. This strategy recognizes data sets with a nonlinear separatrix using multiple centers. Meanwhile, priority levels are introduced into the multi-objective optimization, covering recognition accuracy, the quantity of optimized centers, and the distance relationship. According to the characteristics of the data, the priority levels are adjusted to control the quantity of optimized centers adaptively while preserving the original accuracy. The proposed method is compared with other methods, including the support vector machine (SVM), neural networks, and the Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability on data with different distribution characteristics.
Optimal fuel inventory strategies
International Nuclear Information System (INIS)
Caspary, P.J.; Hollibaugh, J.B.; Licklider, P.L.; Patel, K.P.
1990-01-01
In an effort to maintain their competitive edge, most utilities are reevaluating many of their conventional practices and policies in an effort to further minimize customer revenue requirements without sacrificing system reliability. Over the past several years, Illinois Power has been rethinking its traditional fuel inventory strategies, recognizing that coal supplies are competitive and plentiful and that carrying charges on inventory are expensive. To help the Company achieve one of its strategic corporate goals, an optimal fuel inventory study was performed for its five major coal-fired generating stations. The purpose of this paper is to briefly describe Illinois Power's system and past practices concerning coal inventories, highlight the analytical process behind the optimal fuel inventory study, and discuss some of the recent experiences affecting coal deliveries and economic dispatch
International Nuclear Information System (INIS)
Wang, Xinli; Cai, Wenjian; Lu, Jiangang; Sun, Youxian; Zhao, Lei
2015-01-01
This study presents a model-based optimization strategy for an actual chiller-driven dehumidifier of a liquid desiccant dehumidification system operating with lithium chloride solution. By analyzing the characteristics of the components, energy predictive models for the components in the dehumidifier are developed. To minimize the energy usage while maintaining the outlet air conditions at the pre-specified set-points, an optimization problem is formulated with an objective function and the constraints of mechanical limitations and component interactions. A model-based optimization strategy using a genetic algorithm is proposed to obtain the optimal set-points for desiccant solution temperature and flow rate, to minimize the energy usage in the dehumidifier. Experimental studies on an actual system are carried out to compare energy consumption between the proposed optimization and the conventional strategies. The results demonstrate that energy consumption using the proposed optimization strategy can be reduced by 12.2% in dehumidifier operation. - Highlights: • Presents a model-based optimization strategy for energy saving in LDDS. • Energy predictive models for components in the dehumidifier are developed. • The optimization strategy is applied and tested in an actual LDDS. • The optimization strategy achieves energy savings of about 12% during operation
Growth or reproduction: emergence of an evolutionary optimal strategy
International Nuclear Information System (INIS)
Grilli, J; Suweis, S; Maritan, A
2013-01-01
Modern ecology has re-emphasized the need for a quantitative understanding of the original ‘survival of the fittest theme’ based on analysis of the intricate trade-offs between competing evolutionary strategies that characterize the evolution of life. This is key to the understanding of species coexistence and ecosystem diversity under the omnipresent constraint of limited resources. In this work we propose an agent-based model replicating a community of interacting individuals, e.g. plants in a forest, where all are competing for the same finite amount of resources and each competitor is characterized by a specific growth–reproduction strategy. We show that such evolutionary dynamics drive the system towards a stationary state characterized by an emergent optimal strategy, which in turn depends on the amount of available resources the ecosystem can rely on. We find that the share of resources used by individuals is power-law distributed with an exponent directly related to the optimal strategy. The model can be further generalized to devise optimal strategies in interacting social and economic systems. (paper)
Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time
Directory of Open Access Journals (Sweden)
Daheng Peng
2017-10-01
Full Text Available In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered jointly in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.
Evaluation of sampling strategies to estimate crown biomass
Directory of Open Access Journals (Sweden)
Krishna P Poudel
2015-01-01
Full Text Available Background Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for several purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus, the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and the effect of sample size on those estimates. Methods Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but it further decreased RMSE when information on branch diameter was used in the design and estimation phases. Conclusions Use of
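The probability-proportional-to-size idea above can be illustrated with a toy sketch. The following snippet (illustrative only: the branch diameters, the use of squared diameter as the size measure, and all numbers are hypothetical, not the study's data) uses the Hansen-Hurwitz estimator to estimate total crown biomass from a few branches drawn with probability proportional to size:

```python
import random

def hansen_hurwitz_total(branches, sizes, n):
    """Estimate total biomass via PPS sampling (Hansen-Hurwitz):
    draw n branches with replacement, with selection probability
    proportional to the auxiliary size measure, and average y_i/p_i."""
    total_size = sum(sizes)
    probs = [s / total_size for s in sizes]
    picks = random.choices(range(len(branches)), weights=probs, k=n)
    return sum(branches[i] / probs[i] for i in picks) / n

random.seed(1)
# hypothetical branches: biomass roughly proportional to diameter^2
diam = [2, 3, 4, 5, 6, 8, 10, 12]
biomass = [0.9 * d ** 2 + random.gauss(0, 2) for d in diam]
est = hansen_hurwitz_total(biomass, [d ** 2 for d in diam], n=3)
print("estimate:", round(est, 1), "true total:", round(sum(biomass), 1))
```

Because biomass here is nearly proportional to the size measure, the ratios y_i/p_i are almost constant, so even three sampled branches give a low-variance, unbiased estimate of the total.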
Optimal intermittent search strategies
International Nuclear Information System (INIS)
Rojo, F; Budde, C E; Wio, H S
2009-01-01
We study the search kinetics of a single fixed target by a set of searchers performing an intermittent random walk, jumping between different internal states. Exploiting concepts of multi-state and continuous-time random walks we have calculated the survival probability of a target up to time t, and have 'optimized' (minimized) it with regard to the transition probability among internal states. Our model shows that intermittent strategies always improve target detection, even for simple diffusion states of motion.
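A toy Monte Carlo illustration of intermittent search (not the authors' exact multi-state model): a single searcher on a ring alternates between a slow diffusive state that can detect the target and a fast ballistic state that cannot, and the target's survival probability up to time t is estimated by simulation. The ring size, jump length, and switching rule are assumptions for illustration:

```python
import random

def survival_prob(gamma, t_max, L=50, reps=2000, seed=0):
    """Monte Carlo estimate of the target survival probability S(t_max)
    on a ring of L sites. State 0: diffusive step of +-1, detects the
    target on arrival. State 1: ballistic jump of 5 sites, no detection.
    gamma: per-step probability of switching internal state."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(reps):
        x, state, target = 0, 0, L // 2
        found = False
        for _ in range(t_max):
            if rng.random() < gamma:
                state = 1 - state
            x = (x + (rng.choice((-1, 1)) if state == 0 else 5)) % L
            if state == 0 and x == target:
                found = True
                break
        survived += not found
    return survived / reps

# scan the switching rate: gamma = 0 is pure diffusion
for g in (0.0, 0.1, 0.5):
    print(g, survival_prob(g, t_max=200))
```

Scanning gamma this way mimics the paper's optimization over the transition probability: one looks for the switching rate that minimizes the target's survival probability.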
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, little of this work considers the issue of optimal sampling-time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach that guides the selection of time points in an optimal way so as to minimize the variance of the parameter estimates. In this method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, as usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
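Why sampling-time selection matters can be seen already in a one-parameter example. The sketch below (a simplified illustration, not the paper's quantum-inspired algorithm) computes the Fisher information for the decay rate k in the model y(t) = exp(-k t) under i.i.d. Gaussian noise; by the Cramér-Rao bound, larger information means a smaller achievable variance of the estimate, and times concentrated near t = 1/k beat evenly spaced times:

```python
import math

def fisher_info(times, k, sigma=0.05):
    """Fisher information for the decay rate k in y(t) = exp(-k t)
    observed with i.i.d. Gaussian noise of variance sigma^2.
    The sensitivity dy/dk = -t * exp(-k t) peaks at t = 1/k."""
    return sum((t * math.exp(-k * t)) ** 2 for t in times) / sigma ** 2

k = 0.5
uniform = [1, 2, 3, 4, 5, 6]     # evenly spaced, heuristic design
near_opt = [2.0] * 6             # all points near t = 1/k = 2
print(fisher_info(uniform, k), fisher_info(near_opt, k))
```

In practice one cannot place all samples at a single time (multi-parameter models need spread-out designs), but the example shows how the variance of an estimate depends strongly on where in time the samples sit.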
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
as well as seed chipping and tissue sampling approaches to facilitate genotyping. With selfing approaches, two generations of selfing rather than one for trait fixation (i.e. 'F2 enrichment' as per Bonnett et al. in Strategies for efficient implementation of molecular markers in wheat breeding. Mol Breed 15:75-85, 2005) were utilized to eliminate bottlenecking due to extremely low frequencies of desired genotypes in the population. The efficiency indicators such as total number of plants grown across generations, total number of marker data points, total number of generations, number of seeds sampled by seed chipping, number of plants requiring tissue sampling, and number of pollinations (i.e. selfing and crossing) were considered in comparisons of breeding strategies. A breeding strategy involving seed chipping and a two-generation selfing approach (SC + SELF) was determined to be the most efficient breeding strategy in terms of time to market and resource requirements. Doubled haploidy may have limited utility in trait fixation for MTI under the defined breeding scenario. This outcome paves the way for optimizing the last step in the MTI process, version testing, which involves hybridization of female and male RP conversions to create versions of the converted hybrid for performance evaluation and possible commercial release.
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be handled with appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
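A minimal sketch of the parametric idea, under toy assumptions that are not from the paper (an exponential trade schedule governed by a single scalar kappa, quadratic temporary impact, Gaussian price increments): simulate execution costs by Monte Carlo, then pick the parameter minimizing a mean-plus-CVaR objective over a small grid:

```python
import math
import random
import statistics

def exec_costs(kappa, X=1000.0, T=10, sigma=0.5, eta=0.01, reps=4000, seed=0):
    """Simulate costs of selling X shares over T steps (toy model).
    kappa parametrizes the schedule: trade sizes decay like exp(-kappa*t).
    Cost = quadratic temporary impact + shortfall vs the arrival price."""
    rng = random.Random(seed)
    w = [math.exp(-kappa * t) for t in range(T)]
    trades = [X * wi / sum(w) for wi in w]
    costs = []
    for _ in range(reps):
        price_dev, cost = 0.0, 0.0
        for q in trades:
            cost += eta * q * q        # temporary market impact
            cost -= price_dev * q      # gain/loss vs arrival price
            price_dev += rng.gauss(0, sigma)
        costs.append(cost)
    return costs

def cvar(xs, alpha=0.95):
    """Conditional Value-at-Risk: mean of the worst (1 - alpha) tail."""
    xs = sorted(xs)
    tail = xs[int(alpha * len(xs)):]
    return sum(tail) / len(tail)

best_k, best_obj = None, float("inf")
for k in (0.0, 0.2, 0.5, 1.0):
    c = exec_costs(k)
    obj = statistics.mean(c) + 0.5 * cvar(c)   # risk-adjusted objective
    if obj < best_obj:
        best_k, best_obj = k, obj
print("best kappa:", best_k)
```

The grid search over one scalar stands in for the static optimization over parametric coefficients; the trade-off it exposes is the standard one between impact (fast execution) and price risk (slow execution).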
Optimal intermittent search strategies
Energy Technology Data Exchange (ETDEWEB)
Rojo, F; Budde, C E [FaMAF, Universidad Nacional de Cordoba, Ciudad Universitaria, X5000HUA Cordoba (Argentina); Wio, H S [Instituto de Fisica de Cantabria, Universidad de Cantabria and CSIC E-39005 Santander (Spain)
2009-03-27
We study the search kinetics of a single fixed target by a set of searchers performing an intermittent random walk, jumping between different internal states. Exploiting concepts of multi-state and continuous-time random walks we have calculated the survival probability of a target up to time t, and have 'optimized' (minimized) it with regard to the transition probability among internal states. Our model shows that intermittent strategies always improve target detection, even for simple diffusion states of motion.
Dispositional optimism and coping strategies in patients with a kidney transplant.
Costa-Requena, Gemma; Cantarell-Aixendri, M Carmen; Parramon-Puig, Gemma; Serón-Micas, Daniel
2014-01-01
Dispositional optimism is a personal resource that determines the coping style and adaptive response to chronic diseases. The aim of this study was to assess the correlations between dispositional optimism and coping strategies in patients with recent kidney transplantation and evaluate the differences in the use of coping strategies in accordance with the level of dispositional optimism. Patients who were hospitalised in the nephrology department were selected consecutively after kidney transplantation was performed. The evaluation instruments were the Life Orientation Test-Revised and the Coping Strategies Inventory. The data were analysed with measures of central tendency and correlation analyses, and means were compared using Student's t-test. 66 patients with a kidney transplant participated in the study. The coping styles that characterised patients with a recent kidney transplantation were Social withdrawal and Problem avoidance. Correlations between dispositional optimism and coping strategies were significantly positive for Problem-solving (p<.05) and Cognitive restructuring (p<.01), and significantly negative for Self-criticism (p<.05). Differences in dispositional optimism created significant differences in the Self-criticism dimension (t=2.58; p<.01). Dispositional optimism scores provide differences in coping responses after kidney transplantation. Moreover, coping strategies may influence the patient's perception of emotional wellbeing after kidney transplantation.
Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A
2006-06-01
When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
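The one-way random effects result can be checked numerically. Assuming the standard model in which the variance of the estimated group mean with k workers and n replicates each is sigma_B^2/k + sigma_W^2/(k*n), the sketch below compares allocations under a fixed measurement budget; the variance components are arbitrary illustrative values, not data from the reinforced plastics example:

```python
def var_mean(sigma_b2, sigma_w2, k, n):
    """Variance of the estimated group mean under a one-way random
    effects model: the between-worker component shrinks only with the
    number of workers k, the within-worker component with k * n."""
    return sigma_b2 / k + sigma_w2 / (k * n)

budget = 24  # total number of measurements, k * n
for k in (2, 3, 4, 6, 8, 12):
    n = budget // k
    print(f"k={k:2d} workers, n={n:2d} replicates:",
          round(var_mean(1.0, 2.0, k, n), 4))
```

Even with the within-worker variance set twice as large as the between-worker variance, the smallest variance comes from spreading the budget over as many workers as possible, consistent with the paper's recommendation of two to three replicates per worker.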
Optimal energy management strategy for battery powered electric vehicles
International Nuclear Information System (INIS)
Xi, Jiaqi; Li, Mian; Xu, Min
2014-01-01
Highlights: • The power usage of batteries in battery-powered electric vehicles with in-wheel motors is maximized. • The battery and motor dynamics are examined with emphasis on power conversion and utilization. • The optimal control strategy is derived and verified by simulations. • An analytic expression of the optimal operating point is obtained. - Abstract: Due to the limited energy density of batteries, energy management has always played a critical role in improving the overall energy efficiency of electric vehicles. In this paper, a key issue within the energy management problem is carefully tackled: maximizing the power usage of batteries for battery-powered electric vehicles with in-wheel motors. To this end, the battery and motor dynamics are thoroughly examined with particular emphasis on power conversion and power utilization. The optimal control strategy is then derived based on this analysis. One significant contribution of this work is that an analytic expression for the optimal operating point in terms of the component and environment parameters is obtained. Owing to this finding, the derived control strategy also has a simple structure suited to real-time implementation. Simulation results demonstrate that the proposed strategy works both adaptively and robustly under different driving scenarios.
Improved quantum-behaved particle swarm optimization with local search strategy
Directory of Open Access Journals (Sweden)
Maolong Xi
2017-03-01
Full Text Available Quantum-behaved particle swarm optimization, which was motivated by analysis of particle swarm optimization and quantum systems, has shown competitive performance in finding optimal solutions for many optimization problems compared to other evolutionary algorithms. To address the problem of premature convergence, a local search strategy is proposed to improve the performance of quantum-behaved particle swarm optimization. In the proposed local search strategy, a 'super particle' is constructed by assembling dimension information from randomly selected particles in the swarm. Particles are selected with probabilities determined by their fitness values; for minimization problems, a particle with a smaller fitness value has a higher selection probability and thus contributes more information to the super particle. In addition, in order to investigate the influence of different local search spaces on algorithm performance, four methods of computing the local search radius are applied in the local search strategy, yielding four variants of local-search quantum-behaved particle swarm optimization. Empirical studies on a suite of well-known benchmark functions are undertaken in order to make an overall performance comparison between the proposed methods and other quantum-behaved particle swarm optimization algorithms. The simulation results show that the proposed quantum-behaved particle swarm optimization variants have clear advantages over the original algorithm.
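The super-particle construction might be sketched as follows. This is one interpretation of the description above, with inverse-fitness selection weights assumed for minimizing a positive-valued objective; the sphere function, swarm size, and dimensionality are illustrative:

```python
import random

def super_particle(swarm, fitness, rng):
    """Assemble a 'super particle': each dimension is copied from a
    particle chosen with probability inversely related to its fitness
    (minimization: smaller fitness -> higher selection probability).
    Assumes fitness values are positive."""
    inv = [1.0 / (1e-12 + f) for f in fitness]
    total = sum(inv)
    probs = [v / total for v in inv]
    dim = len(swarm[0])
    idx = rng.choices(range(len(swarm)), weights=probs, k=dim)
    return [swarm[i][d] for d, i in enumerate(idx)]

rng = random.Random(3)
swarm = [[rng.uniform(-5, 5) for _ in range(4)] for _ in range(6)]
fitness = [sum(x * x for x in p) for p in swarm]   # sphere function
print(super_particle(swarm, fitness, rng))
```

Each coordinate of the super particle is thus borrowed from a (probably fit) swarm member, so the assembled point mixes good dimension information from across the population before the local search around it begins.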
Validation of optimization strategies using the linear structured production chains
Kusiak, Jan; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz
2017-06-01
Different optimization strategies applied to a sequence of several stages of production chains were validated in this paper. Two benchmark problems described by ordinary differential equations (ODEs) were considered: a water tank and a passive CR-RC filter, exemplary objects described by first- and second-order differential equations, respectively. The optimization problems considered in this work serve as validators of the strategies elaborated by the authors. The main goal of the research, however, is selection of the best strategy for optimizing two real metallurgical processes to be investigated in ongoing projects. The first is the oxidizing roasting process of zinc sulphide concentrate, where the sulphur in the input concentrate should be eliminated and a minimal concentration of sulphide sulphur in the roasted products has to be achieved. The second is the lead refining process, consisting of three stages: roasting to the oxide, reduction of the oxide to metal, and oxidizing refining. The strategies that prove most effective on the benchmark problems will be candidates for optimization of the industrial processes mentioned above.
Optimization of refueling-shuffling scheme in PWR core by random search strategy
International Nuclear Information System (INIS)
Wu Yuan
1991-11-01
A random method for simulating optimization of refueling management in a pressurized water reactor (PWR) core is described. The main purpose of the optimization was to select the 'best' refueling arrangement scheme, which would produce maximum economic benefits under certain imposed conditions. To fulfill this goal, an effective optimization strategy, a two-stage random search method, was developed. First, the search is made in a manner similar to the stratified sampling technique, and a local optimum is reached by comparison of successive results. Then further random trials are carried out across different strata to try to find the global optimum. In general, the method can be used as a practical tool for conventional fuel management schemes; it can also be used in studies on optimization of low-leakage fuel management. Some calculations were done for a typical PWR core on a CYBER-180/830 computer. The results show that the proposed method can reach a satisfactory solution at reasonably low computational cost.
Sampled-data and discrete-time H2 optimal control
Trentelman, Harry L.; Stoorvogel, Anton A.
1993-01-01
This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This
Validated sampling strategy for assessing contaminants in soil stockpiles
International Nuclear Information System (INIS)
Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel
2005-01-01
Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result, representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. At the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years. This resulted in a database containing the analytical results of 2570 soil stockpiles. At the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed resulted in a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
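The value of combining many random increments into one composite sample can be illustrated numerically. In this hypothetical stockpile (90% clean material, 10% hot spots; concentrations invented for illustration, not the Dutch database values), the mean absolute error of the composite-sample mean shrinks as the number of increments m grows:

```python
import random

def composite_mean(stockpile, m, rng):
    """Mean concentration of a composite sample formed from m random
    increments drawn (with replacement) from the stockpile."""
    return sum(rng.choice(stockpile) for _ in range(m)) / m

rng = random.Random(7)
# hypothetical heterogeneous stockpile: 90% clean, 10% hot spots (mg/kg)
stockpile = [5.0] * 900 + [120.0] * 100
true_mean = sum(stockpile) / len(stockpile)
for m in (1, 10, 100):
    errs = [abs(composite_mean(stockpile, m, rng) - true_mean)
            for _ in range(500)]
    print(f"m={m:3d} increments, mean |error|:",
          round(sum(errs) / len(errs), 2))
```

A single increment either misses or hits a hot spot and is badly off either way; combining many increments lets the stockpile heterogeneity be represented within one composite sample, which is the core of the validated strategy.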
Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang
2017-07-10
In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, the extended methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
Two-objective on-line optimization of supervisory control strategy
Energy Technology Data Exchange (ETDEWEB)
Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal (Canada)
2004-09-01
The set points of supervisory control strategy are optimized with respect to energy use and thermal comfort for existing HVAC systems. The set point values of zone temperatures, supply duct static pressure, and supply air temperature are the problem variables, while energy use and thermal comfort are the objective functions. The HVAC system model includes all the individual component models developed and validated against the monitored data of an existing VAV system. It serves to calculate energy use during the optimization process, whereas the actual energy use is determined by using monitoring data and the appropriate validated component models. A comparison, done for one summer week, of actual and optimal energy use shows that the on-line implementation of a genetic algorithm optimization program to determine the optimal set points of the supervisory control strategy could reduce energy use by 19.5%, while satisfying the minimum zone airflow rates and the thermal comfort. The results also indicate that the application of the two-objective optimization problem can help control daily energy use or daily building thermal comfort, thus saving more energy than the application of the one-objective optimization problem. (Author)
Optimal Pricing Strategy in Marketing Research Consulting.
Chang, Chun-Hao; Lee, Chi-Wen Jevons
1994-01-01
This paper studies the optimal pricing scheme for a monopolistic marketing research consultant who sells high-cost proprietary marketing information to her oligopolistic clients in the manufacturing industry. In designing an optimal pricing strategy, the consultant needs to fully consider the behavior of her clients, the behavior of the existing and potential competitors to her clients, and the behavior of her clients' customers. The authors show how the environment uncertainty, the capabilit...
Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon
2016-05-01
The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively, respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with the conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space. Copyright © 2016 John Wiley & Sons, Ltd.
A Particle Swarm Optimization Variant with an Inner Variable Learning Strategy
Directory of Open Access Journals (Sweden)
Guohua Wu
2014-01-01
Full Text Available Although Particle Swarm Optimization (PSO has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems with high dimensionality and complex landscape. In this paper, we integrate some problem-oriented knowledge into the design of a certain PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles whereas the trap jumping out strategy is adaptive in its nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.
Generating optimized stochastic power management strategies for electric car components
Energy Technology Data Exchange (ETDEWEB)
Fruth, Matthias [TraceTronic GmbH, Dresden (Germany); Bastian, Steve [Technische Univ. Dresden (Germany)
2012-11-01
With the increasing prevalence of electric vehicles, reducing the power consumption of car components becomes a necessity. For the example of a novel traffic-light assistance system, which makes speed recommendations based on the expected length of red-light phases, power-management strategies are used to control under which conditions radio communication, positioning systems and other components are switched to low-power (e.g. sleep) or high-power (e.g. idle/busy) states. We apply dynamic power management, an optimization technique well-known from other domains, in order to compute energy-optimal power-management strategies, sometimes resulting in these strategies being stochastic. On the example of the traffic-light assistant, we present a MATLAB/Simulink-implemented framework for the generation, simulation and formal analysis of optimized power-management strategies, which is based on this technique. We study capabilities and limitations of this approach and sketch further applications in the automotive domain. (orig.)
Optimal decentralized valley-filling charging strategy for electric vehicles
International Nuclear Information System (INIS)
Zhang, Kangkang; Xu, Liangfei; Ouyang, Minggao; Wang, Hewu; Lu, Languang; Li, Jianqiu; Li, Zhe
2014-01-01
Highlights: • An implementable charging strategy is developed for electric vehicles connected to a grid. • A two-dimensional pricing scheme is proposed to coordinate charging behaviors. • The strategy effectively works in decentralized way but achieves the systematic valley filling. • The strategy allows device-level charging autonomy, and does not require a bidirectional communication/control network. • The strategy can self-correct when confronted with adverse factors. - Abstract: Uncoordinated charging load of electric vehicles (EVs) increases the peak load of the power grid, thereby increasing the cost of electricity generation. The valley-filling charging scenario offers a cheaper alternative. This study proposes a novel decentralized valley-filling charging strategy, in which a day-ahead pricing scheme is designed by solving a minimum-cost optimization problem. The pricing scheme can be broadcasted to EV owners, and the individual charging behaviors can be indirectly coordinated. EV owners respond to the pricing scheme by autonomously optimizing their individual charge patterns. This device-level response induces a valley-filling effect in the grid at the system level. The proposed strategy offers three advantages: coordination (by the valley-filling effect), practicality (no requirement for a bidirectional communication/control network between the grid and EV owners), and autonomy (user control of EV charge patterns). The proposed strategy is validated in simulations of typical scenarios in Beijing, China. According to the results, the strategy (1) effectively achieves the valley-filling charging effect at 28% less generation cost than the uncoordinated charging strategy, (2) is robust to several potential affecters of the valley-filling effect, such as (system-level) inaccurate parameter estimation and (device-level) response capability and willingness (which cause less than 2% deviation in the minimal generation cost), and (3) is compatible with
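The valley-filling effect itself can be sketched with a simple greedy water-filling routine (a centralized toy, not the paper's price-based decentralized scheme): charging energy is repeatedly assigned to the hour with the lowest current total load. The 24-hour base-load profile and the energy requirement are invented for illustration:

```python
def valley_fill(base_load, energy, step=1.0):
    """Greedy water-filling: repeatedly add `step` units of EV charging
    to the hour with the lowest current total load until the required
    energy is scheduled. Returns the per-hour charging allocation."""
    charge = [0.0] * len(base_load)
    remaining = energy
    while remaining > 1e-9:
        i = min(range(len(base_load)),
                key=lambda h: base_load[h] + charge[h])
        q = min(step, remaining)
        charge[i] += q
        remaining -= q
    return charge

# hypothetical 24-h base load profile (GW) with a night-time valley
base = [60, 55, 50, 48, 47, 48, 55, 70, 85, 90, 92, 93,
        92, 90, 88, 87, 88, 92, 95, 93, 88, 80, 72, 65]
ev = valley_fill(base, energy=60.0)
total = [b + c for b, c in zip(base, ev)]
print("peak after filling:", max(total))
```

All charging lands in the overnight valley and the system peak is untouched; the paper's contribution is achieving essentially this system-level outcome through a day-ahead pricing scheme to which EV owners respond autonomously.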
International Nuclear Information System (INIS)
Gao, Jiajia; Huang, Gongsheng; Xu, Xinhua
2016-01-01
Highlights: • An optimization strategy for a small-scale air-conditioning system is developed. • The optimization strategy aims at optimizing the overall system energy consumption. • The strategy may guarantee the robust control of the space air temperature. • The performance of the optimization strategy was tested on a simulation platform. - Abstract: This paper studies the optimization of a small-scale central air-conditioning system, in which the cooling is provided by a ground source heat pump (GSHP) equipped with an on/off capacity control. The optimization strategy aims to optimize the overall system energy consumption and simultaneously guarantee the robustness of the space air temperature control without violating the allowed GSHP maximum number of start-ups per hour specified by customers. The set-point of the chilled water return temperature and the width of the water temperature control band are used as the decision variables for the optimization. The performance of the proposed strategy was tested on a simulation platform. Results show that the optimization strategy can reduce energy consumption by 9.59% on a typical spring day and 2.97% on a typical summer day. Meanwhile it is able to enhance the space air temperature control robustness when compared with a basic control strategy without optimization.
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
Modeling of optimization strategies in the incremental CNC sheet metal forming process
International Nuclear Information System (INIS)
Bambach, M.; Hirt, G.; Ames, J.
2004-01-01
Incremental CNC sheet forming (ISF) is a relatively new sheet metal forming process for small batch production and prototyping. In ISF, a blank is shaped by the CNC movements of a simple tool in combination with a simplified die. The standard forming strategies in ISF entail two major drawbacks: (i) the inherent forming kinematics set limits on the maximum wall angle that can be formed with ISF; (ii) since elastic parts of the imposed deformation can currently not be accounted for in CNC code generation, the standard strategies can lead to undesired deviations between the target and the sample geometry. Several enhancements have recently been put forward to overcome the above limitations, among them a multistage forming strategy to manufacture steep flanges, and a correction algorithm to improve the geometric accuracy. Both strategies have been successful in improving the forming of simple parts. However, the high experimental effort to empirically optimize the tool paths motivates the use of process modeling techniques. This paper deals with finite element modeling of the ISF process. In particular, the outcome of different multistage strategies is modeled and compared to collated experimental results regarding aspects such as sheet thickness and the onset of wrinkling. Moreover, the feasibility of modeling the geometry of a part is investigated, as this is of major importance with respect to optimizing the geometric accuracy. Experimental validation is achieved by optical deformation measurement, which gives the local displacements and strains of the sheet during forming as benchmark quantities for the simulation.
Artificial root foraging optimizer algorithm with hybrid strategies
Directory of Open Access Journals (Sweden)
Yang Liu
2017-02-01
Full Text Available In this work, a new plant-inspired optimization algorithm, namely the hybrid artificial root foraging optimization (HARFO), is proposed, which mimics iterative root foraging behaviors for complex optimization. In the HARFO model, two innovative strategies were developed: one is the root-to-root communication strategy, which enables individuals to exchange information with each other over different efficient topologies and can essentially improve the exploration ability; the other is the co-evolution strategy, which structures a hierarchical spatial population driven by the evolutionary pressure of multiple sub-populations, ensuring that the diversity of the root population is well maintained. The proposed algorithm is benchmarked against four classical evolutionary algorithms on well-designed test function suites including both classical and composition test functions. A rigorous performance analysis of all these tests highlights a significant performance improvement, and the comparative results show the superiority of the proposed algorithm.
International Nuclear Information System (INIS)
Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao
2014-01-01
Ant colony optimization (ACO) algorithms often fall into local optimal solutions and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new optimization strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are reserved in the process of evolving adaptive networks. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results in synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP. (paper)
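The pheromone-update idea can be sketched in a few lines. Below is a generic ACO pheromone round with a hypothetical critical-path reinforcement term (`critical_edges` and `epsilon` are illustrative assumptions) standing in for the Physarum-derived enhancement, not the authors' exact PMACO formula:

```python
def pmaco_update(pheromone, tours, lengths, critical_edges,
                 rho=0.5, q=1.0, epsilon=0.2):
    """One pheromone round: evaporate, deposit along each ant's tour,
    then reinforce the (hypothetical) critical edges."""
    n = len(pheromone)
    for i in range(n):                        # evaporation
        for j in range(n):
            pheromone[i][j] *= (1.0 - rho)
    for tour, length in zip(tours, lengths):  # deposit q/L on tour edges
        deposit = q / length
        for a, b in zip(tour, tour[1:] + tour[:1]):
            pheromone[a][b] += deposit
            pheromone[b][a] += deposit
    for a, b in critical_edges:               # Physarum-style boost
        pheromone[a][b] *= (1.0 + epsilon)
        pheromone[b][a] *= (1.0 + epsilon)
    return pheromone
```

Edges retained on the model's critical paths accumulate extra pheromone each round, biasing subsequent tour construction toward them.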
Optimized Strategies for Detecting Extrasolar Space Weather
Hallinan, Gregg
2018-06-01
Fully understanding the implications of space weather for the young solar system, as well as the wider population of planet-hosting stars, requires remote sensing of space weather in other stellar systems. Solar coronal mass ejections can be accompanied by bright radio bursts at low frequencies (typically measurement of the magnetic field strength of the planet, informing on whether the atmosphere of the planet can survive the intense magnetic activity of its host star. However, both stellar and planetary radio emissions are highly variable, and optimal strategies for detection of these emissions require the capability to monitor thousands of nearby stellar/planetary systems simultaneously. I will discuss optimized strategies for both ground- and space-based experiments to take advantage of the highly variable nature of the radio emissions powered by extrasolar space weather, enabling detection of stellar CMEs and planetary magnetospheres.
Optimal Sizing and Control Strategy Design for Heavy Hybrid Electric Truck
Directory of Open Access Journals (Sweden)
Yuan Zou
2012-01-01
Full Text Available Due to the complexity of the hybrid powertrain, sophisticated control is required to improve the collaboration of the different components. For a specific powertrain, the components' sizing only provides the capability to propel the vehicle; the control realizes the propulsion function. The components' sizing also imposes constraints on the control design, which causes a close coupling between sizing and control strategy design. This paper presents a parametric study focused on the sizing of the powertrain components and the optimization of the power split between the engine and the electric motor for minimizing fuel consumption. A framework is put forward to accomplish the optimal sizing and control design for a heavy parallel pre-AMT hybrid truck under a natural driving schedule. An iterative plant-controller combined optimization methodology is adopted to optimize the key parameters of the plant and the control strategy simultaneously. A scalable powertrain model based on a bilevel optimization framework is built. Dynamic programming is applied to find the optimal control in the inner loop with a prescribed cycle, while the sizing parameters are optimized in the outer loop. The results are analysed, and the optimal sizing and control strategy are achieved simultaneously.
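The bilevel structure described above can be pictured with a toy sketch: an outer loop sweeps candidate component sizes while an inner evaluation scores each sizing. Here the inner evaluation is a stand-in convex surrogate for the DP-based optimal-control cost; all names and coefficients are illustrative assumptions, not the paper's truck model:

```python
def evaluate_fuel(sizing, cycle):
    # Inner-loop stand-in: in the paper this is a full DP solve over the
    # drive cycle; here a toy surrogate trades engine load against the
    # penalty of carrying a larger motor.
    motor_kw = sizing["motor_kw"]
    return sum(abs(p) for p in cycle) / (1.0 + 0.01 * motor_kw) \
        + 0.05 * motor_kw

def outer_sizing_search(cycle, candidates):
    # Outer loop: pick the sizing whose optimally controlled cost is lowest.
    return min(candidates, key=lambda s: evaluate_fuel(s, cycle))
```

With a convex trade-off like this, an intermediate motor size beats both extremes, which is the kind of result the nested sizing/control search is meant to find.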
Optimal Dynamic Advertising Strategy Under Age-Specific Market Segmentation
Krastev, Vladimir
2011-12-01
We consider the model proposed by Faggian and Grosset for determining the long-run advertising effort and goodwill of a company under age segmentation of consumers. Reducing this model to optimal control subproblems, we find the optimal advertising strategy and goodwill.
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
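The WBMS step can be pictured as drawing rows of a binary inclusion matrix, with variable j entering each sub-model with probability given by its current weight. The sketch below follows that reading; the weight-update and shrinkage rules of VISSA itself are omitted, and the interface is an assumption:

```python
import random

def weighted_binary_matrix_sampling(weights, n_models, seed=0):
    """Draw n_models binary rows; entry j is 1 with probability
    weights[j], so each row selects one variable subspace."""
    rng = random.Random(seed)
    return [[1 if rng.random() < w else 0 for w in weights]
            for _ in range(n_models)]
```

Sub-models are then fit on the variables each row selects; raising the weights of variables that appear in well-performing sub-models is what makes the sampled variable space shrink from step to step.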
An optimal inspection strategy for randomly failing equipment
International Nuclear Information System (INIS)
Chelbi, Anis; Ait-Kadi, Daoud
1999-01-01
This paper addresses the problem of generating optimal inspection strategies for randomly failing equipment where imminent failure is not obvious and can only be detected through inspection. Inspections are carried out following a condition-based procedure. The equipment is replaced if it has failed or if it shows imminent signs of failure. The latter state is indicated by measuring certain predetermined control parameters during inspection. Costs are associated with inspection, idle time and preventive or corrective actions. An optimal inspection strategy is defined as the inspection sequence minimizing the expected total cost per time unit over an infinite span. A mathematical model and a numerical algorithm are developed to generate an optimal inspection sequence. As a practical example, the model is applied to provide a machine tool operator with a time sequence for inspecting the cutting tool. The tool lifetime distribution and the trend of one control parameter defining its actual condition are supposed to be known.
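For the simplest special case — periodic inspection every T time units, exponential lifetimes, failure detected only at inspections, replacement on detection — the long-run cost rate follows from renewal-reward reasoning and can be minimized numerically. This is a simplification of the paper's condition-based model, and the cost figures are hypothetical:

```python
import math

def cost_rate(T, lam=0.1, c_i=5.0, c_d=20.0, c_r=100.0):
    """Long-run expected cost per unit time for inspect-every-T.
    The number of inspections per renewal cycle N is geometric with
    p = 1 - exp(-lam*T), so E[cycle] = T/p and the expected
    undetected-downtime E[N*T - X] = T/p - 1/lam."""
    p = 1.0 - math.exp(-lam * T)
    e_cycle = T / p
    e_down = e_cycle - 1.0 / lam
    return (c_i / p + c_d * e_down + c_r) / e_cycle

def best_interval(grid):
    # Pick the candidate inspection interval with the lowest cost rate.
    return min(grid, key=cost_rate)
```

Short intervals waste inspection cost, long intervals accumulate undetected downtime; the optimum balances the two, which is the trade-off the paper's sequence-based algorithm generalizes to non-periodic schedules.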
Directory of Open Access Journals (Sweden)
F. Raicich
2003-01-01
Full Text Available For the first time in the Mediterranean Sea, various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied. The data impact is quantified by the error reduction relative to the free run. The most effective sampling strategies determine 25–40% error reduction, depending on the season, the geographic area and the depth range. A qualitative relationship can be recognized, in terms of the spread of information from the data positions, between basin circulation features and spatial patterns of the error reduction fields, as a function of different spatial and seasonal characteristics of the dynamics. The largest error reductions are observed when samplings are characterized by extensive spatial coverage, as in the cases of AXBTs and the combination of XBTs and surface temperatures. The sampling strategy adopted during the TOP is characterized by little impact, as a consequence of a sampling frequency that is too low. Key words. Oceanography: general (marginal and semi-enclosed seas); numerical modelling
Optimal combined purchasing strategies for a risk-averse manufacturer under price uncertainty
Directory of Open Access Journals (Sweden)
Qiao Wu
2015-09-01
Full Text Available Purpose: The purpose of our paper is to analyze optimal purchasing strategies when a manufacturer can buy raw materials from a long-term contract supplier and a spot market under spot price uncertainty. Design/methodology/approach: This procurement model can be solved using dynamic programming. First, we maximize the decision maker's (DM's) utility for the second period, obtaining the optimal contract quantity and spot quantity for the second period. Then, we maximize the DM's utility over both periods, obtaining the optimal purchasing strategy for the first period. We use a numerical method to compare the performance of a pure spot sourcing strategy with that of a mixed strategy. Findings: Our results show that optimal purchasing strategies vary with the trend of contract prices. If the contract price falls, the total quantity purchased in period 1 decreases in the degree of risk aversion. If the contract price increases, the total quantity purchased in period 1 increases in the degree of risk aversion. The relationship between the optimal contract quantity and the degree of risk aversion depends on whether the expected spot price or the contract price is larger in period 2. Finally, we compare the performance levels of a combined strategy and a spot sourcing strategy, showing that a combined strategy is optimal for a risk-averse buyer. Originality/value: It is challenging to deal with a two-period procurement problem with risk consideration. We have obtained results for a two-period procurement problem with two sourcing options, namely contract procurement and spot purchases. Our model incorporates the buyer's risk aversion factor and the change of contract prices, which are not addressed in earlier studies.
Automatic CT simulation optimization for radiation therapy: A general strategy
Energy Technology Data Exchange (ETDEWEB)
Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M.; Mutic, Sasa [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Yu, Lifeng [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)
2014-03-15
Purpose: In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. Methods: The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Results: Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube
Automatic CT simulation optimization for radiation therapy: A general strategy.
Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa
2014-03-01
In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes
Halbheer, Daniel; Stahl, Florian; Koenigsberg, Oded; Lehmann, Donald R
2013-01-01
This paper studies content strategies for online publishers of digital information goods. It examines sampling strategies and compares their performance to paid content and free content strategies. A sampling strategy, where some of the content is offered for free and consumers are charged for access to the rest, is known as a "metered model" in the newspaper industry. We analyze optimal decisions concerning the size of the sample and the price of the paid content when sampling serves the dua...
Parallel strategy for optimal learning in perceptrons
International Nuclear Information System (INIS)
Neirotti, J P
2010-01-01
We developed a parallel strategy for optimally learning specific realizable rules with perceptrons in an online learning scenario. Our result is a generalization of the Caticha-Kinouchi (CK) algorithm developed for learning a perceptron with a synaptic vector drawn from a uniform distribution over the N-dimensional sphere, the so-called typical case. Our method outperforms the CK algorithm in almost all possible situations, failing only in a denumerable set of cases. The algorithm is optimal in the sense that it saturates Bayesian bounds when it succeeds.
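For contrast with such optimized strategies, the plain mistake-driven perceptron below is the generic online baseline. It is not the CK algorithm (whose learning rate is modulated by the student-teacher stability); it only illustrates the online setting in which these algorithms operate:

```python
def online_perceptron(examples, n, passes=1):
    """Mistake-driven online perceptron for labels y in {-1, +1}:
    examples are (x, y) pairs with x a length-n vector."""
    w = [0.0] * n
    for _ in range(passes):
        for x, y in examples:
            s = sum(wi * xi for wi, xi in zip(w, x))
            if y * s <= 0:              # update only on mistakes
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w
```

On a realizable (linearly separable) rule the mistake count is finite; the optimized strategies in the abstract improve the generalization error per example, not this basic convergence guarantee.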
Optimal updating magnitude in adaptive flat-distribution sampling.
Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery
2017-11-07
We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
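A minimal sketch of flat-distribution sampling with single-bin updates and an inverse-time schedule: the updating magnitude f stays at f0 for an initial stage and then decays as n_bins/t. Using a fixed switch time in place of the usual Wang-Landau flatness criterion is a simplifying assumption, as is the toy interface:

```python
import math
import random

def wl_inverse_time(n_bins, bin_of, propose, x0, steps=20000,
                    f0=1.0, t_switch=2000, seed=1):
    """Single-bin flat-histogram updates with a 1/t schedule."""
    rng = random.Random(seed)
    lng = [0.0] * n_bins              # adaptive bias (log-density estimate)
    x = x0
    for t in range(1, steps + 1):
        y = propose(x, rng)
        # Metropolis step against the current bias flattens the histogram.
        if math.log(rng.random() + 1e-300) < lng[bin_of(x)] - lng[bin_of(y)]:
            x = y
        f = f0 if t <= t_switch else n_bins / t   # inverse-time magnitude
        lng[bin_of(x)] += f           # histogram-based update at current bin
    return lng
```

With a symmetric proposal over a small discrete state space, the bias converges to a nearly flat profile, and the late-stage 1/t decay is what suppresses the saturation error of a constant updating magnitude.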
Blackjack in Holland Casino's : Basic, optimal and winning strategies
van der Genugten, B.B.
1995-01-01
This paper considers the card game Blackjack according to the rules of Holland Casino's in the Netherlands. Expected gains of strategies are derived with simulation and also with analytic tools. New efficiency concepts based on the gains of the basic and the optimal strategy are introduced. A general
Application of optimal iteration strategies to diffusion theory calculations
International Nuclear Information System (INIS)
Jones, R.B.
1978-01-01
The geometric interpretation of optimal (minimum computational time) iteration strategies is applied to one- and two-group, two-dimensional diffusion-theory calculations. The method is a "spectral/time balance" technique which weighs the convergence enhancement of the inner iteration procedure against that of the outer iteration loop and the time required to reconstruct the source. The diffusion-theory option of the discrete-ordinates transport code DOT3.5 was altered to incorporate the theoretical inner/outer decision logic. For the two-dimensional configuration considered, the optimal strategies reduced the total number of iterations performed for a given error criterion.
Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.
Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong
2014-01-01
Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
Demerouti, Evangelia; Bakker, Arnold B; Leiter, Michael
2014-01-01
The present study aims to explain why research thus far has found only low to moderate associations between burnout and performance. We argue that employees use adaptive strategies that help them to maintain their performance (i.e., task performance, adaptivity to change) at acceptable levels despite experiencing burnout (i.e., exhaustion, disengagement). We focus on the strategies included in the selective optimization with compensation model. Using a sample of 294 employees and their supervisors, we found that compensation is the most successful strategy in buffering the negative associations of disengagement with supervisor-rated task performance and both disengagement and exhaustion with supervisor-rated adaptivity to change. In contrast, selection exacerbates the negative relationship of exhaustion with supervisor-rated adaptivity to change. In total, 42% of the hypothesized interactions proved to be significant. Our study uncovers successful and unsuccessful strategies that people use to deal with their burnout symptoms in order to achieve satisfactory job performance. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of the asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be made equivalent according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.
Optimal relaxed causal sampler using sampled-data system theory
Shekhawat, Hanumant; Meinsma, Gjerrit
This paper studies the design of an optimal relaxed causal sampler using sampled-data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
Optimal Energy Control Strategy Design for a Hybrid Electric Vehicle
Directory of Open Access Journals (Sweden)
Yuan Zou
2013-01-01
Full Text Available A heavy-duty parallel hybrid electric truck is modeled, and its optimal energy control is studied in this paper. The fundamental architecture of the parallel hybrid electric truck is modeled in a feed-forward manner, together with the necessary dynamic features of its subsystems and components. The dynamic programming (DP) technique is adopted to find the optimal control strategy, including the gear-shifting sequence and the power split between the engine and the motor, subject to a battery SOC-sustaining constraint. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant improvement in fuel economy can be achieved in the heavy-duty vehicle cycle derived from natural driving statistics.
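The DP step can be sketched as a backward recursion over a discretized battery SOC grid. The demand profile, integer units, quadratic fuel map, and terminal SOC constraint below are all toy assumptions, not the paper's truck model:

```python
def dp_power_split(demand, soc_levels, motor_max, fuel_of, soc_target):
    """Backward DP: at step k the motor supplies m units (negative m
    charges the battery) and the engine covers demand[k] - m; fuel_of
    prices the engine load.  Terminal cost enforces SOC sustaining."""
    INF = float("inf")
    n = len(demand)
    cost = {s: (0.0 if s >= soc_target else INF) for s in soc_levels}
    policy = []
    for k in reversed(range(n)):
        new_cost, step_policy = {}, {}
        for s in soc_levels:
            best, arg = INF, None
            for m in range(-motor_max, motor_max + 1):
                s2 = s - m                  # motor draw m depletes SOC
                if s2 in cost and demand[k] - m >= 0:
                    c = fuel_of(demand[k] - m) + cost[s2]
                    if c < best:
                        best, arg = c, m
            new_cost[s], step_policy[s] = best, arg
        cost = new_cost
        policy.insert(0, step_policy)
    return cost, policy
```

With a convex fuel map the recursion spreads the battery assistance evenly across the horizon rather than spending it all at once, which is the qualitative behavior the extracted near-optimal rules reproduce.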
Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.
2012-07-01
In this paper, MODIS remote sensing data, featuring low cost, high timeliness and moderate-to-low spatial resolution, were first used to carry out mixed-pixel spectral decomposition over the North China Plain (NCP) as a study region, extracting a useful regionalized indicator parameter (RIP) (i.e., the fraction/percentage of winter wheat planting area in each pixel) from the initially selected indicators as a regionalized indicator variable (RIV) for spatial sampling. Then, the RIV values were spatially analyzed, and the spatial structure characteristics (i.e., spatial correlation and variation) of the NCP were obtained; these were further processed to derive scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, founded upon the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed; this provides a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded system of extrapolation results were able to implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, in order to satisfy the actual needs of sampling surveys.
Sample Adaptive Offset Optimization in HEVC
Directory of Open Access Journals (Sweden)
Yang Zhang
2014-11-01
Full Text Available As the next-generation video coding standard, High Efficiency Video Coding (HEVC) adopts many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique that reduces sample distortion by providing offsets to pixels in the in-loop filter. In SAO, the pixels in a Largest Coding Unit (LCU) are classified into several categories, and then categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. All pixels in an LCU undergo the same SAO process; however, the transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since they are not appropriate in many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that these two optimizations achieve -0.13 and -0.2 gain, respectively, compared with SAO in HEVC. The proposed algorithm, which combines both optimizations, achieves a -0.23 gain on BD-rate compared with SAO in HEVC, a 47% increase, with nearly no increase in coding time.
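The per-pixel classification that SAO's standard edge offset mode performs (the categories this paper refines) can be written down directly; a pixel `c` is compared with its two neighbours `a` and `b` along the chosen direction:

```python
def eo_category(a, c, b):
    """Edge-offset category of pixel c given its two neighbours a and b
    along the chosen direction (SAO edge offset classification)."""
    if c < a and c < b:
        return 1      # local minimum: positive offset pushes it up
    if (c < a and c == b) or (c == a and c < b):
        return 2      # concave corner
    if (c > a and c == b) or (c == a and c > b):
        return 3      # convex corner
    if c > a and c > b:
        return 4      # local maximum: negative offset pulls it down
    return 0          # monotonic region: no offset applied
```

The paper's refinement replaces this fixed five-way split and adds a separate mode for pixels on TU edges; the sketch above only shows the baseline scheme being improved upon.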
International Nuclear Information System (INIS)
Cox, G.; Beresford, N.A.; Alvarez-Farizo, B.; Oughton, D.; Kis, Z.; Eged, K.; Thorring, H.; Hunt, J.; Wright, S.; Barnett, C.L.; Gil, J.M.; Howard, B.J.; Crout, N.M.J.
2005-01-01
A spatially implemented model designed to assist the identification of optimal countermeasure strategies for radioactively contaminated regions is described. Collective and individual ingestion doses for people within the affected area are estimated together with collective exported ingestion dose. A range of countermeasures are incorporated within the model, and environmental restrictions have been included as appropriate. The model evaluates the effectiveness of a given combination of countermeasures through a cost function which balances the benefit obtained through the reduction in dose with the cost of implementation. The optimal countermeasure strategy is the combination of individual countermeasures (and when and where they are implemented) which gives the lowest value of the cost function. The model outputs should not be considered as definitive solutions, rather as interactive inputs to the decision making process. As a demonstration the model has been applied to a hypothetical scenario in Cumbria (UK). This scenario considered a published nuclear power plant accident scenario with a total deposition of 1.7 × 10^14, 1.2 × 10^13, 2.8 × 10^10 and 5.3 × 10^9 Bq for Cs-137, Sr-90, Pu-239/240 and Am-241, respectively. The model predicts that if no remediation measures were implemented the resulting collective dose would be approximately 36 000 person-Sv (predominantly from Cs-137) over a 10-year period post-deposition. The optimal countermeasure strategy is predicted to avert approximately 33 000 person-Sv at a cost of approximately £160 million. The optimal strategy comprises a mixture of ploughing, AFCF (ammonium-ferric hexacyano-ferrate) administration, potassium fertiliser application, clean feeding of livestock and food restrictions. The model recommends specific areas within the contaminated area and time periods where these measures should be implemented.
The Optimal Nash Equilibrium Strategies Under Competition
Institute of Scientific and Technical Information of China (English)
孟力; 王崇喜; 汪定伟; 张爱玲
2004-01-01
This paper presents a game-theoretic model to study the competition for a single investment opportunity under uncertainty. It models the hazard rate of investment as a function of competitors' trigger levels. Under uncertainty and different information structures, option and game theory are applied to derive the optimal Nash equilibrium strategies of one or more firms. By means of Matlab software, the paper simulates a real estate development project example and illustrates how the parameters affect investment strategies. The paper's work will contribute to present investment practice in China.
Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu
2010-09-01
Stress, developmental changes and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional survey descriptive design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. Findings underscore the importance for teachers and other school staff to provide health education. Results can be used as the basis for education to improve optimism, self-esteem, social supports and, thus, depression symptoms of teens.
On Optimal, Minimal BRDF Sampling for Reflectance Acquisition
DEFF Research Database (Denmark)
Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi
2015-01-01
The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF...
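The reconstruction idea (project a few measurements onto principal components learned from a database) can be sketched with synthetic data. The random "database" below is a hypothetical stand-in for MERL measurements, and the sample directions are chosen at random rather than optimized as in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a measured BRDF database: each row is one material,
# each column one (incident, outgoing) direction pair.
n_materials, n_dirs, n_comp = 50, 200, 5
basis = rng.normal(size=(n_comp, n_dirs))
weights = rng.normal(size=(n_materials, n_comp))
database = weights @ basis + 0.01 * rng.normal(size=(n_materials, n_dirs))

# Principal components of the mean-centred database.
mean = database.mean(axis=0)
_, _, Vt = np.linalg.svd(database - mean, full_matrices=False)
pcs = Vt[:n_comp]                                # (n_comp, n_dirs)

# "Measure" a new material at only k directions (random here; the paper
# optimizes these), then solve a small least-squares system for the
# PC coefficients and reconstruct the full BRDF.
true_brdf = weights[0] @ basis
k = 20
idx = rng.choice(n_dirs, size=k, replace=False)
A = pcs[:, idx].T                                # (k, n_comp)
y = true_brdf[idx] - mean[idx]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
recon = mean + coef @ pcs

rel_err = np.linalg.norm(recon - true_brdf) / np.linalg.norm(true_brdf)
```

With far fewer samples than direction pairs, the reconstruction is accurate because the fit happens in the low-dimensional coefficient space, which is exactly what makes minimal sampling viable.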
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization is an effective approach for optimizing the surfactant-enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is key to such research. However, previous studies are generally based on a stand-alone surrogate model and rarely attempt to improve its approximation accuracy by combining several methods. In this regard, we present set pair analysis (SPA) as a new method for building an ensemble surrogate (ES) model, and conduct a comparative study to select the better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model embedding the ES model was established to obtain the optimal remediation strategy. The results showed that the residuals between the outputs of the best ES model and the simulation model for 100 testing samples were below 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy.
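The first ensemble pattern (weight several surrogates by their performance) can be sketched with toy predictions. The numbers are hypothetical, and inverse-RMSE weights stand in for the set-pair-analysis weights the paper derives:

```python
import numpy as np

# Toy validation data: three surrogates' predictions against the simulator.
truth = np.array([1.0, 2.0, 3.0, 4.0])
preds = {
    "RBFANN":  np.array([1.1, 2.2, 2.8, 4.1]),
    "SVR":     np.array([0.8, 2.5, 3.3, 3.6]),
    "Kriging": np.array([1.0, 2.05, 3.1, 3.95]),
}

# Accuracy-proportional weights (inverse RMSE), normalized to sum to 1.
rmse = {m: float(np.sqrt(np.mean((p - truth) ** 2))) for m, p in preds.items()}
w = {m: 1.0 / rmse[m] for m in preds}
total = sum(w.values())
w = {m: wi / total for m, wi in w.items()}

ensemble = sum(w[m] * preds[m] for m in preds)
ens_rmse = float(np.sqrt(np.mean((ensemble - truth) ** 2)))
```

Note that the ensemble is not guaranteed to beat its single best member on every dataset; its value, as the abstract argues, is robustness across the design space.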
Integrated Optimization of Bus Line Fare and Operational Strategies Using Elastic Demand
Directory of Open Access Journals (Sweden)
Chunyan Tang
2017-01-01
Full Text Available An optimization approach for designing a transit service system is proposed. Its objective is the maximization of total social welfare, by providing a profitable fare structure and tailoring operational strategies to passenger demand. These operational strategies include full route operation (FRO), limited stop, short turn, and a mix of the latter two strategies. The demand function is formulated to reflect the attributes of these strategies, in-vehicle crowding, and the effect of fares on demand variation. The fare is either a flat fare or a differential fare structure; the latter is based on trip distance and achieved service levels. The proposed methodology is applied to a case study of Dalian, China. The optimal results indicate that an optimal combination of operational strategies integrated with a differential fare structure has the highest potential for increasing total social welfare, provided the value of the parameter ε related to the additional service fee is low. When this value rises above a threshold, strategies with a flat fare show greater benefits; if it increases beyond yet another threshold, the use of stop-skipping strategies is not recommended.
Optimal strategy analysis based on robust predictive control for inventory system with random demand
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which yields the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed in MATLAB with generated random inventory data, where the inventory level must be controlled as closely as possible to a chosen set point. The results show that the robust predictive control model provides the optimal purchase volumes, and the inventory level followed the given set point.
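The flavour of set-point tracking under additive random demand can be sketched with a simple certainty-equivalent predictive rule (order whatever steers the one-step-ahead expected level onto the set point). This is a toy stand-in, not the paper's robust predictive controller, and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

SET_POINT = 100.0      # target inventory level
MEAN_DEMAND = 20.0     # known mean of the random demand
T = 50

level = 60.0           # initial inventory
history = []
for _ in range(T):
    # Order so that E[level_next] = level + order - E[demand] hits the set point.
    order = max(0.0, SET_POINT + MEAN_DEMAND - level)
    demand = rng.uniform(10.0, 30.0)      # additive random parameter
    level = level + order - demand        # linear state-space update
    history.append(level)

avg_tail = float(np.mean(history[10:]))   # average level after the transient
```

A robust formulation would instead optimize the orders against worst-case or distributional demand over a prediction horizon; the closed-loop behaviour (level hovering around the set point) is the same qualitative outcome the paper reports.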
Integrated Emission Management strategy for cost-optimal engine-aftertreatment operation
Cloudt, R.P.M.; Willems, F.P.T.
2011-01-01
A new cost-based control strategy is presented that optimizes engine-aftertreatment performance under all operating conditions. This Integrated Emission Management strategy minimizes fuel consumption within the set emission limits by on-line adjustment of air management based on the actual state of
On the robust optimization to the uncertain vaccination strategy problem
International Nuclear Information System (INIS)
Chaerani, D.; Anggriani, N.; Firdaniza
2014-01-01
In order to prevent an epidemic of infectious disease, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1; that is, we seek the smallest vaccination coverage that still confines the disease to the small number of people already infected. In this paper, we discuss the vaccination strategy problem, in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter lying between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that parameter uncertainty is involved, Tanner et al. (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that we can claim the obtained result is achievable by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented
On the robust optimization to the uncertain vaccination strategy problem
Energy Technology Data Exchange (ETDEWEB)
Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N., E-mail: d.chaerani@unpad.ac.id; Firdaniza, E-mail: d.chaerani@unpad.ac.id [Department of Mathematics, Faculty of Mathematics and Natural Sciences, University of Padjadjaran Indonesia, Jalan Raya Bandung Sumedang KM 21 Jatinangor Sumedang 45363 (Indonesia)
2014-02-21
In order to prevent an epidemic of infectious disease, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1; that is, we seek the smallest vaccination coverage that still confines the disease to the small number of people already infected. In this paper, we discuss the vaccination strategy problem, in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter lying between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that parameter uncertainty is involved, Tanner et al. (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that we can claim the obtained result is achievable by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
Rate-distortion optimization for compressive video sampling
Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee
2014-05-01
The recently introduced compressed sensing (CS) framework enables low-complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block sparsity, and block-level RDO is performed by modeling the block reconstruction peak signal-to-noise ratio (PSNR) as a quadratic function of the quantization bit-depth. The optimal bit-depth and number of CS samples are then obtained by setting the first derivative of this function to zero. In the experimental studies, the model parameters are initialized with a small set of training data and then updated with local information in the model testing stage. Simulation results show that the proposed double-level RDO significantly enhances reconstruction quality for a bit-budget-constrained CS video transmission system.
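The block-level step (quadratic PSNR model, derivative set to zero) reduces to finding the vertex of a concave parabola. The fitted coefficients below are hypothetical placeholders for the trained model parameters:

```python
import numpy as np

# Toy fitted model: PSNR(b) = a2*b^2 + a1*b + a0 with a2 < 0 (concave in bit-depth b).
a2, a1, a0 = -0.35, 5.6, 12.0

def psnr(b):
    return a2 * b ** 2 + a1 * b + a0

b_star = -a1 / (2 * a2)                       # first derivative set to zero
b_opt = int(round(np.clip(b_star, 1, 12)))    # restrict to realistic bit-depths
```

With these placeholder coefficients the optimum lands at 8 bits; in the actual scheme the coefficients, and hence the optimum, vary per block as the model is updated online.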
Cost-effectiveness analysis of optimal strategy for tumor treatment
International Nuclear Information System (INIS)
Pang, Liuyong; Zhao, Zhong; Song, Xinyu
2016-01-01
We propose and analyze an antitumor model with combined immunotherapy and chemotherapy. Firstly, we explore the treatment effects of single immunotherapy and single chemotherapy, respectively. Results indicate that neither immunotherapy nor chemotherapy alone is adequate to cure a tumor. Hence, we apply optimal control theory to investigate how the combination of immunotherapy and chemotherapy should be implemented, for a certain time period, in order to reduce the number of tumor cells while minimizing the implementation cost of the treatment strategy. Secondly, we establish the existence of the optimality system and use Pontryagin's Maximum Principle to characterize the optimal levels of the two treatment measures. Furthermore, we calculate the incremental cost-effectiveness ratios to analyze the cost-effectiveness of all possible combinations of the two treatment measures. Finally, numerical results show that the combination of immunotherapy and chemotherapy is the most cost-effective strategy for tumor treatment, able to eliminate an entire tumor of size 4.470 × 10^8 in a year.
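The incremental cost-effectiveness ratio (ICER) comparison mentioned here is a simple pairwise computation: strategies are ranked by effect, and each is compared with the previous one. The costs and effects below are hypothetical illustration numbers, not the paper's values:

```python
def icer(cost, effect, base_cost, base_effect):
    """Incremental cost-effectiveness ratio versus the next-best baseline."""
    return (cost - base_cost) / (effect - base_effect)

# Hypothetical (cost, tumour cells averted) for three candidate strategies.
strategies = [
    ("chemotherapy only", 80.0, 3.0e8),
    ("immunotherapy only", 120.0, 3.5e8),
    ("combination", 150.0, 4.4e8),
]

# Rank by effect, then compute each ICER against the previous strategy.
ratios = {}
prev_cost, prev_eff = 0.0, 0.0
for name, cost, eff in sorted(strategies, key=lambda s: s[2]):
    ratios[name] = icer(cost, eff, prev_cost, prev_eff)
    prev_cost, prev_eff = cost, eff
```

A strategy with a low ICER buys additional effect cheaply relative to the alternative it displaces, which is the sense in which the combination therapy is judged most cost-effective.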
Integrated testing strategies can be optimal for chemical risk classification.
Raseta, Marko; Pitchford, Jon; Cussens, James; Doe, John
2017-08-01
There is an urgent need to refine strategies for testing the safety of chemical compounds. This need arises both from the financial and ethical costs of animal tests and from the opportunities presented by new in-vitro and in-silico alternatives. Here we explore the mathematical theory underpinning the formulation of optimal testing strategies in toxicology. We show how the costs and imprecisions of the various tests, and the variability in exposures and responses of individuals, can be assembled rationally to form a Markov Decision Problem. We compute the corresponding optimal policies using well-developed theory based on Dynamic Programming, thereby identifying and overcoming some methodological and logical inconsistencies which may exist in current toxicological testing. By illustrating our methods for two simple but readily generalisable examples, we show how so-called integrated testing strategies, where information of different precisions from different sources is combined and where different initial test outcomes lead to different sets of future tests, can arise naturally as optimal policies. Copyright © 2017 Elsevier Inc. All rights reserved.
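Solving such a Markov Decision Problem by Dynamic Programming can be sketched with value iteration on a toy testing MDP. All states, probabilities, and costs below are hypothetical; they merely illustrate how "cheap imprecise test vs. costly precise test" becomes a policy:

```python
import numpy as np

# Toy testing MDP: three belief states about a compound, state 2 is absorbing
# ("classification decided"). P[a, s, s'] are transitions; R[a, s] are test costs.
P = np.array([
    [[0.8, 0.2, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],   # action 0: cheap, imprecise test
    [[0.2, 0.8, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]],   # action 1: costly, precise test
])
R = np.array([
    [1.0, 1.0, 0.0],
    [5.0, 5.0, 0.0],
])
gamma = 0.95

V = np.zeros(3)
for _ in range(2000):
    Q = R + gamma * (P @ V)          # expected cost-to-go of each (action, state)
    V_new = Q.min(axis=0)            # act greedily: cheapest expected testing path
    if np.max(np.abs(V_new - V)) < 1e-12:
        V = V_new
        break
    V = V_new
policy = (R + gamma * (P @ V)).argmin(axis=0)
```

The optimal policy prescribes, per state, which test to run next; with richer state spaces the same recursion yields the branching integrated testing strategies the paper describes.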
Optimal Strategy Analysis of a Competing Portfolio Market with a Polyvariant Profit Function
International Nuclear Information System (INIS)
Bogolubov, Nikolai N. Jr.; Kyshakevych, Bohdan Yu.; Blackmore, Denis; Prykarpatsky, Anatoliy K.
2010-12-01
A competing market model with a polyvariant profit function that assumes 'zeitnot' stock behavior of clients is formulated within the banking portfolio medium and then analyzed from the perspective of devising optimal strategies. An associated Markov process method for finding an optimal choice strategy for monovariant and bivariant profit functions is developed. Under certain conditions on the bank 'promotional' parameter with respect to the 'fee' for a missed share package transaction and at an asymptotically large enough portfolio volume, universal transcendental equations - determining the optimal share package choice among competing strategies with monovariant and bivariant profit functions - are obtained. (author)
Testing of Strategies for the Acceleration of the Cost Optimization
Energy Technology Data Exchange (ETDEWEB)
Ponciroli, Roberto [Argonne National Lab. (ANL), Argonne, IL (United States); Vilim, Richard B. [Argonne National Lab. (ANL), Argonne, IL (United States)
2017-08-31
The general problem addressed in the Nuclear-Renewable Hybrid Energy System (N-R HES) project is finding the optimum economic dispatch (ED) and capacity planning solutions for hybrid energy systems. In the present test-problem configuration, the N-R HES unit is composed of three electrical power-generating components, i.e. the Balance of Plant (BOP), the Secondary Energy Source (SES), and the Energy Storage (ES). In addition, there is an Industrial Process (IP), which is devoted to hydrogen generation. At this preliminary stage, the goal is to find the power outputs of each of the N-R HES unit components (BOP, SES, ES) and the IP hydrogen production level that maximize the unit profit while simultaneously satisfying individual component operational constraints. The optimization problem is meant to be solved in the Risk Analysis Virtual Environment (RAVEN) framework. The dynamic response of the N-R HES unit components is simulated by using dedicated object-oriented models written in the Modelica modeling language. Though this code coupling provides very accurate predictions, the ensuing optimization problem is characterized by a very large number of solution variables. To ease the computational burden and to improve the path to a converged solution, a method to better estimate the initial guess for the optimization problem solution was developed. The proposed approach led to the definition of a suitable Monte Carlo-based optimization algorithm (called the preconditioner), which provides an initial guess for the optimal N-R HES power dispatch and the optimal installed capacity for each of the unit components. The preconditioner samples a set of stochastic power scenarios for each of the N-R HES unit components, and then for each of them the corresponding value of a suitably defined cost function is evaluated. After having simulated a sufficient number of power histories, the configuration which ensures the highest profit is selected as the optimal
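The preconditioner's sample-and-rank idea can be sketched in a few lines. The profit function, prices, and component bounds below are hypothetical stand-ins for the Modelica-simulated economics:

```python
import numpy as np

rng = np.random.default_rng(42)

PRICE = np.array([30.0, 28.0, 35.0, 40.0])   # toy $/MWh over four dispatch periods
H2_VALUE = 50.0                               # toy revenue rate for the hydrogen path

def profit(bop, ses, es, ip):
    """Toy profit: electricity sales minus quadratic operating costs, plus
    hydrogen revenue; stands in for the simulated cost function."""
    power = bop + ses + es - ip               # net power sold to the grid
    revenue = PRICE @ power + H2_VALUE * ip.sum()
    cost = 0.02 * (bop ** 2).sum() + 0.05 * (ses ** 2).sum() + 0.01 * (es ** 2).sum()
    return revenue - cost

best, best_profit = None, -np.inf
for _ in range(2000):                 # sample stochastic power scenarios
    bop = rng.uniform(0, 100, 4)
    ses = rng.uniform(0, 50, 4)
    es = rng.uniform(-20, 20, 4)      # storage may charge (negative) or discharge
    ip = rng.uniform(0, 30, 4)
    p = profit(bop, ses, es, ip)
    if p > best_profit:
        best, best_profit = (bop, ses, es, ip), p
```

The best sampled configuration is not the final answer; it seeds the full optimizer with a near-feasible, high-profit starting point, which is the preconditioner's whole purpose.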
Transitions in optimal adaptive strategies for populations in fluctuating environments
Mayer, Andreas; Mora, Thierry; Rivoire, Olivier; Walczak, Aleksandra M.
2017-09-01
Biological populations are subject to fluctuating environmental conditions. Different adaptive strategies can allow them to cope with these fluctuations: specialization to one particular environmental condition, adoption of a generalist phenotype that compromises between conditions, or population-wise diversification (bet hedging). Which strategy provides the largest selective advantage in the long run depends on the range of accessible phenotypes and the statistics of the environmental fluctuations. Here, we analyze this problem in a simple mathematical model of population growth. First, we review and extend a graphical method to identify the nature of the optimal strategy when the environmental fluctuations are uncorrelated. Temporal correlations in environmental fluctuations open up new strategies that rely on memory but are mathematically challenging to study: We present analytical results to address this challenge. We illustrate our general approach by analyzing optimal adaptive strategies in the presence of trade-offs that constrain the range of accessible phenotypes. Our results extend several previous studies and have applications to a variety of biological phenomena, from antibiotic resistance in bacteria to immune responses in vertebrates.
DEFF Research Database (Denmark)
Clausen, Jens; Zilinskas, A.
2002-01-01
We consider the problem of optimizing a Lipschitzian function. The branch-and-bound technique is a well-known solution method, and its key components are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are...
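The classic bound calculation for Lipschitzian branch and bound can be sketched with the Piyavskii-Shubert scheme (a standard method of this family; the abstract is truncated before naming its own). The test functions and Lipschitz constants below are illustrative choices:

```python
import math

def shubert_minimize(f, a, b, L, tol=1e-4, max_iter=5000):
    """Piyavskii-Shubert branch and bound for a function with Lipschitz
    constant L on [a, b]. Each interval between consecutive samples gets the
    lower bound (f1 + f2)/2 - L*(x2 - x1)/2; the interval with the smallest
    bound is subdivided at the minimizer of its lower-bounding 'tent'."""
    pts = [(a, f(a)), (b, f(b))]
    best = min(fv for _, fv in pts)
    for _ in range(max_iter):
        lb, x1, f1, x2, f2 = min(
            (0.5 * (g1 + g2) - 0.5 * L * (v2 - v1), v1, g1, v2, g2)
            for (v1, g1), (v2, g2) in zip(pts, pts[1:])
        )
        if best - lb < tol:          # bound certifies best is near-optimal
            break
        xm = 0.5 * (x1 + x2) + 0.5 * (f1 - f2) / L   # tent minimizer
        fm = f(xm)
        best = min(best, fm)
        pts.append((xm, fm))
        pts.sort()
    return best

m1 = shubert_minimize(lambda x: math.sin(x) + 0.5 * x, 0.0, 6.0, L=1.5)
m2 = shubert_minimize(lambda x: (x - 2.0) ** 2, 0.0, 5.0, L=6.0)
```

The Lipschitz constant is what makes the bound valid: the function cannot dip below the tent between two sampled points, so discarding intervals whose bound exceeds the incumbent is safe.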
An Equivalent Emission Minimization Strategy for Causal Optimal Control of Diesel Engines
Directory of Open Access Journals (Sweden)
Stephan Zentner
2014-02-01
Full Text Available One of the main challenges during the development of operating strategies for modern diesel engines is the reduction of CO2 emissions while complying with ever more stringent limits for pollutant emissions. The inherent trade-off between the emissions of CO2 and pollutants renders a simultaneous reduction difficult. Therefore, an optimal operating strategy is sought that yields minimal CO2 emissions while holding the cumulative pollutant emissions at the allowed level. Such an operating strategy can be obtained offline by solving a constrained optimal control problem. However, the final-value constraint on the cumulated pollutant emissions prevents this approach from being adopted for causal control. This paper proposes a framework for causal optimal control of diesel engines. The optimization problem can be solved online when the constrained minimization of the CO2 emissions is reformulated as an unconstrained minimization of the CO2 emissions plus the weighted pollutant emissions (i.e., equivalent emissions). However, the weighting factors are not known a priori. A method for the online calculation of these weighting factors is proposed. It is based on the Hamilton–Jacobi–Bellman (HJB) equation and a physically motivated approximation of the optimal cost-to-go. A case study shows that the causal control strategy defined by the online calculation of the equivalence factor and the minimization of the equivalent emissions is only slightly inferior to the non-causal offline optimization, while being applicable to online control.
Optimal Investment-Consumption Strategy under Inflation in a Markovian Regime-Switching Market
Directory of Open Access Journals (Sweden)
Huiling Wu
2016-01-01
Full Text Available This paper studies an investment-consumption problem under inflation. The consumption price level, the prices of the available assets, and the coefficient of the power utility are assumed to be sensitive to the states of the underlying economy, modulated by a continuous-time Markov chain. The definition of admissible strategies and the verification theory corresponding to this stochastic control problem are presented. The analytical expression of the optimal investment strategy is derived. The existence, boundedness, and feasibility of the optimal consumption are proven. Finally, we analyze in detail, by mathematical and numerical analysis, how the risk aversion, the correlation coefficient between the inflation and the stock price, the inflation parameters, and the coefficient of utility affect the optimal investment and consumption strategy.
Evolution strategies and multi-objective optimization of permanent magnet motor
DEFF Research Database (Denmark)
Andersen, Søren Bøgh; Santos, Ilmar
2012-01-01
When designing a permanent magnet motor, several geometry and material parameters are to be defined. This is not an easy task, as material properties and magnetic fields are highly non-linear, and the design of a motor is therefore often an iterative process. From an engineering point of view, we therefore investigate the use of evolution strategies (ES) to effectively design and optimize parameters of permanent magnet motors. Single as well as multi-objective optimization procedures are carried out. A modified way of creating the strategy parameters for the ES algorithm is also proposed and has, together with the standard ES...
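The core of an evolution strategy can be sketched with a minimal (1+1)-ES using the classic 1/5th-success rule for step-size adaptation (one simple instance of the "strategy parameter" self-adaptation the abstract refers to). The sphere objective is a toy stand-in for the motor's field evaluation:

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    """Toy objective standing in for the motor's electromagnetic evaluation."""
    return float(np.sum(x ** 2))

def one_plus_one_es(f, x0, sigma=0.5, iters=400):
    """Minimal (1+1)-ES with 1/5th-success-rule step-size adaptation."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    successes = 0
    for t in range(1, iters + 1):
        y = x + sigma * rng.standard_normal(x.size)   # Gaussian mutation
        fy = f(y)
        if fy <= fx:                                   # keep the better of parent/child
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                                # adapt sigma every 20 trials
            rate = successes / 20.0
            sigma *= 1.5 if rate > 0.2 else 0.6
            successes = 0
    return x, fx

x_best, f_best = one_plus_one_es(sphere, [2.0, -1.5, 1.0])
```

Population-based (mu, lambda)-ES variants and multi-objective extensions follow the same mutate-select loop, just with more parents and a Pareto-based selection.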
PEMFC Optimization Strategy with Auxiliary Power Source in Fuel Cell Hybrid Vehicle
Directory of Open Access Journals (Sweden)
Tinton Dwi Atmaja
2012-02-01
Full Text Available One of the present-day implementations of fuel cells is acting as the main power source in a Fuel Cell Hybrid Vehicle (FCHV). This paper proposes some strategies to optimize the performance of a Polymer Electrolyte Membrane Fuel Cell (PEMFC) implanted with an auxiliary power source to construct a proper FCHV hybridization. The strategies consist of the most up-to-date optimization methods determined from three points of view, i.e., the Energy Storage System (ESS), the hybridization topology, and the control system analysis. The goal of these strategies is to achieve an optimum hybridization with long lifetime, low cost, high efficiency, and an improved hydrogen consumption rate. The energy storage system strategy considers the battery, the supercapacitor, and the high-speed flywheel as the most promising alternative auxiliary power sources. The hybridization topology strategy analyzes the use of multiple storage devices injected with electronic components to yield higher fuel economy and cost savings. The control system strategy employs a nonlinear control system to optimize the ripple factor of the voltage and the current...
Using remotely-sensed data for optimal field sampling
CSIR Research Space (South Africa)
Debba, Pravesh
2008-09-01
Full Text Available Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies and scientists have explored the use of remotely-sensed data as ancillary information to aid...
Modelling and Optimal Control of Typhoid Fever Disease with Cost-Effective Strategies.
Tilahun, Getachew Teshome; Makinde, Oluwole Daniel; Malonza, David
2017-01-01
We propose and analyze a compartmental nonlinear deterministic mathematical model for the typhoid fever outbreak and optimal control strategies in a community with varying population. The model is studied qualitatively using stability theory of differential equations and the basic reproductive number that represents the epidemic indicator is obtained from the largest eigenvalue of the next-generation matrix. Both local and global asymptotic stability conditions for disease-free and endemic equilibria are determined. The model exhibits a forward transcritical bifurcation and the sensitivity analysis is performed. The optimal control problem is designed by applying Pontryagin maximum principle with three control strategies, namely, the prevention strategy through sanitation, proper hygiene, and vaccination; the treatment strategy through application of appropriate medicine; and the screening of the carriers. The cost functional accounts for the cost involved in prevention, screening, and treatment together with the total number of the infected persons averted. Numerical results for the typhoid outbreak dynamics and its optimal control revealed that a combination of prevention and treatment is the best cost-effective strategy to eradicate the disease.
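The basic reproduction number mentioned here, obtained as the largest eigenvalue of the next-generation matrix, is a short computation once the new-infection matrix F and the transfer matrix V are in hand. The matrices below are hypothetical two-compartment placeholders (e.g., infectious and carrier classes), not the paper's model:

```python
import numpy as np

# Hypothetical next-generation setup: F holds rates of new infections,
# V holds transfer rates between the infected compartments.
F = np.array([[0.4, 0.1],
              [0.0, 0.0]])
V = np.array([[0.3, 0.0],
              [-0.2, 0.25]])

ngm = F @ np.linalg.inv(V)                  # next-generation matrix F V^{-1}
R0 = max(abs(np.linalg.eigvals(ngm)))       # spectral radius = R0
```

With these placeholder rates R0 comes out above 1, i.e., the disease-free equilibrium would be unstable; control strategies such as the paper's prevention and screening act by pushing the entries of F down (or V's removal rates up) until the spectral radius drops below 1.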
International Nuclear Information System (INIS)
Brum, Daniel M.; Lima, Claudio F.; Robaina, Nicolle F.; Fonseca, Teresa Cristina O.; Cassella, Ricardo J.
2011-01-01
The present paper reports the optimization of Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS), employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized with respect to the experimental conditions for emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple-response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as the dependent variable. The three factors in the optimization were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of that solution. At optimum conditions, satisfactory results were obtained with an emulsion formed by mixing 4 mL of sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure; recovery rates in the ranges of 88-105%, 94-118% and 95-120% were verified for Cu, Fe and Pb, respectively.
Energy Technology Data Exchange (ETDEWEB)
Brum, Daniel M.; Lima, Claudio F. [Departamento de Quimica, Universidade Federal de Vicosa, A. Peter Henry Rolfs s/n, Vicosa/MG, 36570-000 (Brazil); Robaina, Nicolle F. [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil); Fonseca, Teresa Cristina O. [Petrobras, Cenpes/PDEDS/QM, Av. Horacio Macedo 950, Ilha do Fundao, Rio de Janeiro/RJ, 21941-915 (Brazil); Cassella, Ricardo J., E-mail: cassella@vm.uff.br [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil)
2011-05-15
The present paper reports the optimization of Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS), employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized with respect to the experimental conditions for emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple-response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as the dependent variable. The three factors in the optimization were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of that solution. At optimum conditions, satisfactory results were obtained with an emulsion formed by mixing 4 mL of sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure; recovery rates in the ranges of 88-105%, 94-118% and 95-120% were verified for Cu, Fe and Pb, respectively.
Control strategies for wind farm power optimization: LES study
Ciri, Umberto; Rotea, Mario; Leonardi, Stefano
2017-11-01
Turbines in wind farms operate in off-design conditions as wake interactions occur for particular wind directions. Advanced wind farm control strategies aim at coordinating and adjusting turbine operations to mitigate power losses in such conditions. Coordination is achieved on upstream turbines by controlling either the wake intensity, through the blade pitch angle or the generator torque, or the wake direction, through yaw misalignment. Downstream turbines can be adapted to work in waked conditions and limit power losses, using the blade pitch angle or the generator torque. As wind conditions in wind farm operations may change significantly, it is difficult to determine and parameterize the variations of the coordinated optimal settings. An alternative is model-free control and optimization of wind farms, which does not require any parameterization and can track the optimal settings as conditions vary. In this work, we employ a model-free optimization algorithm, extremum-seeking control, to find the optimal set-points of generator torque, blade pitch and yaw angle for a three-turbine configuration. Large-Eddy Simulations are used to provide a virtual environment to evaluate the performance of the control strategies under realistic, unsteady incoming wind. This work was supported by the National Science Foundation, Grants No. 1243482 (the WINDINSPIRE project) and IIP 1362033 (I/UCRC WindSTAR). TACC is acknowledged for providing computational time.
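Extremum-seeking control, the model-free algorithm named above, can be sketched in a minimal perturb-and-observe form. The study applies a more elaborate variant inside LES; the finite-difference loop and quadratic objective below are only an illustration of the idea, with all names and values invented:

```python
def extremum_seek(J, u0, a=0.05, k=0.5, iters=200):
    """Minimal perturb-and-observe extremum seeking: estimate the local
    gradient of the measured objective J by symmetric perturbation,
    then step uphill. No model of J is required."""
    u = u0
    for _ in range(iters):
        grad = (J(u + a) - J(u - a)) / (2.0 * a)  # finite-difference gradient
        u += k * grad
    return u

# Toy power curve with an optimum (unknown to the controller) at u = 2.0
u_opt = extremum_seek(lambda u: -(u - 2.0) ** 2, u0=0.0)
```

In practice the objective is a measured farm power signal rather than an analytic function, and sinusoidal dither replaces the two-sided probe, but the estimate-gradient-then-step structure is the same.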
Optimal football strategies: AC Milan versus FC Barcelona
Papahristodoulou, Christos
2012-01-01
In a recent UEFA Champions League game between AC Milan and FC Barcelona, played in Italy (final score 2-3), the collected match statistics, classified into four offensive and two defensive strategies, were in favour of FC Barcelona (by 13 versus 8 points). The aim of this paper is to examine to what extent the optimal game strategies derived from some deterministic, possibilistic, stochastic and fuzzy LP models would improve the payoff of AC Milan at the cost of FC Barcelona.
Research of Ant Colony Optimized Adaptive Control Strategy for Hybrid Electric Vehicle
Directory of Open Access Journals (Sweden)
Linhui Li
2014-01-01
Full Text Available The energy management control strategy of a hybrid electric vehicle has a great influence on vehicle fuel consumption, with electric motors added to the traditional vehicle power system. As real driving cycles are uncertain, dynamic driving cycles will have an impact on a control strategy's energy-saving effect. In order to better adapt to dynamic driving cycles, the control strategy should have the ability to recognize the real-time driving cycle and adaptively switch to the corresponding off-line optimized control parameters. In this paper, four types of representative driving cycles are constructed based on actual vehicle operating data, and a fuzzy driving cycle recognition algorithm is proposed to recognize the type of the actual driving cycle online. Then, based on the equivalent fuel consumption minimization strategy, an ant colony optimization algorithm is utilized to search for the optimal control parameters, the "charge and discharge equivalent factors", for each type of representative driving cycle. Finally, simulation experiments are conducted to verify the accuracy of the proposed fuzzy recognition algorithm and the validity of the designed control strategy optimization method.
Stability Analysis and Optimal Control Strategy for Prevention of Pine Wilt Disease
Directory of Open Access Journals (Sweden)
Kwang Sung Lee
2014-01-01
Full Text Available We propose a mathematical model of pine wilt disease (PWD), which is caused by pine sawyer beetles carrying the pinewood nematode (PWN). We calculate the basic reproduction number R0 and investigate the stability of the disease-free and endemic equilibria in the given mathematical model. We show that the stability of the equilibria in the proposed model can be controlled through the basic reproduction number R0. We then discuss effective optimal control strategies for the proposed PWD model. We establish the existence of an optimal control for the problem, and then apply both analytical and numerical techniques to demonstrate effective control methods to prevent the transmission of PWD. To this end, we apply two control strategies: tree injection of nematicide and the eradication of adult beetles through aerial pesticide spraying. Optimal prevention strategies can be determined by solving the corresponding optimality system. Numerical simulations of the optimal control problem, using a set of reasonable parameter values, suggest that reducing the number of pine sawyer beetles is more effective than the tree-injection strategy for controlling the spread of PWD.
Sturrock, Hugh J W; Gething, Pete W; Ashton, Ruth A; Kolaczinski, Jan H; Kabatereine, Narcis B; Brooker, Simon
2011-09-01
In schistosomiasis control, there is a need to geographically target treatment to populations at high risk of morbidity. This paper evaluates alternative sampling strategies for surveys of Schistosoma mansoni to target mass drug administration in Kenya and Ethiopia. Two main designs are considered: lot quality assurance sampling (LQAS) of children from all schools; and a geostatistical design that samples a subset of schools and uses semi-variogram analysis and spatial interpolation to predict prevalence in the remaining unsurveyed schools. Computerized simulations are used to investigate the performance of the sampling strategies in correctly classifying schools according to treatment needs, and their cost-effectiveness in identifying high-prevalence schools. LQAS performs better than geostatistical sampling in correctly classifying schools, but at a higher cost per high-prevalence school correctly classified. It is suggested that the optimal surveying strategy for S. mansoni needs to take into account the goals of the control programme and the financial and drug resources available.
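An LQAS design of the kind evaluated here classifies each school by comparing the number of positives in a fixed sample against a decision threshold. The binomial error calculation behind such a rule can be sketched as follows; the sample size, threshold, and prevalence cut-offs are invented for illustration, not taken from the paper:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def lqas_errors(n, d, p_low, p_high):
    """Rule: classify a school as high-prevalence if more than d of n
    sampled children test positive. Returns (alpha, beta):
    alpha = P(false 'high' | true prevalence p_low),
    beta  = P(false 'low'  | true prevalence p_high)."""
    alpha = 1.0 - binom_cdf(d, n, p_low)
    beta = binom_cdf(d, n, p_high)
    return alpha, beta

# Hypothetical design: sample 15 children, flag the school if > 3 positive
alpha, beta = lqas_errors(n=15, d=3, p_low=0.10, p_high=0.40)
```

Sweeping (n, d) over candidate designs and picking the cheapest pair meeting target error rates is the usual way such lot-sampling plans are chosen.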
Designing optimal sampling schemes for field visits
CSIR Research Space (South Africa)
Debba, Pravesh
2008-10-01
Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...
Optimization Under Uncertainty for Wake Steering Strategies
Energy Technology Data Exchange (ETDEWEB)
Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Annoni, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fleming, Paul A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ning, Andrew [Brigham Young University
2017-08-03
Offsetting turbines' yaw orientations from the incoming wind is a powerful tool that may be leveraged to reduce undesirable wake effects on downstream turbines. First, we examine a simple two-turbine case to gain intuition as to how inflow-direction uncertainty affects the optimal solution. The turbines are modeled with unidirectional inflow such that one turbine directly wakes the other, using ten rotor diameters of spacing. We perform optimization under uncertainty (OUU) via a parameter sweep of the front turbine. The OUU solution generally prefers less steering. We then perform this optimization for a 60-turbine wind farm with unidirectional inflow, varying the degree of inflow uncertainty and approaching this OUU problem by nesting a polynomial chaos expansion uncertainty quantification routine within an outer optimization. We examined how different levels of uncertainty in the inflow direction affect the ratio of the expected values of the deterministic and OUU solutions for steering strategies in the large wind farm, assuming the directional uncertainty used to reach the OUU solution; this ratio is defined as the value of the stochastic solution (VSS).
Optimal robust control strategy of a solid oxide fuel cell system
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with the operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set-points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show that the proposed optimal robust control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.
Implementation of an optimal control energy management strategy in a hybrid truck
Mullem, D. van; Keulen, T. van; Kessels, J.T.B.A.; Jager, B. de; Steinbuch, M.
2010-01-01
Energy Management Strategies for hybrid powertrains control the power split, between the engine and electric motor, of a hybrid vehicle, with fuel consumption or emission minimization as objective. Optimal control theory can be applied to rewrite the optimization problem to an optimization
An Optimal Operating Strategy for Battery Life Cycle Costs in Electric Vehicles
Directory of Open Access Journals (Sweden)
Yinghua Han
2014-01-01
Full Text Available The impact of petroleum-based vehicles on the environment, and the cost and availability of fuel, have led to increased interest in electric vehicles as a means of transportation. The battery is a major component in an electric vehicle, and the economic viability of these vehicles depends on the availability of cost-effective batteries. This paper presents a generalized formulation for determining the optimal operating strategy and cost optimization for the battery, under the assumption that battery deterioration is stochastic. The proposed operating strategy for the battery is formulated as a nonlinear optimization problem considering reliability and the number of failures, and an explicit expression for the average cost rate is derived over the battery lifetime. Results show that the proposed operating strategy enhances availability and reliability at low cost.
Directory of Open Access Journals (Sweden)
Carmen Fullana-Belda
2013-10-01
Full Text Available Traditional uneven-aged forest management seeks a balance between equilibrium stand structure and economic profitability, which often leads to harvesting strategies concentrated in the larger diameter classes. The sustainability (i.e., population persistence over time) and the influence of such economically optimal strategies on the equilibrium position of a stand (given by the stable diameter distribution) have not been sufficiently investigated in the prior forest literature. This article therefore proposes a discrete optimal control model to analyze the sustainability and stability of the economically optimal harvesting strategies for uneven-aged Pinus nigra stands. The model relies on an objective function that integrates financial data of harvesting operations with a projection matrix model describing the population dynamics. The model solution reveals the optimal management schedules for a wide variety of scenarios. To measure the distance between the stable diameter distribution and the economically optimal harvesting strategy distribution, the model uses Keyfitz's delta, which returns high values for all the scenarios and thus suggests that these economically optimal harvesting strategies have a destabilizing influence on the equilibrium positions. Moreover, the economically optimal harvesting strategies were unsustainable for all the scenarios.
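Keyfitz's delta, the distance measure used above, is half the L1 distance between two normalized distributions (0 for identical distributions, 1 for fully disjoint ones); a minimal sketch, with made-up diameter-class vectors:

```python
def keyfitz_delta(x, y):
    """Keyfitz's delta between two non-negative vectors, each first
    normalized to sum to 1: 0.5 * sum_i |x_i - y_i|."""
    sx, sy = float(sum(x)), float(sum(y))
    return 0.5 * sum(abs(a / sx - b / sy) for a, b in zip(x, y))

# Hypothetical stable vs. harvested diameter-class distributions
delta = keyfitz_delta([1, 1, 2], [2, 1, 1])
```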
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
In order to utilize the distributed character of sensors, distributed machine learning has become the mainstream approach, but differences in sensor computing capability and network delays greatly influence the accuracy and convergence rate of the machine learning model. This paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
Distributed Strategy for Optimal Dispatch of Unbalanced Three-Phase Islanded Microgrids
DEFF Research Database (Denmark)
Vergara Barrios, Pedro Pablo; Rey-López, Juan Manuel; Shaker, Hamid Reza
2018-01-01
This paper presents a distributed strategy for the optimal dispatch of islanded microgrids, modeled as unbalanced three-phase electrical distribution systems (EDS). To set the dispatch of the distributed generation (DG) units, an optimal generation problem is stated and solved distributively based......-phase microgrid. According to the obtained results, the proposed strategy achieves a lower cost solution when compared with a centralized approach based on a static droop framework, with a considerable reduction on the communication system complexity. Additionally, it corrects the mismatch between generation...
Optimal Claiming Strategies in Bonus Malus Systems and Implied Markov Chains
Directory of Open Access Journals (Sweden)
Arthur Charpentier
2017-11-01
Full Text Available In this paper, we investigate the impact of the accident reporting strategy of drivers within a Bonus-Malus system. We exhibit the induced modification of the corresponding class-level transition matrix and derive the optimal reporting strategy for rational drivers. The hunger for bonuses induces optimal thresholds under which drivers do not claim their losses. Mathematical properties of the induced class-level process are studied. A convergent numerical algorithm is provided for computing such thresholds, and realistic numerical applications are discussed.
Sampling strategies to measure the prevalence of common recurrent infections in longitudinal studies
Directory of Open Access Journals (Sweden)
Luby Stephen P
2010-08-01
Full Text Available Abstract Background Measuring recurrent infections such as diarrhoea or respiratory infections in epidemiological studies is a methodological challenge. Problems in measuring the incidence of recurrent infections include the episode definition, recall error, and the logistics of close follow-up. Longitudinal prevalence (LP), the proportion of time ill estimated by repeated prevalence measurements, is an alternative measure to the incidence of recurrent infections. In contrast to incidence, which usually requires continuous sampling, LP can be measured at intervals. This study explored how many more participants are needed for infrequent sampling to achieve the same study power as frequent sampling. Methods We developed a set of four empirical simulation models representing low- and high-risk settings with short or long episode durations. The model was used to evaluate different sampling strategies with different assumptions on recall period and recall error. Results The model identified three major factors that influence sampling strategies: (1) the clustering of episodes in individuals; (2) the duration of episodes; (3) the positive correlation between an individual's disease incidence and episode duration. Intermittent sampling (e.g., 12 times per year) often requires only a slightly larger sample size compared to continuous sampling, especially in cluster-randomized trials. The collection of period prevalence data can lead to highly biased effect estimates if the exposure variable is associated with episode duration. To maximize study power, recall periods of 3 to 7 days may be preferable over shorter periods, even if this leads to inaccuracy in the prevalence estimates. Conclusion Choosing the optimal approach to measure recurrent infections in epidemiological studies depends on the setting, the study objectives, study design and budget constraints. Sampling at intervals can contribute to making epidemiological studies and trials more efficient, valid
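The continuous-versus-intermittent comparison described above can be illustrated with a small simulation in the spirit of the empirical models; the episode-start probability and mean duration below are arbitrary illustration values, not the paper's fitted parameters:

```python
import random

def simulate_illness(n_people=500, days=360, p_start=0.02, mean_dur=4.0, seed=1):
    """Return a per-person daily illness indicator matrix, with episodes
    starting at random and lasting an exponentially distributed duration."""
    rng = random.Random(seed)
    ill = [[0] * days for _ in range(n_people)]
    for person in ill:
        d = 0
        while d < days:
            if rng.random() < p_start:          # a new episode starts today
                dur = 1 + int(rng.expovariate(1.0 / mean_dur))
                for t in range(d, min(d + dur, days)):
                    person[t] = 1
                d += dur
            else:
                d += 1
    return ill

def longitudinal_prevalence(ill, visit_days):
    """Proportion of person-visits found ill on the given visit days."""
    obs = [person[t] for person in ill for t in visit_days]
    return sum(obs) / len(obs)

ill = simulate_illness()
lp_daily = longitudinal_prevalence(ill, range(360))           # continuous
lp_monthly = longitudinal_prevalence(ill, range(0, 360, 30))  # 12 visits/year
```

In this toy model the monthly estimate tracks the continuous one closely; the paper's point is that adding clustering of episodes, duration-incidence correlation, and recall error changes how many extra participants intermittent designs need.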
Directory of Open Access Journals (Sweden)
Fei Wang
2017-11-01
Full Text Available The optimal dispatching model for a stand-alone microgrid (MG) is of great importance to its operational reliability and economy. This paper aims at addressing the difficulties in improving operational economy and maintaining the power balance under uncertain load demand and renewable generation, which can be even worse in such abnormal conditions as storms or abnormally low or high temperatures. A new two-time-scale multi-objective optimization model, including day-ahead cursory scheduling and real-time scheduling for finer adjustments, is proposed to optimize the operational cost, load-shedding compensation and environmental benefit of a stand-alone MG through controllable load (CL) and multiple distributed generations (DGs). The main novelty of the proposed model is that the synergetic response of CL and the energy storage system (ESS) in real-time scheduling quickly offsets the operational uncertainty, and the improved dispatch strategy for combined cooling-heating-power (CCHP) enhances system economy while comfort is guaranteed. An improved algorithm, the Search Improvement Process-Chaotic Optimization-Particle Swarm Optimization-Elite Retention Strategy (SIP-CO-PSO-ERS) algorithm, with strong searching capability and fast convergence speed, is presented to deal with the increased errors between prior predictions and the actual renewable generation and load. Four typical scenarios are designed according to the combinations of day types (work day or weekend) and weather categories (sunny or rainy) to verify the performance of the presented dispatch strategy. The simulation results show that the proposed two-time-scale model and the SIP-CO-PSO-ERS algorithm exhibit better performance in adaptability, convergence speed and search ability than conventional methods for stand-alone MG operation.
An Optimal Portfolio and Capital Management Strategy for Basel III Compliant Commercial Banks
Directory of Open Access Journals (Sweden)
Grant E. Muller
2014-01-01
Full Text Available We model a Basel III compliant commercial bank that operates in a financial market consisting of a treasury security, a marketable security, and a loan, and we regard the interest rate in the market as stochastic. We find the investment strategy that maximizes an expected utility of the bank's asset portfolio at a future date. This entails obtaining formulas for the optimal amounts of bank capital invested in different assets. Based on the optimal investment strategy, we derive a model for the Capital Adequacy Ratio (CAR), which the Basel Committee on Banking Supervision (BCBS) introduced as a measure against banks' susceptibility to failure. Furthermore, we consider the optimal investment strategy subject to a constant CAR at the minimum prescribed level. We derive a formula for the bank's asset portfolio at the constant (minimum) CAR value and present numerical simulations of different scenarios. Under the optimal investment strategy, the CAR is above the minimum prescribed level. The value of the asset portfolio is improved if the CAR is at its (constant) minimum value.
Efficient sampling of complex network with modified random walk strategies
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies: choosing-seed-node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, respectively. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions are reached with these three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within limited steps. Thirdly, all the degree distributions of the subnets are slightly biased toward the high-degree side; however, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, salient characteristics such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
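The no-retracing (NR) strategy described above can be sketched as a walk that never steps straight back along the edge it just traversed, unless that is the only option. The graph and parameters below are toy choices for illustration:

```python
import random

def nr_random_walk(adj, start, steps, seed=0):
    """No-retracing random walk on an adjacency-list graph: the previous
    node is excluded from the candidates unless it is the only neighbour."""
    rng = random.Random(seed)
    path, prev, cur = [start], None, start
    for _ in range(steps):
        choices = [v for v in adj[cur] if v != prev] or adj[cur]
        nxt = rng.choice(choices)
        path.append(nxt)
        prev, cur = cur, nxt
    return path

# Toy graph: a cycle of 5 nodes, each with exactly two neighbours
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
walk = nr_random_walk(adj, start=0, steps=20)
```

On the cycle, excluding the previous node always leaves exactly one candidate, so the walk sweeps the ring without ever retracing an edge.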
Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP
Roshani, E.; Berg, A. A.; Lindsay, J.
2013-12-01
The Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6-week period for soil moisture and several other parameters, simultaneous with remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: A) sampling sites should be accessible to the crew on the ground, B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is included to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture categories
A Single-Degree-of-Freedom Energy Optimization Strategy for Power-Split Hybrid Electric Vehicles
Directory of Open Access Journals (Sweden)
Chaoying Xia
2017-07-01
Full Text Available This paper presents a single-degree-of-freedom energy optimization strategy to solve the energy management problem existing in power-split hybrid electric vehicles (HEVs). The proposed strategy is based on a quadratic performance index, which is innovatively designed to simultaneously restrict the fluctuation of the battery state of charge (SOC) and reduce fuel consumption. An extended quadratic optimal control problem is formulated by approximating the fuel consumption rate as a quadratic polynomial of engine power. The approximated optimal control law is obtained by utilizing the solution properties of the Riccati equation and the adjoint equation. It is easy to implement in real time, and its engineering significance is explained in detail. In order to validate the effectiveness of the proposed strategy, a forward-facing vehicle simulation model is established based on the ADVISOR software (Version 2002, National Renewable Energy Laboratory, Golden, CO, USA). The simulation results show that there is only a small difference in fuel consumption between the proposed strategy and the Pontryagin's minimum principle (PMP)-based global optimal strategy, and that the proposed strategy also exhibits good adaptability under different initial battery SOC, cargo mass and road slope conditions.
Directory of Open Access Journals (Sweden)
Mun-Kyeom Kim
2017-09-01
Full Text Available This study introduces a frequency regulation strategy to enable the participation of wind turbines with permanent magnet synchronous generators (PMSGs). The optimal strategy focuses on developing the frequency support capability of PMSGs connected to the power system. Active power control is performed using maximum power point tracking (MPPT) and de-loaded control to supply the required power reserve following a disturbance. A kinetic energy (KE) reserve control is developed to enhance the frequency regulation capability of wind turbines. Coordination with the de-loaded control prevents instability in the PMSG wind system due to excessive KE discharge. A KE optimization method that maximizes the sum of the KE reserves at wind farms is also adopted to determine the de-loaded power reference for each PMSG wind turbine using the particle swarm optimization (PSO) algorithm. To validate the effectiveness of the proposed optimal control and operation strategy, three different case studies are conducted using the PSCAD/EMTDC simulation tool. The results demonstrate that the optimal strategy enhances the frequency support contribution from PMSG wind turbines.
AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange
International Nuclear Information System (INIS)
Gledhill, John M.; Walters, Benjamin T.; Wand, A. Joshua
2009-01-01
The Cartesian-sampled three-dimensional HNCO experiment is inherently limited in time resolution and sensitivity for the real-time measurement of protein hydrogen exchange. This is largely overcome by use of the radial HNCO experiment, which employs optimized sampling angles. The large data storage and processing requirements of three-dimensional data present a significant practical limitation, which is largely overcome by taking advantage of the inherent capability of the 2D-FT to process selected regions of frequency space without artifact or limitation. Decomposition of angle spectra into positive and negative ridge components provides increased resolution and allows statistical averaging of intensity and therefore increased precision. Strategies for averaging ridge cross sections within and between angle spectra are developed to allow further statistical approaches for increasing the precision of measured hydrogen occupancy. Intensity artifacts potentially introduced by over-pulsing are effectively eliminated by use of the BEST approach.
Determination of optimal samples for robot calibration based on error similarity
Directory of Open Access Journals (Sweden)
Tian Wei
2015-06-01
Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sampling point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.
Niioka, Takenori
2011-03-01
Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotypes had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples for patients' acceptance. This article reviews the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ)=9.24 x C(6h)+2638.03, AUC(LPZ)=12.32 x C(6h)+3276.09 and AUC(RPZ)=1.39 x C(3h)+7.17 x C(6h)+344.14, respectively. In order to optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetic properties of its enantiomers. The best prediction formula using the fewest sampling points (one point) was AUC(racemic LPZ)=6.5 x C(3h) of (R)-LPZ+13.7 x C(3h) of (S)-LPZ-9917.3 x G1-14387.2 x G2+7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). These strategies, plasma concentration monitoring at one or two time points, might be more suitable for AUC estimation than reference to CYP2C19 genotypes, particularly in the case of coadministration of CYP mediators.
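The published prediction formulas translate directly into code; a minimal sketch (concentrations must be in the units used to fit the formulas):

```python
def auc_opz(c_6h):
    """Omeprazole AUC estimate from the 6 h concentration (best LSS formula)."""
    return 9.24 * c_6h + 2638.03

def auc_lpz(c_6h):
    """Lansoprazole AUC estimate from the 6 h concentration."""
    return 12.32 * c_6h + 3276.09

def auc_rpz(c_3h, c_6h):
    """Rabeprazole AUC estimate from the 3 h and 6 h concentrations."""
    return 1.39 * c_3h + 7.17 * c_6h + 344.14
```

Each function is a straight transcription of the corresponding regression equation, so predictions are only as good as the population the formulas were fitted on.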
Online gaming for learning optimal team strategies in real time
Hudas, Gregory; Lewis, F. L.; Vamvoudakis, K. G.
2010-04-01
This paper first presents an overall view of dynamical decision-making in teams, both cooperative and competitive. Strategies for team decision problems, including optimal control, zero-sum two-player games (H-infinity control) and so on, are normally solved off-line by solving associated matrix equations such as the Riccati equation. However, using that approach, players cannot change their objectives online in real time without calling for a completely new off-line solution for the new strategies. Therefore, in this paper we give a method for learning optimal team strategies online in real time as team dynamical play unfolds. In the linear quadratic regulator case, for instance, the method learns the Riccati equation solution online without ever solving the Riccati equation. This allows for truly dynamical team decisions where objective functions can change in real time and the system dynamics can be time-varying.
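For contrast with the online method described above, the conventional off-line route (iterating the Riccati recursion to a fixed point) can be sketched for a scalar discrete-time LQR problem; all parameter values are illustrative:

```python
# Scalar discrete-time LQR: value iteration converging to the solution of
# the algebraic Riccati equation p = q + a^2 p - (a b p)^2 / (r + b^2 p).
a, b, q, r = 0.9, 1.0, 1.0, 1.0   # illustrative system and cost weights
p = 0.0
for _ in range(200):
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)

# One more application of the map measures the remaining residual.
p_new = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
k = a * b * p / (r + b * b * p)   # optimal state feedback gain, u = -k x
```

For these parameters the fixed point satisfies the quadratic p^2 - 0.81 p - 1 = 0; the online method in the paper approximates the same solution from observed play instead of iterating the model equations.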
An advanced Lithium-ion battery optimal charging strategy based on a coupled thermoelectric model
International Nuclear Information System (INIS)
Liu, Kailong; Li, Kang; Yang, Zhile; Zhang, Cheng; Deng, Jing
2017-01-01
Lithium-ion batteries are widely adopted as the power supplies for electric vehicles. A key but challenging issue is to achieve optimal battery charging while taking into account various constraints for safe, efficient and reliable operation. In this paper, a triple-objective function is first formulated for battery charging based on a coupled thermoelectric model. An advanced optimal charging strategy is then proposed to develop the optimal constant-current-constant-voltage (CCCV) charge current profile, which gives the best trade-off among three conflicting but important objectives for battery management. To be specific, a coupled thermoelectric battery model is first presented. Then, a triple-objective function consisting of three objectives, namely charging time, energy loss, and temperature rise (both interior and surface), is proposed. Heuristic methods such as teaching-learning-based optimization (TLBO) and particle swarm optimization (PSO) are applied to optimize the triple-objective function, and their optimization performances are compared. The impacts of the weights of the different terms in the objective function are then assessed. Experimental results show that the proposed optimal charging strategy is capable of offering effective optimal charging current profiles and a proper trade-off among the conflicting objectives. Further, the proposed optimal charging strategy can be easily extended to other battery types.
Classifier-Guided Sampling for Complex Energy System Optimization
Energy Technology Data Exchange (ETDEWEB)
Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
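The filtering idea can be illustrated with a toy stand-in, where a per-bit frequency model of previously "good" designs plays the role of the Bayesian network classifier and only candidates scoring at least as well as a uniform baseline are passed to the expensive objective. The bit-string encoding, toy objective, and threshold are illustrative, not the report's implementation:

```python
import random

random.seed(0)
N_BITS = 12

def objective(x):
    # stand-in for an expensive simulation (toy: maximize number of ones)
    return sum(x)

def bit_freqs(good):
    # per-bit frequency of 1s among "good" designs: a crude stand-in
    # for the Bayesian network classifier used by CGS
    return [sum(g[i] for g in good) / len(good) for i in range(N_BITS)]

def promising(x, freqs):
    # independence-model likelihood of x under the "good" class,
    # compared against a uniform-random baseline
    like = 1.0
    for xi, p in zip(x, freqs):
        like *= p if xi else 1.0 - p
    return like >= 0.5 ** N_BITS

# evaluate an initial random population and keep the better half as "good"
pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(40)]
pop.sort(key=objective, reverse=True)
freqs = bit_freqs(pop[:20])

evaluated = 0
best = objective(pop[0])
for _ in range(200):
    cand = [random.randint(0, 1) for _ in range(N_BITS)]
    if promising(cand, freqs):       # classifier filter: skip poor designs
        evaluated += 1               # only promising candidates cost an eval
        best = max(best, objective(cand))
```

Only the candidates that pass the filter incur an objective evaluation, which is where the claimed time savings come from when the objective is a costly simulation.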
Optimal portfolio strategies under a shortfall constraint | Akume ...
African Journals Online (AJOL)
We dynamically impose a shortfall constraint in terms of Tail Conditional Expectation on the portfolio selection problem in continuous time, in order to obtain optimal strategies. The financial market is assumed to comprise n risky assets driven by geometric Brownian motion and one risk-free asset. The method of Lagrange ...
Optimal Coordinated Strategy Analysis for the Procurement Logistics of a Steel Group
Directory of Open Access Journals (Sweden)
Lianbo Deng
2014-01-01
Full Text Available This paper focuses on the optimization of an internal coordinated procurement logistics system in a steel group and the decision on the coordinated procurement strategy by minimizing the logistics costs. Considering the coordinated procurement strategy and the procurement logistics costs, the aim of the optimization model was to maximize the degree of quality satisfaction and to minimize the procurement logistics costs. The model was transformed into a single-objective model and solved using a simulated annealing algorithm. In the algorithm, the supplier of each subsidiary was selected according to the evaluation result for independent procurement. Finally, the effect of different parameters on the coordinated procurement strategy was analysed. The results showed that the coordinated strategy can clearly save procurement costs; that the strategy appears to be more cooperative when the quality requirement is less strict; and that the coordination costs have a strong effect on the coordinated procurement strategy.
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function; more accurate metamodels are then obtained by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
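A one-dimensional sketch of such a loop (fit a radial-basis-function metamodel, add the metamodel's minimizer as the next sample, refit) conveys the idea; the Gaussian kernel, test function, and stopping rule are illustrative, and the paper's method also uses metamodel extrema and density-function minima:

```python
import numpy as np

def rbf_fit(x, y, eps=1.0):
    # interpolation weights for a Gaussian radial basis function model
    phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    return np.linalg.solve(phi, y)

def rbf_eval(xq, x, w, eps=1.0):
    phi = np.exp(-(eps * (xq[:, None] - x[None, :])) ** 2)
    return phi @ w

def f(t):
    return np.sin(3.0 * t) + t              # stand-in for an expensive simulation

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # initial sample set
grid = np.linspace(0.0, 2.0, 401)
for _ in range(5):                          # sequential refinement loop
    w = rbf_fit(x, f(x))
    xnew = grid[np.argmin(rbf_eval(grid, x, w))]   # metamodel minimizer
    if np.min(np.abs(x - xnew)) < 0.05:     # avoid near-duplicate samples
        break
    x = np.sort(np.append(x, xnew))

w = rbf_fit(x, f(x))
max_err = np.max(np.abs(rbf_eval(grid, x, w) - f(grid)))
```

Each pass spends one expensive evaluation at the point the current metamodel considers most promising, concentrating samples near the optimum.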
Long-term damage management strategies for optimizing steam generator performance
International Nuclear Information System (INIS)
Egan, G.R.; Besuner, P.M.; Fox, J.H.; Merrick, E.A.
1991-01-01
Minimizing the long-term impact of steam generator operating, maintenance, outage, and replacement costs is the goal of all pressurized water reactor utilities. Recent research results have led to deterministic controls that may be implemented to optimize steam generator performance and to minimize damage accumulation. The real dilemma utilities encounter is the decision process in the face of uncertain data. Some of these decisions involve the frequency and extent of steam generator eddy current tube inspections; the definition of operating conditions (T(hot), T(cold)) to minimize the rate of corrosion reactions; and the imposition of strict water quality management guidelines. With finite resources, how can a utility decide which damage management strategy provides the most return on its investment? Aptech Engineering Services, Inc. (APTECH) developed a damage management strategy that starts from a deterministic analysis of a current problem: primary water stress corrosion cracking (PWSCC). The strategy involves a probabilistic treatment that results in long-term performance optimization. By optimization, we refer to minimizing the total cost of operating the steam generator, including the present value costs of operations, maintenance, outages, and replacements. An example of the application of this methodology is presented. (author)
Directory of Open Access Journals (Sweden)
Jingxian Hao
2016-11-01
Full Text Available The rule-based logic threshold control strategy has been frequently used in energy management strategies for hybrid electric vehicles (HEVs) owing to its convenience in adjusting parameters, real-time performance, stability, and robustness. However, the logic threshold control parameters cannot usually ensure the best vehicle performance across different driving cycles and conditions. For this reason, the optimization of key parameters is important to improve the fuel economy, dynamic performance, and drivability. In principle, this is a multiparameter nonlinear optimization problem. The logic threshold energy management strategy for an all-wheel-drive HEV is comprehensively analyzed and developed in this study. Seven key parameters to be optimized are extracted. The optimization model of the key parameters is proposed from the perspective of fuel economy. The global optimization method, the DIRECT algorithm, which has good real-time performance, low computational burden, and rapid convergence, is selected to optimize the extracted key parameters globally. The results show that with the optimized parameters, the engine operates more in the high-efficiency range, resulting in fuel savings of 7% compared with the non-optimized parameters. The proposed method can provide guidance for calibrating the parameters of the vehicle energy management strategy from the perspective of fuel economy.
Optimal strategy for selling on group-buying website
Directory of Open Access Journals (Sweden)
Xuan Jiang
2014-09-01
Full Text Available Purpose: The purpose of this paper is to help business marketers with offline channels to make decisions on whether to sell through group-buying (GB) websites and how to set the online price in coordination with the maximum deal size on GB websites. Design/methodology/approach: Considering the deal structure of GB websites, especially the service fee and the minimum deal size required by GB websites, the advertising effect of selling on GB websites, and the interaction between online and offline markets, an analytical model is built to derive the optimal online price and maximum deal size for sellers selling through a GB website. This paper aims to answer four research questions: (1) How should a decision on maximum deal size be made in coordination with the deal price? (2) Will selling on GB websites always be better than staying with the offline channel only? (3) What kind of products are more appropriate to sell on GB websites? (4) How could a GB website operator induce sellers to offer deep discounts in GB deals? Findings and Originality/value: This paper obtains optimal strategies for sellers selling on a GB website and finds that: even if a seller has sufficient capacity, he/she may still set a maximum deal size on the GB deal to take advantage of the Advertisement with Limited Availability (ALA) effect; selling through a GB website may not bring a higher profit than selling only through the offline channel when the GB site has a small consumer base and/or there is a big overlap between the online and offline markets; low-margin products are more suitable for being sold online with ALA strategies (LP-ALA or HP-ALA) than high-margin ones; a GB site operator could set a small minimum deal size to induce deep discounts from the sellers selling through GB deals. Research limitations/implications: The present study assumed that the demand function is determinate and linear. It will be interesting to study how stochastic demand and a more general demand function affect the optimal
Tank waste remediation system optimized processing strategy with an altered treatment scheme
International Nuclear Information System (INIS)
Slaathaug, E.J.
1996-03-01
This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy with an altered treatment scheme performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility
Optimal Portfolio Strategy under Rolling Economic Maximum Drawdown Constraints
Directory of Open Access Journals (Sweden)
Xiaojian Yu
2014-01-01
Full Text Available This paper deals with the problem of optimal portfolio strategy under the constraints of rolling economic maximum drawdown. A more practical strategy is developed by using the rolling Sharpe ratio to compute the allocation proportion, in contrast to existing models. Besides, another novel strategy named the “REDP strategy” is further proposed, which replaces the rolling economic drawdown of the portfolio with the rolling economic drawdown of the risky asset. Simulation tests show that the REDP strategy can ensure that the portfolio satisfies the drawdown constraint and outperforms other strategies significantly. An empirical comparison of the performances of different strategies is carried out using 23 years of monthly data on SPTR, DJUBS, and the 3-month T-bill. The investment cases of a single risky asset and two risky assets are both studied in this paper. Empirical results indicate that the REDP strategy successfully controls the maximum drawdown within the given limit and performs best in both return and risk.
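The mechanics can be sketched as computing the rolling economic drawdown of an asset and scaling back the risky allocation as it approaches a limit; the price series, window, and scaling rule below are illustrative, not the paper's exact REDP formula:

```python
import numpy as np

def rolling_economic_drawdown(wealth, window):
    # REDD(t) = 1 - W(t) / max(W over the trailing window)
    w = np.asarray(wealth, dtype=float)
    redd = np.empty_like(w)
    for t in range(len(w)):
        peak = w[max(0, t - window + 1): t + 1].max()
        redd[t] = 1.0 - w[t] / peak
    return redd

def risky_fraction(redd, dmax):
    # simple illustrative rule: full exposure at zero drawdown,
    # zero exposure as the drawdown approaches the limit dmax
    return np.clip((dmax - redd) / (dmax * (1.0 - redd)), 0.0, 1.0)

prices = np.array([100, 104, 101, 97, 99, 103, 102, 98, 95, 100.0])
redd = rolling_economic_drawdown(prices, window=5)
alloc = risky_fraction(redd, dmax=0.10)
```

Because the drawdown is measured over a rolling window rather than from the all-time peak, the allocation recovers after old peaks age out of the window, which is the practical appeal of the rolling variant.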
Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D
2013-10-01
Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.
Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey
Directory of Open Access Journals (Sweden)
Mirel Lisa B.
2017-06-01
Full Text Available Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS). MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year’s NHIS. The main objective of this article is to examine how paradata from a prior survey can be used to develop a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder-to-reach population.
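A common form of such an allocation framework is cost-adjusted Neyman allocation, with substratum sample sizes proportional to N_h * S_h / sqrt(c_h); the sketch below uses made-up substrata (for example, response-propensity groups built from linked paradata):

```python
import math

def optimal_allocation(n_total, strata):
    # cost-adjusted Neyman allocation: n_h proportional to N_h * S_h / sqrt(c_h)
    # strata: list of (N_h, S_h, c_h) = population size, std. dev., unit cost
    weights = [N * S / math.sqrt(c) for N, S, c in strata]
    total = sum(weights)
    return [n_total * w / total for w in weights]

# hypothetical substrata, e.g. propensity groups derived from NHIS paradata
# (all values illustrative)
strata = [(5000, 1.2, 1.0), (3000, 0.8, 1.5), (2000, 2.0, 2.0)]
alloc = optimal_allocation(1000, strata)
```

Substrata that are larger, more variable, or cheaper to interview receive more of the fixed sample, which is how paradata-driven substratification can cut costs without shrinking the effective sample.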
A new inertia weight control strategy for particle swarm optimization
Zhu, Xianming; Wang, Hongbo
2018-04-01
Particle Swarm Optimization is a member of the swarm intelligence family of algorithms, inspired by the behavior of bird flocks. The inertia weight, one of the most important parameters of PSO, is crucial because it balances the algorithm's exploration and exploitation. This paper proposes a new inertia weight control strategy, and PSO with this new strategy is tested on four benchmark functions. The results show that the new strategy provides the PSO with better performance.
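Since the paper's specific control law is not reproduced in the abstract, the widely used linearly decreasing inertia weight serves as a stand-in below to show where such a strategy plugs into PSO; the benchmark and parameter values are illustrative:

```python
import random

random.seed(1)
DIM, N, ITERS = 2, 30, 300
W_MAX, W_MIN, C1, C2 = 0.9, 0.4, 2.0, 2.0

def sphere(x):
    # classic benchmark objective, minimum 0 at the origin
    return sum(v * v for v in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)[:]

for it in range(ITERS):
    # inertia weight schedule: the slot where a control strategy plugs in
    w = W_MAX - (W_MAX - W_MIN) * it / (ITERS - 1)
    for i in range(N):
        for d in range(DIM):
            vel[i][d] = (w * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]
```

A large early weight favors exploration of the search space, while the small late weight favors exploitation around the best-known region; an adaptive strategy replaces the fixed schedule in the `w = ...` line.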
A Bayesian sampling strategy for hazardous waste site characterization
International Nuclear Information System (INIS)
Skalski, J.R.
1987-12-01
Prior knowledge based on historical records or physical evidence often suggests the existence of a hazardous waste site. Initial surveys may provide additional or even conflicting evidence of site contamination. This article presents a Bayes sampling strategy that allocates sampling at a site using this prior knowledge. This sampling strategy minimizes the environmental risks of missing chemical or radionuclide hot spots at a waste site. The environmental risk is shown to be proportional to the size of the undetected hot spot or inversely proportional to the probability of hot spot detection. 12 refs., 2 figs
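The inverse relationship between risk and detection probability admits a compact illustration: for a square sampling grid of spacing G, a circular hot spot of radius r <= G/2 covers a grid node with probability pi*r^2/G^2, so prior knowledge can drive the grid spacing chosen per zone. Zone names, priors, and the 0.05 risk target below are illustrative:

```python
import math

def p_detect(radius, spacing):
    # probability a circular hot spot covers >= 1 node of a square grid
    # (area ratio; exact for radius <= spacing / 2, capped at 1 otherwise)
    return min(1.0, math.pi * radius ** 2 / spacing ** 2)

def expected_risk(prior, radius, spacing):
    # risk ~ prior probability of contamination x probability of a miss
    return prior * (1.0 - p_detect(radius, spacing))

# tighter grids where historical evidence makes contamination more likely
priors = {"zone_a": 0.8, "zone_b": 0.1}
spacings = {
    zone: next(g for g in (16, 8, 4, 2, 1)
               if expected_risk(prior, radius=1.0, spacing=g) <= 0.05)
    for zone, prior in priors.items()
}
```

The high-prior zone ends up with a much denser grid than the low-prior zone for the same risk target, which is the essence of the Bayes allocation described above.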
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...
Establishment of an immortalized mouse dermal papilla cell strain with optimized culture strategy
Directory of Open Access Journals (Sweden)
Haiying Guo
2018-01-01
Full Text Available Dermal papilla (DP) plays important roles in hair follicle regeneration. Long-term culture of mouse DP cells can provide enough cells for the research and application of DP cells. We optimized the culture strategy for DP cells along three dimensions: stepwise dissection, collagen I coating, and optimized culture medium. Based on the optimized culture strategy, we immortalized primary DP cells with the SV40 large T antigen and established several immortalized DP cell strains. By comparing molecular expression and morphologic characteristics with primary DP cells, we found that one cell strain, named iDP6, was similar to primary DP cells. Further identification illustrated that iDP6 expresses FGF7 and α-SMA and has alkaline phosphatase activity. During the characterization of the immortalized DP cell strains, we also found that cells in the DP were heterogeneous. We successfully optimized the culture strategy for DP cells and established an immortalized DP cell strain suitable for the research and application of DP cells.
GMOtrack: generator of cost-effective GMO testing strategies.
Novak, Petra Krau; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana
2009-01-01
Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
Energy Technology Data Exchange (ETDEWEB)
Badri, A.; Jadid, S. [Department of Electrical Engineering, Iran University of Science and Technology (Iran)]; Rashidinejad, M. [Shahid Bahonar University, Kerman (Iran)]; Moghaddam, M.P. [Tarbiat Modarres University, Tehran (Iran)]
2008-06-15
In an electricity industry with transmission constraints and a limited number of producers, generation companies (GenCos) face an oligopoly market rather than a perfectly competitive one. Under an oligopoly market environment, each GenCo may increase its own profit through a favorable bidding strategy. This paper investigates the problem of developing optimal bidding strategies for GenCos, considering bilateral contracts and transmission constraints. The problem is modeled with a bi-level optimization algorithm, where in the first level each GenCo maximizes its payoff and in the second level a system dispatch is accomplished through an OPF problem in which transmission constraints are taken into account. It is assumed that each GenCo has information about the initial bidding strategies of its competitors. Impacts of exercising market power due to transmission constraints, as well as irrational bidding by some generators, are studied, and the interactions of different bidding strategies on participants' corresponding payoffs are presented. Furthermore, a risk-management-based method to obtain GenCos' optimal bilateral contracts is proposed, and the impacts of these contracts on GenCos' optimal bids and obtained payoffs are investigated. At the end, the IEEE 30-bus test system is used as the case study to demonstrate the simulation results and support the effectiveness of the proposed model. (author)
Application of evolution strategy algorithm for optimization of a single-layer sound absorber
Directory of Open Access Journals (Sweden)
Morteza Gholamipoor
2014-12-01
Full Text Available Depending on different design parameters and limitations, the optimization of sound absorbers has always been a challenge in acoustic engineering. Various optimization methods have evolved over the past decades, with the evolution strategy gaining more attention in recent years. Owing to their simplicity and straightforward mathematical representation, single-layer absorbers have been widely used in both engineering and industrial applications, and an optimized design for these absorbers has become vital. In the present study, an evolution strategy algorithm is used to optimize a single-layer absorber both at a particular frequency and over an arbitrary frequency band. Results of the optimization are compared against genetic algorithm and penalty function methods and prove favorable in both effectiveness and accuracy. Finally, a single-layer absorber is optimized over a desired range of frequencies, which is the main goal of an industrial and engineering optimization process.
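As a sketch of the core loop, a (1+1) evolution strategy with the classic 1/5 success rule is shown below; the `absorption` function is a toy surrogate with a known optimum, not an acoustic model:

```python
import math
import random

random.seed(42)

def absorption(thickness):
    # toy stand-in for an absorption-coefficient model, peaking at 3.2
    return math.exp(-(thickness - 3.2) ** 2)

x, sigma = 1.0, 0.5              # parent design and mutation step size
successes = 0
for gen in range(1, 201):
    child = x + random.gauss(0.0, sigma)
    if absorption(child) >= absorption(x):   # (1+1) selection: keep better
        x, successes = child, successes + 1
    if gen % 20 == 0:                        # 1/5 success rule adaptation
        rate = successes / 20
        sigma *= 1.5 if rate > 0.2 else 0.5
        successes = 0
```

In a real absorber design, `absorption` would be replaced by the acoustic model of the layer (and `x` by a vector of design parameters), but the mutation-selection-adaptation loop is unchanged.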
Footprints of Optimal Protein Assembly Strategies in the Operonic Structure of Prokaryotes
Directory of Open Access Journals (Sweden)
Jan Ewald
2015-04-01
Full Text Available In this work, we investigate optimality principles behind synthesis strategies for protein complexes using a dynamic optimization approach. We show that the cellular capacity of protein synthesis has a strong influence on optimal synthesis strategies reaching from a simultaneous to a sequential synthesis of the subunits of a protein complex. Sequential synthesis is preferred if protein synthesis is strongly limited, whereas a simultaneous synthesis is optimal in situations with a high protein synthesis capacity. We confirm the predictions of our optimization approach through the analysis of the operonic organization of protein complexes in several hundred prokaryotes. Thereby, we are able to show that cellular protein synthesis capacity is a driving force in the dissolution of operons comprising the subunits of a protein complex. Thus, we also provide a tested hypothesis explaining why the subunits of many prokaryotic protein complexes are distributed across several operons despite the presumably less precise co-regulation.
Wu, Yiman; Li, Liang
2012-12-18
For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C
Web malware spread modelling and optimal control strategies
Liu, Wanping; Zhong, Shouming
2017-02-01
The growing popularity of the Web fuels the growth of web threats. Formulating mathematical models for accurate prediction of malicious propagation over networks is of great importance. The aim of this paper is to understand the propagation mechanisms of web malware and the impact of human intervention on the spread of malicious hyperlinks. Considering the characteristics of web malware, a new differential epidemic model, which extends the traditional SIR model by adding a delitescent compartment, is proposed to describe the spreading behavior of malicious links over networks. The spreading threshold of the model system is calculated, and the dynamics of the model are theoretically analyzed. Moreover, optimal control theory is employed to study malware immunization strategies, aiming to keep the total economic loss of security investment and infection loss as low as possible. The existence and uniqueness of the results concerning the optimality system are confirmed. Finally, numerical simulations show that the spread of malicious links can be controlled effectively with a proper control strategy and a specific choice of parameters.
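An SIR model extended with one extra latent compartment can be simulated in a few lines. The sketch below is not the authors' exact system: compartment L stands in for the delitescent state, the flows are the generic ones, and the parameter values are illustrative assumptions.

```python
# Minimal sketch (not the paper's full model): SIR extended with a
# delitescent (latent) compartment L for malicious links that are present
# but not yet active, integrated with a simple Euler scheme.

def slir(beta=0.4, sigma=0.2, gamma=0.1, dt=0.01, days=200):
    S, L, I, R = 0.99, 0.0, 0.01, 0.0   # fractions of network nodes
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I      # susceptible node follows a bad link
        activation = sigma * L          # delitescent link becomes active
        recovery = gamma * I            # node cleaned / immunized
        S -= new_exposed * dt
        L += (new_exposed - activation) * dt
        I += (activation - recovery) * dt
        R += recovery * dt
    return S, L, I, R

S, L, I, R = slir()
```

With these assumed parameters beta/gamma = 4 exceeds the epidemic threshold, so nearly the whole population passes through the infected state; lowering beta below gamma (e.g., via the immunization control studied in the paper) suppresses the outbreak.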
Local Optimization Strategies in Urban Vehicular Mobility.
Directory of Open Access Journals (Sweden)
Pierpaolo Mastroianni
Full Text Available The comprehension of vehicular traffic in urban environments is crucial to achieve a good management of the complex processes arising from people's collective motion. Even allowing for the great complexity of human beings, human behavior turns out to be subject to strong constraints--physical, environmental, social, economic--that induce the emergence of common patterns. The observation and understanding of those patterns is key to set up effective strategies to optimize the quality of life in cities while not frustrating the natural need for mobility. In this paper we focus on vehicular mobility with the aim to reveal the underlying patterns and uncover the human strategies determining them. To this end we analyze a large dataset of GPS vehicle tracks collected in the district of Rome (Italy) over one month. We demonstrate the existence of a local optimization of travel times that vehicle drivers perform while choosing their journey. This finding is mirrored by two additional important facts, i.e., the observation that the average vehicle velocity increases with increasing travel length and the emergence of a universal scaling law for the distribution of travel times at fixed traveled length. A simple modeling scheme confirms this scenario, opening the way to further predictions.
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, despite the very similar solutions obtained with both strategies, the problems found with the strategy using the deterministic model, such as lack of convergence and high computational time, make the optimization strategy with the statistical model, which proved robust and fast, more suitable for the flash fermentation process, being recommended for real-time applications coupling optimization and control.
Exploring optimal fertigation strategies for orange production, using soil-crop modelling
Qin, Wei; Heinen, Marius; Assinck, Falentijn B.T.; Oenema, Oene
2016-01-01
Water and nitrogen (N) are two key limiting factors in orange (Citrus sinensis) production. The amount and the timing of water and N application are critical, but optimal strategies have not yet been well established. This study presents an analysis of 47 fertigation strategies examined by a
An optimization strategy for a biokinetic model of inhaled radionuclides
International Nuclear Information System (INIS)
Shyr, L.J.; Griffith, W.C.; Boecker, B.B.
1991-01-01
Models for material disposition and dosimetry involve predictions of the biokinetics of the material among compartments representing organs and tissues in the body. Because of a lack of human data for most toxicants, many of the basic data are derived by modeling the results obtained from studies using laboratory animals. Such a biomathematical model is usually developed by adjusting the model parameters to make the model predictions match the measured retention and excretion data visually. The fitting process can be very time-consuming for a complicated model, and visual model selections may be subjective and easily biased by the scale or the data used. Due to the development of computerized optimization methods, manual fitting could benefit from an automated process. However, for a complicated model, an automated process without an optimization strategy will not be efficient and may not produce fruitful results. In this paper, procedures for, and implementation of, an optimization strategy for a complicated mathematical model are demonstrated by optimizing a biokinetic model for ¹⁴⁴Ce in fused aluminosilicate particles inhaled by beagle dogs. The optimized results using SimuSolv were compared to manual fitting results obtained previously using the model simulation software GASP. Also, statistical criteria provided by SimuSolv, such as likelihood function values, were used to help or verify visual model selections.
Optimal intervention strategies for cholera outbreak by education and chlorination
Bakhtiar, Toni
2016-01-01
This paper discusses the control of infectious diseases in the framework of the optimal control approach. A case study on cholera control was examined by considering two control strategies, namely education and chlorination. We split the education control into one component addressing person-to-person behaviour and another addressing person-to-environment conduct. The model comprises two interacting populations: a human population, which follows an SIR model, and a pathogen population. The Pontryagin maximum principle was applied to derive a set of differential equations, consisting of the dynamical and adjoint systems, as optimality conditions. Then, the fourth-order Runge-Kutta method was used to solve the equation system numerically. An illustrative example was provided to assess the effectiveness of the control strategies over a set of control scenarios.
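The numerical backbone of such studies is an ODE integrator applied to the controlled epidemic system. The sketch below shows only the forward part with a constant control (no adjoint equations or Pontryagin sweep), on a plain SIR system with assumed parameters, so it is an illustration of the method's ingredients rather than the paper's optimality system.

```python
# Illustrative sketch: classical fourth-order Runge-Kutta integration of an
# SIR-type system, comparing a constant "education" control u (which scales
# the contact rate down) against no control. Parameters are assumptions.

def rk4_step(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def epidemic_final_size(u=0.0, beta=0.5, gamma=0.2, h=0.1, days=150):
    def rhs(t, y):
        S, I, R = y
        inc = (1 - u) * beta * S * I    # education reduces transmission
        return [-inc, inc - gamma * I, gamma * I]
    y = [0.99, 0.01, 0.0]
    for i in range(int(days / h)):
        y = rk4_step(rhs, y, i * h, h)
    return y[2]                          # recovered fraction so far

no_control = epidemic_final_size(u=0.0)
with_control = epidemic_final_size(u=0.5)
```

In the full optimal control treatment, u would be time-varying and obtained by iterating this forward solve together with a backward solve of the adjoint system.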
Design of Underwater Robot Lines Based on a Hybrid Automatic Optimization Strategy
Institute of Scientific and Technical Information of China (English)
Wenjing Lyu; Weilin Luo
2014-01-01
In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, the minimal resistance is taken as the optimization goal; the wet surface area as the constraint condition; and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. For the CFD calculation, the RANS equations and the standard turbulence model are used. Analysis of the simulation results shows that the platform is highly efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of the search for solutions.
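The particle swarm optimizer at the core of such a platform is compact enough to sketch. Here the expensive CFD resistance evaluation is replaced by a stand-in objective (the sphere function); bounds, swarm size and coefficients are our own illustrative assumptions, not the paper's setup.

```python
# A minimal particle swarm optimization (PSO) sketch. In the real platform the
# objective would be a CFD-computed resistance; here a cheap stand-in is used.
import random

def pso(f, dim=3, n=20, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    rng = random.Random(42)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n), key=lambda i: pbest[i])  # global best index
    G, gbest = P[g][:], pbest[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # keep in bounds
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

sphere = lambda x: sum(xi * xi for xi in x)    # stand-in for resistance
best, val = pso(sphere)
```

The inertia weight w and acceleration coefficients c1, c2 used here are common textbook defaults; in a design platform each fitness call would instead trigger a geometry update and a CFD run.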
Using remote sensing images to design optimal field sampling schemes
CSIR Research Space (South Africa)
Debba, Pravesh
2008-08-01
Full Text Available Optimized field sampling schemes are illustrated through case studies, including field sampling that represents the overall distribution of a particular mineral and the derivation of optimal exploration target zones. Continuum removal is applied for vegetation [13, 27, 46]. The convex hull transform is a method of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...
Bonta, Maximilian; Török, Szilvia; Hegedus, Balazs; Döme, Balazs; Limbeck, Andreas
2017-03-01
Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) is one of the most commonly applied methods for lateral trace element distribution analysis in medical studies. Many improvements of the technique regarding quantification and achievable lateral resolution have been achieved in recent years. Nevertheless, sample preparation is also of major importance and the optimal sample preparation strategy still has not been defined. While conventional histology knows a number of sample pre-treatment strategies, little is known about the effect of these approaches on the lateral distributions of elements and/or their quantities in tissues. The technique of formalin fixation and paraffin embedding (FFPE) has emerged as the gold standard in tissue preparation. However, its potential use for elemental distribution studies is questionable due to the large number of sample preparation steps. In this work, LA-ICP-MS was used to examine the applicability of the FFPE sample preparation approach for elemental distribution studies. Qualitative elemental distributions as well as quantitative concentrations in cryo-cut tissues and in FFPE samples were compared. Results showed that some metals (especially Na and K) are severely affected by the FFPE process, whereas others (e.g., Mn, Ni) are less influenced. Based on these results, a general recommendation can be given: FFPE samples are completely unsuitable for the analysis of alkaline metals. When analyzing transition metals, FFPE samples can give results comparable to snap-frozen tissues. Graphical abstract: Sample preparation strategies for biological tissues are compared with regard to the elemental distributions and average trace element concentrations.
Institute of Scientific and Technical Information of China (English)
Feng Zhao; Chenghui Zhang; Bo Sun
2016-01-01
This paper proposes an initiative optimization operation strategy and a multi-objective energy management method for combined cooling, heating and power (CCHP) systems with storage. Initially, the initiative optimization operation strategy of the CCHP system in the cooling season, the heating season and the transition season was formulated. The energy management of the CCHP system was then optimized by a multi-objective optimization model with maximum daily energy efficiency, minimum daily carbon emissions and minimum daily operation cost, based on the proposed initiative optimization operation strategy. Furthermore, the Pareto optimal solution set was obtained using a niche particle swarm multi-objective optimization algorithm. Ultimately, the most satisfactory energy management scheme was selected using the technique for order preference by similarity to ideal solution (TOPSIS) method. A case study of a CCHP system used in a hospital in the north of China validated the effectiveness of this method. The results showed that a satisfactory energy management scheme for the CCHP system was obtained with this initiative optimization operation strategy and multi-objective energy management method. The CCHP system achieved better energy efficiency, environmental protection and economic benefits.
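The final TOPSIS selection step is easy to sketch. The three criteria below (efficiency as a benefit criterion, emissions and cost as cost criteria), the weights and all numbers are invented for illustration and are not the paper's hospital case-study data.

```python
# A compact TOPSIS sketch: rank candidate energy management schemes by
# relative closeness to the ideal solution. All data are invented.
import math

def topsis(matrix, weights, benefit):
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(len(weights))]
    V = [[w * row[j] / norms[j] for j, w in enumerate(weights)]
         for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_pos = math.dist(row, ideal)   # distance to ideal solution
        d_neg = math.dist(row, worst)   # distance to anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Rows: candidate schemes; columns: [energy efficiency, CO2 (t/day), cost (k$/day)]
schemes = [[0.82, 9.5, 12.0],
           [0.78, 8.0, 10.5],
           [0.85, 11.0, 13.5]]
scores = topsis(schemes, weights=[0.4, 0.3, 0.3], benefit=[True, False, False])
best = scores.index(max(scores))
```

In the paper the candidate rows would come from the Pareto set produced by the niche particle swarm algorithm; TOPSIS then picks the compromise solution closest to the ideal point.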
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-10-01
This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling
Tong, Cao; Gong, Haili
2018-03-01
This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed, based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement, by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The comparison shows the proposed method to be more efficient, as it requires only a small number of sample points.
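The importance sampling half of such a hybrid can be shown on a toy problem where the answer is known analytically. The sketch below is not the paper's Kriging-assisted version: it estimates the failure probability P(g < 0) for the simple limit state g(x) = 3 - x with x ~ N(0, 1), sampling from a density shifted to the design point x = 3. The exact answer is Φ(-3) ≈ 1.35e-3.

```python
# Illustrative importance-sampling reliability estimate (toy limit state,
# not from the paper). Crude Monte Carlo would need ~10^6 samples for the
# same accuracy; shifting the sampling density to the design point helps.
import math, random

def importance_sampling_pf(n=200_000, shift=3.0, seed=1):
    rng = random.Random(seed)
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)             # draw from shifted density
        if 3.0 - x < 0:                        # failure event g(x) < 0
            total += phi(x) / phi(x - shift)   # likelihood-ratio weight
        # safe draws contribute zero
    return total / n

pf = importance_sampling_pf()
```

In the paper's hybrid, the cheap surrogate (the PSO-tuned Kriging model) would replace the exact limit state g inside the loop, which is where the computational savings come from.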
The CEV Model and Its Application in a Study of Optimal Investment Strategy
Directory of Open Access Journals (Sweden)
Aiyin Wang
2014-01-01
Full Text Available The constant elasticity of variance (CEV) model is used to describe the price of the risky asset. Maximizing the expected utility leads to the Hamilton-Jacobi-Bellman (HJB) equation, which describes the optimal investment strategies; from it we obtain a partial differential equation. Applying the Legendre transform, we transform the equation into a dual problem and obtain an approximate solution and an optimal investment strategy for the exponential utility function.
Directory of Open Access Journals (Sweden)
Rafał Dreżewski
2018-05-01
Full Text Available In this paper, the evolutionary algorithm for the optimization of Forex market trading strategies is proposed. The introduction to issues related to the financial markets and the evolutionary algorithms precedes the main part of the paper, in which the proposed trading system is presented. The system uses the evolutionary algorithm for optimization of a parameterized greedy strategy, which is then used as an investment strategy on the Forex market. In the proposed system, a model of the Forex market was developed, including all elements that are necessary for simulating realistic trading processes. The proposed evolutionary algorithm contains several novel mechanisms that were introduced to optimize the greedy strategy. The most important of the proposed techniques are the mechanisms for maintaining the population diversity, a mechanism for protecting the best individuals in the population, the mechanisms preventing the excessive growth of the population, the mechanisms of the initialization of the population after moving the time window and a mechanism of choosing the best strategies used for trading. The experiments, conducted with the use of real-world Forex market data, were aimed at testing the quality of the results obtained using the proposed algorithm and comparing them with the results obtained by the buy-and-hold strategy. By comparing our results with the results of the buy-and-hold strategy, we attempted to verify the validity of the efficient market hypothesis. The credibility of the hypothesis would have more general implications for many different areas of our lives, including future sustainable development policies.
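A toy version of evolving a parameterized greedy strategy can be written in a few dozen lines. Everything below is our own simplification: a synthetic mean-reverting price series, a two-parameter buy-low/sell-high rule, and plain elitist selection with Gaussian mutation. It omits the paper's diversity-maintenance, population-size control and time-window mechanisms.

```python
# Toy evolutionary optimization of a parameterized greedy trading rule
# (invented data and strategy form; a simplification of the described system).
import math, random

rng = random.Random(7)
prices = [100 + 10 * math.sin(t / 8.0) + rng.gauss(0, 0.5) for t in range(400)]

def profit(params):
    """Greedy rule: buy everything below (100 - a), sell everything above (100 + b)."""
    a, b = params
    cash, units = 1000.0, 0.0
    for p in prices:
        if units == 0 and p < 100 - a:
            units, cash = cash / p, 0.0
        elif units > 0 and p > 100 + b:
            cash, units = units * p, 0.0
    return cash + units * prices[-1]

def evolve(pop_size=30, gens=40):
    pop = [(rng.uniform(0, 9), rng.uniform(0, 9)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=profit, reverse=True)
        elite = pop[: pop_size // 3]                 # protect best individuals
        children = [(max(0.0, a + rng.gauss(0, 0.5)),
                     max(0.0, b + rng.gauss(0, 0.5)))
                    for a, b in rng.choices(elite, k=pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=profit)

best = evolve()
buy_hold = 1000.0 / prices[0] * prices[-1]
```

On this artificial oscillating series the evolved thresholds comfortably beat buy-and-hold, which happens to lose money here; on real Forex data no such guarantee exists, which is exactly the efficient-market question the paper probes.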
Optimism, pain coping strategies and pain intensity among women with rheumatoid arthritis
Directory of Open Access Journals (Sweden)
Zuzanna Kwissa-Gajewska
2014-07-01
Full Text Available Objectives: According to the biopsychosocial model of pain, it is a multidimensional phenomenon, which comprises physiological (sensation-related), psychological (affective) and social (socio-economic status, social support) factors. Researchers have mainly focused on phenomena increasing the pain sensation; very few studies have examined psychological factors preventing pain. The aim of the research is to assess chronic pain intensity as determined by level of optimism, and to identify pain coping strategies in women with rheumatoid arthritis (RA). Material and methods: A survey was carried out among 54 women during a 7-day period of hospitalisation. The following questionnaires were used: LOT-R (optimism; Scheier, Carver and Bridges), the Coping Strategies Questionnaire (CSQ; Rosenstiel and Keefe) and the 10-point visual-analogue pain scale (VAS). Results: The research findings indicate the significance of optimism in the experience of chronic pain, and in the pain coping strategies. Optimists felt a significantly lower level of pain than pessimists. Patients with positive outcome expectancies (optimists) experienced less pain thanks to replacing catastrophizing (negative concentration on pain) with an increased activity level. Regardless of personality traits, active coping strategies (e.g. ignoring pain sensations, coping self-statements - appraising pain as a challenge, a belief in one's ability to manage pain) resulted in a decrease in pain, whilst catastrophizing contributed to its intensification. The most common coping strategies included praying and hoping. Employment was an important demographic variable: the unemployed experienced less pain than those who worked. Conclusions: The research results indicate that optimism and pain coping strategies should be taken into account in clinical practice. Particular attention should be given to those who have negative outcome expectations, which in turn determine strong chronic pain.
An Optimal Investment Strategy and Multiperiod Deposit Insurance Pricing Model for Commercial Banks
Directory of Open Access Journals (Sweden)
Grant E. Muller
2018-01-01
Full Text Available We employ the method of stochastic optimal control to derive the optimal investment strategy for maximizing an expected exponential utility of a commercial bank's capital at some future date T>0. In addition, we derive a multiperiod deposit insurance (DI) pricing model that incorporates the explicit solution of the optimal control problem and an asset value reset rule comparable to the typical practice of insolvency resolution by insuring agencies. By way of numerical simulations, we study the effects of changes in the DI coverage horizon, the risk associated with the asset portfolio of the bank, and the bank's initial leverage level (deposit-to-asset ratio) on the DI premium while the optimal investment strategy is followed.
Optimal offering and operating strategies for wind-storage systems with linear decision rules
DEFF Research Database (Denmark)
Ding, Huajie; Pinson, Pierre; Hu, Zechun
2016-01-01
The participation of wind farm-energy storage systems (WF-ESS) in electricity markets calls for an integrated view of day-ahead offering strategies and real-time operation policies. Such an integrated strategy is proposed here by co-optimizing the offering at the day-ahead stage and the operation policy to be used at the balancing stage. Linear decision rules are seen as a natural approach to model and optimize the real-time operation policy. These allow enhancing profits from balancing markets based on updated information on prices and wind power generation. Our integrated strategies for WF…
Optimal Dynamic Strategies for Index Tracking and Algorithmic Trading
Ward, Brian
In this thesis we study dynamic strategies for index tracking and algorithmic trading. Tracking problems have become ever more important in financial engineering as investors seek to precisely control their portfolio risks and exposures over different time horizons. This thesis analyzes various tracking problems and elucidates the tracking errors and strategies one can employ to minimize those errors and maximize profit. In Chapters 2 and 3, we study the empirical tracking properties of exchange traded funds (ETFs), leveraged ETFs (LETFs), and futures products related to spot gold and the Chicago Board Options Exchange (CBOE) Volatility Index (VIX), respectively. These two markets provide interesting and differing examples for understanding index tracking. We find that static strategies work well in the nonleveraged case for gold, but fail to track well in the corresponding leveraged case. For VIX, tracking via neither ETFs nor futures portfolios succeeds, even in the nonleveraged case. This motivates the need for dynamic strategies, some of which we construct in these two chapters and further expand on in Chapter 4. There, we analyze a framework for index tracking and risk exposure control through financial derivatives. We derive a tracking condition that restricts our exposure choices and also define a slippage process that characterizes the deviations from the index over longer horizons. The framework is applied to a number of models, for example, the Black-Scholes model and the Heston model for equity index tracking, as well as the Square Root (SQR) model and the Concatenated Square Root (CSQR) model for VIX tracking. By specifying how each of these models falls into our framework, we are able to understand the tracking errors in each of these models. Finally, Chapter 5 analyzes a tracking problem of a different kind that arises in algorithmic trading: schedule following for optimal execution. We formulate and solve a stochastic control problem to obtain the optimal
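Why static strategies fail to track leveraged products can be shown with a two-line compounding argument (our own numerical illustration, not taken from the thesis): in an oscillating market, a 2x daily-rebalanced position compounds to less than twice the index's total return.

```python
# Volatility-decay illustration for leveraged tracking (invented returns).

def compound(returns):
    v = 1.0
    for r in returns:
        v *= 1 + r
    return v

daily = [0.10, -0.10] * 10                          # oscillating index returns
index_total = compound(daily) - 1                    # index cumulative return
letf_total = compound([2 * r for r in daily]) - 1    # 2x, rebalanced daily
static_target = 2 * index_total                      # naive "2x the index" goal
```

Here the index loses about 9.6%, the naive 2x target is about -19%, but the daily-rebalanced 2x position loses about 34%: the gap between `letf_total` and `static_target` is the slippage that dynamic strategies try to control.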
Optimization of pocket machining strategy in HSM
Msaddek, El Bechir; Bouaziz, Zoubeir; Dessein, Gilles; Baili, Maher
2012-01-01
International audience; Our two major concerns, which should be taken into consideration as soon as we start selecting the machining parameters, are the minimization of the machining time and the maintenance of the high-speed machining machine in good condition. The manufacturing strategy is one of the parameters which practically influences the manufacturing time of the different geometrical forms, as well as the machine itself. In this article, we propose an optimization methodology of the ...
DEFF Research Database (Denmark)
Lund, Henrik; Salgi, Georges; Elmegaard, Brian
2009-01-01
on electricity spot markets by storing energy when electricity prices are low and producing electricity when prices are high. In order to make a profit on such markets, CAES plant operators have to identify proper strategies to decide when to sell and when to buy electricity. This paper describes three independent computer-based methodologies which may be used for identifying the optimal operation strategy for a given CAES plant, on a given spot market and in a given year. The optimal strategy is identified as the one which provides the best business-economic net earnings for the plant. In practice, CAES plants will not be able to achieve such optimal operation, since the fluctuations of spot market prices in the coming hours and days are not known. Consequently, two simple practical strategies have been identified and compared to the results of the optimal strategy. This comparison shows that…
DEFF Research Database (Denmark)
Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren
2002-01-01
OBJECTIVE: We optimized of the preparation of urinary samples to obtain a comprehensive map of urinary proteins of healthy subjects and then compared this map with the ones obtained with patient samples to show that the pattern was specific of their kidney disease. DESIGN AND METHODS: The urinary...
Optimal allocation of trend following strategies
Grebenkov, Denis S.; Serror, Jeremy
2015-09-01
We consider a portfolio allocation problem for trend following (TF) strategies on multiple correlated assets. Under simplifying assumptions of a Gaussian market and linear TF strategies, we derive analytical formulas for the mean and variance of the portfolio return. We then construct the optimal portfolio that maximizes risk-adjusted return by accounting for inter-asset correlations. The dynamic allocation problem for n assets is shown to be equivalent to the classical static allocation problem for n² virtual assets that include lead-lag corrections in positions of TF strategies. The respective roles of asset auto-correlations and inter-asset correlations are investigated in depth for the two-asset case and a sector model. In contrast to the principle of diversification, which suggests selecting uncorrelated assets, we show that inter-asset correlations allow one to estimate apparent trends more reliably and to adjust the TF positions more efficiently. If properly accounted for, inter-asset correlations are not detrimental but beneficial for portfolio management and can open new profit opportunities for trend followers. These concepts are illustrated using daily returns of three highly correlated futures markets: the E-mini S&P 500, Euro Stoxx 50 index, and the US 10-year T-note futures.
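A linear TF strategy in the above sense takes a position proportional to (here, the sign of) a moving average of past returns. The single-asset sketch below uses a synthetic regime-switching return series and ignores the paper's inter-asset lead-lag corrections; all parameters are illustrative assumptions.

```python
# Single-asset linear trend-following sketch on synthetic data (invented
# parameters; omits the paper's multi-asset lead-lag machinery).
import random

rng = random.Random(3)
rets, drift = [], 0.001
for t in range(2000):
    if t % 500 == 0:
        drift = -drift                  # trend reverses every 500 steps
    rets.append(drift + rng.gauss(0, 0.002))

def tf_pnl(returns, window=50):
    pnl = 0.0
    for t in range(window, len(returns) - 1):
        trend = sum(returns[t - window:t]) / window   # moving-average trend
        position = 1.0 if trend > 0 else -1.0          # sign-of-trend position
        pnl += position * returns[t + 1]
    return pnl

pnl = tf_pnl(rets)
```

The strategy profits while a regime persists and loses for roughly one window length after each reversal; the paper's point is that correlated assets let the trend estimate inside the loop be formed more reliably than from one asset's own history.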
Turbine Control Strategies for Wind Farm Power Optimization
DEFF Research Database (Denmark)
Mirzaei, Mahmood; Göçmen Bozkurt, Tuhfe; Giebel, Gregor
2015-01-01
In recent decades there has been increasing interest in green energies, of which wind energy is the most important one. In order to improve the competitiveness of wind power plants, there is ongoing research to decrease the cost per energy unit and increase the efficiency of wind turbines … and wind farms. One way of achieving these goals is to optimize the power generated by a wind farm. One optimization method is to choose appropriate operating points for the individual wind turbines in the farm. We have made three models of a wind farm based on three different control strategies … the generated power by changing the power reference of the individual wind turbines. We use the optimization setup to compare power production of the wind farm models. This paper shows that for the most frequent wind velocities (below and around the rated values), the generated powers of the wind farms …
Stochastic Optimal Wind Power Bidding Strategy in Short-Term Electricity Market
DEFF Research Database (Denmark)
Hu, Weihao; Chen, Zhe; Bak-Jensen, Birgitte
2012-01-01
Due to the fluctuating nature and non-perfect forecast of the wind power, the wind power owners are penalized for the imbalance costs of the regulation, when they trade wind power in the short-term liberalized electricity market. Therefore, in this paper a formulation of an imbalance cost...... minimization problem for trading wind power in the short-term electricity market is described, to help the wind power owners optimize their bidding strategy. Stochastic optimization and a Monte Carlo method are adopted to find the optimal bidding strategy for trading wind power in the short-term electricity...... market in order to deal with the uncertainty of the regulation price, the activated regulation of the power system and the forecasted wind power generation. The Danish short-term electricity market and a wind farm in western Denmark are chosen as study cases due to the high wind power penetration here...
Issues and Strategies in Solving Multidisciplinary Optimization Problems
Patnaik, Surya
2013-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adapted for their resolution are discussed. This is followed by a discussion on analytical methods that is limited to structural design application. An optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for aircraft and airbreathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: Design of an engine component, Synthesis of a subsonic aircraft, Operation optimization of a supersonic engine, Design of a wave-rotor-topping device, Profile optimization of a cantilever beam, and Design of a cylindrical shell. This chapter provides a cursory account of the issues. Cited references provide detailed discussion on the topics. Design of a structure can also be generated by traditional method and the stochastic design concept. Merits and limitations of the three methods (traditional method, optimization method and stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and weight is back calculated. In design optimization, the weight of a structure becomes the
Hopmann, Ch.; Windeck, C.; Kurth, K.; Behr, M.; Siegbert, R.; Elgeti, S.
2014-05-01
The rheological design of profile extrusion dies is one of the most challenging tasks in die design. As no analytical solution is available, the quality and the development time of a new design highly depend on the empirical knowledge of the die manufacturer. Usually, prior to starting production, several time-consuming, iterative running-in trials need to be performed to check the profile accuracy, and the die geometry is reworked accordingly. An alternative is numerical flow simulation. These simulations make it possible to calculate the melt flow through a die so that the quality of the flow distribution can be analyzed. The objective of a current research project is to improve the automated optimization of profile extrusion dies. Special emphasis is put on choosing a convenient starting geometry and parameterization, which allow for possible deformations. In this work, three commonly used design features are examined with regard to their influence on the optimization results. Based on the results, a strategy is derived to select the most relevant areas of the flow channels for the optimization. For these characteristic areas, recommendations are given concerning an efficient parameterization setup that still enables adequate deformations of the flow channel geometry. Exemplarily, this approach is applied to an L-shaped profile with different wall thicknesses. The die is optimized automatically and simulation results are qualitatively compared with experimental results. Furthermore, the strategy is applied to a complex extrusion die of a floor skirting profile to prove its universal adaptability.
Dynamic optimal strategies in transboundary pollution game under learning by doing
Chang, Shuhua; Qin, Weihua; Wang, Xinyu
2018-01-01
In this paper, we present a transboundary pollution game, in which emission permits trading and pollution abatement costs under learning by doing are considered. In this model, the abatement cost mainly depends on the level of pollution abatement and the experience of using pollution abatement technology. We use optimal control theory to investigate the optimal emission paths and the optimal pollution abatement strategies under cooperative and noncooperative games, respectively. Additionally, the effects of parameters on the results have been examined.
Optimization of protein samples for NMR using thermal shift assays
International Nuclear Information System (INIS)
Kozak, Sandra; Lercher, Lukas; Karanth, Megha N.; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane
2016-01-01
Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.
Optimization of protein samples for NMR using thermal shift assays
Energy Technology Data Exchange (ETDEWEB)
Kozak, Sandra [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Lercher, Lukas; Karanth, Megha N. [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Meijers, Rob [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@oci.uni-hannover.de [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Boivin, Stephane, E-mail: sboivin77@hotmail.com, E-mail: s.boivin@embl-hamburg.de [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany)
2016-04-15
Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.
Monte Carlo importance sampling optimization for system reliability applications
International Nuclear Information System (INIS)
Campioni, Luca; Vestrucci, Paolo
2004-01-01
This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique, and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing can be achieved without resorting to the typical trial-and-error approach.
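The component-level biasing described above can be sketched as follows. This is a minimal illustration, not the authors' method: the 2-out-of-3 system, the failure probabilities, and the biased probabilities are all assumptions, and the function names are invented. Each component is sampled from a biased failure probability and the estimate is corrected with the likelihood ratio, so the estimator stays unbiased.

```python
import random

def is_failure(x):
    # Hypothetical 2-out-of-3 system: fails when at least two components fail.
    return sum(x) >= 2

def importance_estimate(p, q, trials=200_000, seed=1):
    """Estimate system failure probability. Component i fails with probability
    p[i]; failures are sampled from the biased probabilities q[i] and corrected
    with the likelihood ratio weight w."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, w = [], 1.0
        for pi, qi in zip(p, q):
            fail = rng.random() < qi
            x.append(fail)
            w *= (pi / qi) if fail else ((1.0 - pi) / (1.0 - qi))
        if is_failure(x):
            total += w
    return total / trials
```

With rare component failures (p = 0.01) and biasing toward failure (q = 0.3), the rare system-failure event is hit often while the weights keep the estimate centered on the true probability.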
Directory of Open Access Journals (Sweden)
Takehisa Yamamoto
Full Text Available Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
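The bootstrap evaluation described above, resampling farms and then animals while holding the total of 12 animals fixed, can be sketched as follows. The synthetic herd in the test, with resistance clustering entirely by farm, is an assumption chosen to make the clustering effect visible; it is not the study's E. coli data.

```python
import random

def bootstrap_prevalence(farms, animals_per_farm, total_animals=12,
                         reps=2000, rng=None):
    """farms: dict farm_id -> dict animal_id -> list of 0/1 resistance results.
    Resample farms with replacement, then animals within each chosen farm,
    keeping the total number of sampled animals fixed (animals_per_farm must
    divide total_animals). Returns mean and SD of the prevalence estimates."""
    rng = rng or random.Random(0)
    n_farms = total_animals // animals_per_farm
    farm_ids = list(farms)
    estimates = []
    for _ in range(reps):
        isolates = []
        for _ in range(n_farms):
            farm = farms[rng.choice(farm_ids)]
            animal_ids = list(farm)
            for _ in range(animals_per_farm):
                isolates.extend(farm[rng.choice(animal_ids)])
        estimates.append(sum(isolates) / len(isolates))
    mean = sum(estimates) / reps
    sd = (sum((e - mean) ** 2 for e in estimates) / reps) ** 0.5
    return mean, sd
```

When resistance clusters by farm, spreading the fixed sample across more farms (1 animal per farm) yields a smaller SD than concentrating it (4 animals per farm), matching the study's conclusion.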
Energy evaluation of optimal control strategies for central VWV chiller systems
International Nuclear Information System (INIS)
Jin Xinqiao; Du Zhimin; Xiao Xiaokun
2007-01-01
Under various conditions, the actual load of heating, ventilation and air conditioning (HVAC) systems is less than the design load during most operating periods. To save energy and to optimize the control of chilling systems, the performance of variable water volume (VWV) systems and the characteristics of their control systems are analyzed, and three strategies are presented and tested in simulation in this paper. Energy evaluation of the three strategies shows that they can save energy to some extent, and that further potential remains. To minimize the energy consumption of the chilling system, the setpoints for the supply chilled water temperature and the supply head of the secondary pump should be optimized simultaneously.
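The closing recommendation, jointly optimizing the chilled-water supply temperature and the secondary-pump head setpoints, can be sketched as a grid search over a toy plant model. Every coefficient, the feasible ranges, and the capacity constraint below are invented for illustration and are not the paper's simulation model.

```python
def plant_power(t_chw, head):
    """Toy plant model (all coefficients are assumptions): a colder supply
    temperature costs chiller energy, a higher pump head costs pump energy."""
    chiller = 100.0 + 6.0 * (9.0 - t_chw)   # kW
    pump = 5.0 + 0.8 * head                 # kW
    return chiller + pump

def optimal_setpoints():
    """Joint grid search over both setpoints; an assumed capacity constraint
    couples them (a warmer supply needs more flow, hence more pump head)."""
    best = None
    for t10 in range(50, 91, 5):             # 5.0 .. 9.0 degC in 0.5 steps
        t = t10 / 10.0
        for h in range(15, 36):              # 15 .. 35 m head
            if h >= 18.0 + 2.5 * (t - 5.0):  # assumed capacity constraint
                p = plant_power(t, h)
                if best is None or p < best[0]:
                    best = (p, t, h)
    return best
```

Optimizing either setpoint alone would land on a suboptimal corner; the joint search finds the cheapest feasible pair, which is the point the abstract makes.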
Directory of Open Access Journals (Sweden)
R Drew Carleton
Full Text Available Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with "pre-sampling" data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n ∼ 100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n ∼ 25-40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required by sequential sampling methods.
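The pre-sampling simulation idea can be sketched by generating clumped counts from a negative binomial (gamma-Poisson mixture) model with clumping parameter k, then asking how fast the sample mean converges as sample size grows. The mean, k value, and sample sizes below are illustrative assumptions, not the midge data.

```python
import math
import random

def nbinom(mean, k, rng):
    """Negative binomial count via gamma-Poisson mixture (clumping param k)."""
    lam = rng.gammavariate(k, mean / k)
    # Knuth's Poisson sampler (adequate for small lambda)
    limit, prod, n = math.exp(-lam), 1.0, 0
    while True:
        prod *= rng.random()
        if prod < limit:
            return n
        n += 1

def mean_error(true_mean, k, n_trees, reps=1000, seed=0):
    """Average relative error of the sample mean over reps simulated
    samples of n_trees trees each."""
    rng = random.Random(seed)
    err = 0.0
    for _ in range(reps):
        m = sum(nbinom(true_mean, k, rng) for _ in range(n_trees)) / n_trees
        err += abs(m - true_mean) / true_mean
    return err / reps
```

Plotting (or tabulating) mean_error against n_trees shows where the sample mean converges on the true density, which is how a preset sample size like the n ∼ 25-40 trees above can be justified.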
Collins, Linda M
2018-01-01
This book presents a framework for development, optimization, and evaluation of behavioral, biobehavioral, and biomedical interventions. Behavioral, biobehavioral, and biomedical interventions are programs with the objective of improving and maintaining human health and well-being, broadly defined, in individuals, families, schools, organizations, or communities. These interventions may be aimed at, for example, preventing or treating disease, promoting physical and mental health, preventing violence, or improving academic achievement. This volume introduces the Multiphase Optimization Strategy (MOST), pioneered at The Methodology Center at the Pennsylvania State University, as an alternative to the classical approach of relying solely on the randomized controlled trial (RCT). MOST borrows heavily from perspectives taken and approaches used in engineering, and also integrates concepts from statistics and behavioral science, including the RCT. As described in detail in this book, MOST consists of ...
Directory of Open Access Journals (Sweden)
Shigang Zhang
2015-10-01
Full Text Available Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics.
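The core of sequential diagnostic strategy generation, choosing the next test by informativeness per unit cost and recursing on each outcome, can be sketched greedily as follows. The fault set, test signatures, and costs are invented for illustration; the paper's algorithms, which also handle multimode systems and test placement cost, are more elaborate.

```python
import math

def build_strategy(faults, tests, costs):
    """faults: dict fault -> prior probability. tests: dict test -> set of
    faults that make it read 'fail'. costs: dict test -> test/placement cost.
    Greedy: pick the test with the best entropy reduction per unit cost,
    then recurse on the 'fail' and 'pass' outcome sets."""
    if len(faults) <= 1:
        return None                       # fault isolated
    def entropy(fs):
        tot = sum(fs.values())
        return -sum(p / tot * math.log2(p / tot) for p in fs.values() if p)
    best, best_score = None, -1.0
    for t, bad in tests.items():
        pos = {f: p for f, p in faults.items() if f in bad}
        neg = {f: p for f, p in faults.items() if f not in bad}
        if not pos or not neg:
            continue                      # test cannot split this set
        w = sum(pos.values()) / sum(faults.values())
        gain = entropy(faults) - w * entropy(pos) - (1 - w) * entropy(neg)
        if gain / costs[t] > best_score:
            best, best_score = t, gain / costs[t]
    if best is None:
        return None                       # no test distinguishes the rest
    bad = tests[best]
    return (best,
            build_strategy({f: p for f, p in faults.items() if f in bad},
                           tests, costs),
            build_strategy({f: p for f, p in faults.items() if f not in bad},
                           tests, costs))
```

The returned nested tuple is the diagnostic tree: apply the root test, then follow the 'fail' or 'pass' branch until a single fault remains.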
Zhang, Shigang; Song, Lijun; Zhang, Wei; Hu, Zheng; Yang, Yongmin
2015-01-01
Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics. PMID:26457709
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses
Application of optimal control strategies to HIV-malaria co-infection dynamics
Fatmawati; Windarto; Hanif, Lathifah
2018-03-01
This paper presents a mathematical model of HIV and malaria co-infection transmission dynamics. Optimal control strategies, namely malaria prevention, anti-malaria treatment, and antiretroviral (ARV) treatment, are incorporated into the model to reduce the co-infection. First, we studied the existence and stability of the equilibria of the model without control variables. The model has four equilibria: the disease-free equilibrium, the HIV endemic equilibrium, the malaria endemic equilibrium, and the co-infection equilibrium. We also obtain two basic reproduction ratios corresponding to the two diseases. It was found that the disease-free equilibrium is locally asymptotically stable whenever both basic reproduction numbers are less than one. We also conducted a sensitivity analysis to determine the dominant factor controlling the transmission. Then, the optimal control for the model was derived analytically by using Pontryagin's Maximum Principle. Numerical simulations of the optimal control strategies are also performed to illustrate the results. From the numerical results, we conclude that the best strategy is to combine the malaria prevention and ARV treatments in order to reduce the malaria and HIV co-infected populations.
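The threshold behavior governed by the basic reproduction numbers can be illustrated with a generic single-disease SIR integration. This is not the authors' co-infection model; the compartments, rates, and step size are arbitrary assumptions chosen only to show that an outbreak grows exactly when R0 = beta/gamma exceeds one.

```python
def sir(beta, gamma, days=400, dt=0.1, i0=1e-3):
    """Forward-Euler SIR in normalized fractions; the epidemic grows
    iff R0 = beta / gamma > 1."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += ds * dt
        i += di * dt
        peak = max(peak, i)
    return peak
```

With beta = 0.5, gamma = 0.25 (R0 = 2) the infected fraction peaks well above its initial value; with beta = 0.2 (R0 = 0.8) it only decays, mirroring the stability result quoted above.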
Improved sample size determination for attributes and variables sampling
International Nuclear Information System (INIS)
Stirpe, D.; Picard, R.R.
1985-01-01
Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs
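The attributes side of this problem has a closed form in the idealized error-free case: with N items of which r are falsified, the probability that a random sample of n items (drawn without replacement) contains at least one falsified item is hypergeometric. The sketch below ignores measurement uncertainty, which is exactly what the simulations in the paper add back in.

```python
from math import comb

def attributes_sample_size(N, r, target=0.95):
    """Smallest n such that a random sample of n from N items, r of them
    falsified, catches at least one falsified item with probability
    >= target (no measurement error assumed)."""
    for n in range(1, N + 1):
        p_miss = comb(N - r, n) / comb(N, n)   # comb() is 0 when n > N - r
        if 1.0 - p_miss >= target:
            return n
    return N
```

As the assumed number of falsified items r shrinks, the required sample size climbs toward the full population, which is why the choice of uncertainty parameters drives the optimal verification strategy.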
A characteristic study of CCF modeling techniques and optimization of CCF defense strategies
International Nuclear Information System (INIS)
Kim, Min Chull
2000-02-01
Common Cause Failures (CCFs) are among the major contributors to risk and core damage frequency (CDF) from operating nuclear power plants (NPPs). Our study on CCF focused on the following aspects: 1) a characteristic study of the CCF modeling techniques and 2) development of the optimal CCF defense strategy. Firstly, the characteristics of CCF modeling techniques were studied through a sensitivity study of CCF occurrence probability upon system redundancy. The modeling techniques considered in this study include those most widely used worldwide, i.e., the beta factor, MGL, alpha factor, and binomial failure rate models. We found that the MGL and alpha factor models are essentially identical in terms of the CCF probability. Secondly, in the study of CCF defense, the various methods identified in previous studies for defending against CCF were classified into five different categories. Based on these categories, we developed a generic method by which the optimal CCF defense strategy can be selected. The method is not only qualitative but also quantitative in nature: the selection of the optimal strategy among candidates is based on the use of the analytic hierarchy process (AHP). We applied this method to two motor-driven valves for containment sump isolation in the Ulchin 3 and 4 nuclear power plants. The result indicates that the method for developing an optimal CCF defense strategy is effective.
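The AHP step used to rank candidate defense strategies can be sketched with the standard geometric-mean (row) approximation to the principal eigenvector of a pairwise-comparison matrix. The matrix in the test is an invented 2-alternative example, not the Ulchin valve study.

```python
import math

def ahp_weights(M):
    """Priority weights from an AHP pairwise-comparison matrix M
    (M[i][j] = how strongly alternative i is preferred over j) via the
    geometric-mean approximation to the principal eigenvector."""
    n = len(M)
    g = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(g)
    return [x / s for x in g]
```

The alternative with the largest weight is selected; in a full AHP application a consistency check on M would precede this step.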
Effective sampling strategy to detect food and feed contamination
Bouzembrak, Yamine; Fels, van der Ine
2018-01-01
Sampling plans for food safety hazards are used to determine whether a lot of food is contaminated (with microbiological or chemical hazards) or not. One of the components of a sampling plan is the sampling strategy. The aim of this study was to compare the performance of three sampling strategies.
Economic optimization of a global strategy to address the pandemic threat.
Pike, Jamison; Bogich, Tiffany; Elwood, Sarah; Finnoff, David C; Daszak, Peter
2014-12-30
Emerging pandemics threaten global health and economies and are increasing in frequency. Globally coordinated strategies to combat pandemics, similar to current strategies that address climate change, are largely adaptive, in that they attempt to reduce the impact of a pathogen after it has emerged. However, like climate change, mitigation strategies have been developed that include programs to reduce the underlying drivers of pandemics, particularly animal-to-human disease transmission. Here, we use real options economic modeling of current globally coordinated adaptation strategies for pandemic prevention. We show that they would be optimally implemented within 27 y to reduce the annual rise of emerging infectious disease events by 50% at an estimated one-time cost of approximately $343.7 billion. We then analyze World Bank data on multilateral "One Health" pandemic mitigation programs. We find that, because most pandemics have animal origins, mitigation is a more cost-effective policy than business-as-usual adaptation programs, saving between $344.0 billion and $360.3 billion over the next 100 y if implemented today. We conclude that globally coordinated pandemic prevention policies need to be enacted urgently to be optimally effective and that strategies to mitigate pandemics by reducing the impact of their underlying drivers are likely to be more effective than business as usual.
The topography of the environment alters the optimal search strategy for active particles
Volpe, Giorgio; Volpe, Giovanni
2017-10-01
In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher's size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers' taxis to biochemical rates of reaction.
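Step lengths for the Lévy searches described above can be drawn by inverse-transform sampling from a power-law tail P(l > x) = (x/l0)^(-alpha). A minimal sketch, where the minimum step l0 is an assumed scale and the full search simulation (boundaries, obstacles, target depletion) is omitted:

```python
import random

def levy_step(alpha, l0=1.0, rng=random):
    """Step length with tail P(l > x) = (x / l0) ** -alpha, drawn by
    inverse-transform sampling; 1 - u avoids u == 0."""
    return l0 * (1.0 - rng.random()) ** (-1.0 / alpha)
```

Smaller alpha gives heavier tails (more ballistic excursions); alpha -> 2 approaches Brownian-like searching, the regime the paper finds favored in complex topographies.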
Novel strategies for sample preparation in forensic toxicology.
Samanidou, Victoria; Kovatsi, Leda; Fragou, Domniki; Rentifis, Konstantinos
2011-09-01
This paper provides a review of novel strategies for sample preparation in forensic toxicology. The review initially outlines the principle of each technique, followed by sections addressing each class of abused drugs separately. The novel strategies currently reviewed focus on the preparation of various biological samples for the subsequent determination of opiates, benzodiazepines, amphetamines, cocaine, hallucinogens, tricyclic antidepressants, antipsychotics and cannabinoids. According to our experience, these analytes are the most frequently responsible for intoxications in Greece. The applications of techniques such as disposable pipette extraction, microextraction by packed sorbent, matrix solid-phase dispersion, solid-phase microextraction, polymer monolith microextraction, stir bar sorptive extraction and others, which are rapidly gaining acceptance in the field of toxicology, are currently reviewed.
Li, Rui
2009-01-01
The target of this work is to extend the canonical Evolution Strategies (ES) from traditional real-valued parameter optimization domain to mixed-integer parameter optimization domain. This is necessary because there exist numerous practical optimization problems from industry in which the set of
International Nuclear Information System (INIS)
Kim, Heungseob; Kim, Pansoo
2017-01-01
To maximize the reliability of a system, the traditional reliability–redundancy allocation problem (RRAP) determines the component reliability and level of redundancy for each subsystem. This paper proposes an advanced RRAP that also considers the optimal redundancy strategy, either active or cold standby. In addition, new examples are presented for it. Furthermore, the exact reliability function for a cold standby redundant subsystem with an imperfect detector/switch is suggested, and is expected to replace the previous approximating model that has been used in most related studies. A parallel genetic algorithm for solving the RRAP as a mixed-integer nonlinear programming model is presented, and its performance is compared with those of previous studies by using numerical examples on three benchmark problems. - Highlights: • Optimal strategy is proposed to solve reliability redundancy allocation problem. • The redundancy strategy uses parallel genetic algorithm. • Improved reliability function for a cold standby subsystem is suggested. • Proposed redundancy strategy enhances the system reliability.
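The two redundancy strategies compared in the advanced RRAP have textbook reliability functions in the idealized case of identical components with constant failure rate and, for cold standby, a perfect detector/switch; the paper's contribution is precisely a better function for the imperfect-switch case. A sketch of the idealized pair:

```python
import math

def r_active(lam, t, n):
    """n identical exponential components in hot (active) parallel."""
    return 1.0 - (1.0 - math.exp(-lam * t)) ** n

def r_cold(lam, t, n):
    """One working unit plus (n - 1) cold spares with perfect detection and
    switching: Erlang (sum of n exponential lifetimes) survival function."""
    x = lam * t
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(n))
```

Cold standby always dominates active redundancy in this idealized model because spares do not age; an imperfect switch erodes that advantage, which is what makes the strategy choice a genuine optimization variable.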
Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning
Directory of Open Access Journals (Sweden)
Lejla Banjanovic-Mehmedovic
2011-01-01
Full Text Available This paper presents the implementation of an optimal search strategy (OSS) in the verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult step of the overall assembly. A favourable influence of vibration and rotation movement on the compensation of tolerances was also observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameters. Using an optimal search strategy based on the minimal-distance path between vibration parameter stage sets (amplitudes and frequencies of the robot's grip vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible way of mating. We have verified by using simulation programs that the search strategy is suitable for situations involving unexpected events due to uncertainties.
Directory of Open Access Journals (Sweden)
Santanu Biswas
Full Text Available Visceral leishmaniasis (VL is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Implementations of various intervention strategies have failed to control the spread of this disease due to issues of parasite drug resistance and resistance of sandfly vectors to insecticide sprays. Because of this, policy makers need to develop novel strategies or resort to a combination of multiple intervention strategies to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission with seasonal fluctuations modeled in the form of a periodic sandfly biting rate. Fitting the model to real data reported in South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies, namely drug-based treatment of symptomatic and PKDL-infected individuals, insecticide-treated bednets and spraying of insecticides, on the dynamics of the infected human and vector populations. We show that the strategies remain ineffective in curbing the disease individually, as opposed to the use of optimal combinations of the mentioned strategies. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays performs well in controlling the disease for the intervention period considered. Performing a cost-effectiveness analysis, we identify that the same strategy also proves to be efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for the elimination of visceral leishmaniasis.
Optimal sampling plan for clean development mechanism energy efficiency lighting projects
International Nuclear Information System (INIS)
Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng
2013-01-01
Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variation (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model minimises the total metering cost through the determination of the optimal sample size in each group. The 90/10 criterion is formulated as a constraint on the metering cost objective. The optimal solution to the minimisation problem therefore minimises the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as well.
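The per-group sample size implied by the 90/10 criterion can be sketched with a standard normal-approximation formula, n0 = (z * CV / p)^2 with z = 1.645 and precision p = 0.10, plus a finite-population correction. This is a common textbook form offered as an assumption; the exact CDM guidance and the paper's optimisation over meter types are more detailed.

```python
import math

def sample_size_90_10(cv, population, z=1.645, precision=0.10):
    """Sample size meeting the 90/10 criterion for a group whose energy use
    has coefficient of variation cv, with finite-population correction
    (a standard approximation; official CDM guidance may differ)."""
    n0 = (z * cv / precision) ** 2
    n = n0 / (1.0 + (n0 - 1.0) / population)
    return math.ceil(n)
```

Groups with larger CV demand more meters, which is why classifying lights by consumption uncertainty before allocating samples reduces the total metering cost.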
Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms
International Nuclear Information System (INIS)
Shangguan Danhua; Bao Jingdong
2010-01-01
We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
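A minimal sketch of the idea for the double-well case: run Metropolis in a modified potential whose central barrier is capped (so the chain diffuses between wells), then reject recorded states with probability tied to the potential difference, which restores unbiased samples of the original Boltzmann distribution. The cap height, temperature, step size, and barrier scale are illustrative choices, not the paper's settings.

```python
import math
import random

def pds_sample(U, U_mod, T=0.3, steps=200_000, step=0.5, seed=2):
    """Metropolis in the easier potential U_mod, then keep each visited state
    with probability exp(-(U - U_mod) / T), recovering unbiased samples of
    exp(-U / T). Requires U(x) >= U_mod(x) everywhere."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(steps):
        y = x + rng.uniform(-step, step)
        if rng.random() < math.exp(-(U_mod(y) - U_mod(x)) / T):
            x = y
        if rng.random() < math.exp(-(U(x) - U_mod(x)) / T):
            out.append(x)
    return out

U = lambda x: 10.0 * (x * x - 1.0) ** 2                    # barrier height 10
U_mod = lambda x: min(U(x), 1.0) if abs(x) < 1.0 else U(x) # cap central barrier
```

A plain Metropolis chain at this temperature would rarely cross the barrier; the capped chain crosses freely, and the rejection step discards the (unphysical) plateau states so both wells end up sampled with the correct weights.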
Eye Movements Reveal Optimal Strategies for Analogical Reasoning.
Vendetti, Michael S; Starr, Ariel; Johnson, Elizabeth L; Modavi, Kiana; Bunge, Silvia A
2017-01-01
Analogical reasoning refers to the process of drawing inferences on the basis of the relational similarity between two domains. Although this complex cognitive ability has been the focus of inquiry for many years, most models rely on measures that cannot capture individuals' thought processes moment by moment. In the present study, we used participants' eye movements to investigate reasoning strategies in real time while solving visual propositional analogy problems (A:B::C:D). We included both a semantic and a perceptual lure on every trial to determine how these types of distracting information influence reasoning strategies. Participants spent more time fixating the analogy terms and the target relative to the other response choices, and made more saccades between the A and B items than between any other items. Participants' eyes were initially drawn to perceptual lures when looking at response choices, but they nonetheless performed the task accurately. We used participants' gaze sequences to classify each trial as representing one of three classic analogy problem solving strategies and related strategy usage to analogical reasoning performance. A project-first strategy, in which participants first extrapolate the relation between the AB pair and then generalize that relation for the C item, was both the most commonly used strategy as well as the optimal strategy for solving visual analogy problems. These findings provide new insight into the role of strategic processing in analogical problem solving.
Eye Movements Reveal Optimal Strategies for Analogical Reasoning
Directory of Open Access Journals (Sweden)
Michael S. Vendetti
2017-06-01
Full Text Available Analogical reasoning refers to the process of drawing inferences on the basis of the relational similarity between two domains. Although this complex cognitive ability has been the focus of inquiry for many years, most models rely on measures that cannot capture individuals' thought processes moment by moment. In the present study, we used participants' eye movements to investigate reasoning strategies in real time while solving visual propositional analogy problems (A:B::C:D). We included both a semantic and a perceptual lure on every trial to determine how these types of distracting information influence reasoning strategies. Participants spent more time fixating the analogy terms and the target relative to the other response choices, and made more saccades between the A and B items than between any other items. Participants' eyes were initially drawn to perceptual lures when looking at response choices, but they nonetheless performed the task accurately. We used participants' gaze sequences to classify each trial as representing one of three classic analogy problem solving strategies and related strategy usage to analogical reasoning performance. A project-first strategy, in which participants first extrapolate the relation between the AB pair and then generalize that relation for the C item, was both the most commonly used strategy as well as the optimal strategy for solving visual analogy problems. These findings provide new insight into the role of strategic processing in analogical problem solving.
Combined optimization model for sustainable energization strategy
Abtew, Mohammed Seid
Access to energy is a foundation to establish a positive impact on multiple aspects of human development. Both developed and developing countries have a common concern of achieving a sustainable energy supply to fuel economic growth and improve the quality of life with minimal environmental impacts. The Least Developing Countries (LDCs), however, have different economic, social, and energy systems. Prevalence of power outages, lack of access to electricity, structural dissimilarity between rural and urban regions, and the dominance of traditional fuels for cooking, with the resultant health and environmental hazards, are some of the distinguishing characteristics of these nations. Most energy planning models have been designed for developed countries' socio-economic demographics and have missed the opportunity to address special features of the poor countries. An improved mixed-integer programming energy-source optimization model is developed to address limitations associated with using current energy optimization models for LDCs, tackle the development of sustainable energization strategies, and ensure diversification and risk management provisions in the selected energy mix. The model predicted a shift from a traditional-fuel-reliant and weather-vulnerable energy source mix to a least-cost and reliable portfolio of modern clean energy sources, a climb up the energy ladder, and multifaceted economic, social, and environmental benefits. At the same time, it represented a transition strategy that evolves to increasingly cleaner energy technologies with growth, as opposed to an expensive solution that leapfrogs immediately to the cleanest possible, overreaching technologies.
Optimal Bidding Strategy for Renewable Microgrid with Active Network Management
Directory of Open Access Journals (Sweden)
Seung Wan Kim
2016-01-01
Full Text Available Active Network Management (ANM) enables a microgrid to optimally dispatch the active/reactive power of its Renewable Distributed Generation (RDG) and Battery Energy Storage System (BESS) units in real time. Thus, a microgrid with high penetration of RDGs can handle their uncertainties and variabilities to achieve the stable operation using ANM. However, the actual power flow in the line connecting the main grid and microgrid may deviate significantly from the day-ahead bids if the bids are determined without consideration of the real-time adjustment through ANM, which will lead to a substantial imbalance cost. Therefore, this study proposes a formulation for obtaining an optimal bidding which reflects the change of power flow in the connecting line by real-time adjustment using ANM. The proposed formulation maximizes the expected profit of the microgrid considering various network and physical constraints. The effectiveness of the proposed bidding strategy is verified through the simulations with a 33-bus test microgrid. The simulation results show that the proposed bidding strategy improves the expected operating profit by reducing the imbalance cost to a greater degree compared to the basic bidding strategy without consideration of ANM.
Nuclear Power Plant Outage Optimization Strategy. 2016 Edition
International Nuclear Information System (INIS)
2016-10-01
This publication is an update of IAEA-TECDOC-1315, Nuclear Power Plant Outage Optimisation Strategy, which was published in 2002, and aims to communicate good outage management practices in a manner that can be used by operators and utilities in Member States. Nuclear power plant outage management is a key factor for safe and economic nuclear power plant performance. This publication discusses plant outage strategy and how this strategy is actually implemented. The main areas that are important for outage optimization that were identified by the utilities and government organizations participating in this report are: 1) organization and management; 2) outage planning and preparation; 3) outage execution; 4) safety outage review; and 5) countermeasures to avoid the extension of outages and to facilitate the work in forced outages. Good outage management practices cover many different areas of work, and this publication aims to communicate these good practices in a way that they can be used effectively by operators and utilities.
Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M
2017-02-01
While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300 nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in material science, but there are only a few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from material science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.
Investigating the Optimal Management Strategy for a Healthcare Facility Maintenance Program
National Research Council Canada - National Science Library
Gaillard, Daria
2004-01-01
...: strategic partnering with an equipment management firm. The objective of this study is to create a decision-model for selecting the optimal management strategy for a healthcare organization's facility maintenance program...
Directory of Open Access Journals (Sweden)
Zhang Fengjiao
2015-03-01
Full Text Available Optimization of the control strategy plays an important role in improving the performance of electric vehicles. In order to improve braking stability and recover braking energy, a multi-objective genetic algorithm is applied to optimize the key parameters in the control strategy of an electric vehicle electro-hydraulic composite braking system. Various limitations are considered in the optimization process, and the optimization results are verified on a software simulation platform of an electric vehicle regenerative braking system under typical braking conditions. The results show that the optimization objectives converged well, and the optimized control strategy effectively increases brake energy recovery while ensuring braking stability.
Geometric Process-Based Maintenance and Optimization Strategy for the Energy Storage Batteries
Directory of Open Access Journals (Sweden)
Yan Li
2016-01-01
Full Text Available Renewable energy is critical for improving the energy structure and reducing environmental pollution. But its strong fluctuation and randomness have a serious effect on the stability of the microgrid without the coordination of the energy storage batteries. The main factors that influence the development of the energy storage system are the lack of valid operation and maintenance management as well as cost control. By analyzing the typical characteristics of the energy storage batteries over their life cycle, a geometric process-based model including the deteriorating system and the improving system is first built to describe the operation process, the preventive maintenance process, and the corrective maintenance process. In addition, this paper proposes an optimized management strategy, which aims to minimize the long-run average cost of the energy storage batteries by defining the time interval of the detection and preventive maintenance process as well as the optimal number of corrective maintenance actions, subject to state-of-health and reliability conditions. A simulation under the built model applies the proposed optimized management strategy, which verifies the effectiveness and applicability of the strategy and indicates its practicality for current applications.
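The long-run average-cost trade-off behind such a policy can be sketched with a deterministic geometric-process model: operating periods shrink and repair periods grow geometrically with each failure, and one picks the replacement point minimizing average net cost per unit time. All rates, ratios, and costs below are hypothetical, not the paper's.

```python
def avg_cost(n, t1=100.0, a=0.95, r1=5.0, b=1.05,
             c_rep=20.0, c_new=2000.0, reward=10.0):
    """Long-run average net cost per unit time when the battery is
    replaced after its n-th failure. Operating periods shrink (ratio a)
    and repair periods grow (ratio b), as in a geometric process; the
    battery earns `reward` per operating hour, repairs cost c_rep per
    hour, and replacement costs c_new."""
    op = sum(t1 * a ** k for k in range(n))        # total operating time
    rep = sum(r1 * b ** k for k in range(n - 1))   # total repair time
    return (c_rep * rep + c_new - reward * op) / (op + rep)

# Optimal replacement policy: minimize average cost over candidate n.
best_n = min(range(1, 60), key=avg_cost)
```

With these numbers the minimum is interior: replacing too early wastes the replacement cost, replacing too late accumulates ever-longer repairs of a degraded battery.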
Long-term strategic asset allocation: An out-of-sample evaluation
Diris, B.F.; Palm, F.C.; Schotman, P.C.
We evaluate the out-of-sample performance of a long-term investor who follows an optimized dynamic trading strategy. Although the dynamic strategy is able to benefit from predictability out-of-sample, a short-term investor using a single-period market timing strategy would have realized an almost
Optimal Control Strategy Search Using a Simplest 3-D PWR Xenon Oscillation Simulator
International Nuclear Information System (INIS)
Yoichiro, Shimazu
2004-01-01
Power spatial oscillations due to the transient xenon spatial distribution are well known as xenon oscillations in large PWRs. When the reactor size becomes larger than the current design, even radial oscillations can become divergent. Even if the radial oscillation is convergent, when a control rod malfunction occurs, it is necessary to suppress the oscillation in as short a time as possible. In such cases, an optimal control strategy is required. Generally speaking, an optimality search based on modern control theory requires a lot of calculation for the evaluation of state variables. In the case of control rod malfunctions, the xenon oscillation could be three-dimensional, and direct core calculations would be inevitable. From this point of view, a very simple model, a four-point reactor model, has been developed and verified. In this paper, an example of a procedure and the results of an optimal control strategy search are presented. It is shown that there is only one optimal strategy within a half cycle of the oscillation with fixed control strength. It is also shown that a 3-D xenon oscillation introduced by a control rod malfunction cannot be controlled by only one control step, as can be done for axial oscillations. These might impose quite strong limitations on the operators. Thus, it is recommended that a strategy generator, which is quick in analysis and easy to use, be installed in a monitoring system or operator guidance system. (author)
Energy Technology Data Exchange (ETDEWEB)
Aha, Ulrich
2013-07-01
Maintenance strategies are aimed at keeping a technical facility functioning in spite of damaging processes (wear, corrosion, fatigue) while simultaneously controlling these processes. The project 'Optimization of maintenance strategies in case of data uncertainties' aims to optimize maintenance measures such as preventive measures (lubrication etc.), inspections, and replacements to keep the facility or plant operating while minimizing financial costs. The report covers the following topics: modeling assumptions, model development and optimization procedure, and results for a conventional power plant and an oxyfuel plant.
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth-optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
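The holding-interval idea can be sketched in a toy multiplicative market: rebalance to the ideal stock fraction only when the actual fraction drifts outside a band around it, paying proportional transaction costs on each rebalance. The return distribution, band widths, and cost rate below are invented for illustration.

```python
import random

def simulate(band, f_star=0.5, cost=0.002, steps=5000, seed=7):
    """Hold the stock fraction inside [f_star - band, f_star + band];
    rebalance back to f_star (paying proportional costs) only when the
    fraction drifts outside the holding interval."""
    rng = random.Random(seed)
    stock, cash, trades = f_star, 1.0 - f_star, 0
    for _ in range(steps):
        stock *= 1.0 + rng.gauss(0.0004, 0.01)        # multiplicative return
        wealth = stock + cash
        frac = stock / wealth
        if abs(frac - f_star) > band:
            move = (frac - f_star) * wealth           # shift excess to cash
            stock -= move
            cash += move - cost * abs(move)           # transaction cost
            trades += 1
    return stock + cash, trades

wealth_tight, trades_tight = simulate(band=0.001)   # near-continuous trading
wealth_band, trades_band = simulate(band=0.05)      # wide holding interval
```

The wide band trades far less often, which is the mechanism by which a holding interval limits the drag from transaction costs.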
Optimization as a Reasoning Strategy for Dealing with Socioscientific Decision-Making Situations
Papadouris, Nicos
2012-01-01
This paper reports on an attempt to help 12-year-old students develop a specific optimization strategy for selecting among possible solutions in socioscientific decision-making situations. We have developed teaching and learning materials for elaborating this strategy, and we have implemented them in two intact classes (N = 48). Prior to and after…
Ndeffo Mbah , Martial L.; Gilligan , Christopher A.
2010-01-01
Abstract There is growing interest in incorporating economic factors into epidemiological models in order to identify optimal strategies for disease control when resources are limited. In this paper we consider how to optimize the control of a pathogen that is capable of infecting multiple hosts with different rates of transmission within and between species. Our objective is to find control strategies that maximize the discounted number of healthy individuals. We consider two clas...
Applying the Taguchi method to river water pollution remediation strategy optimization.
Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju
2014-04-15
Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.
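The Taguchi screening step, ranking decision variables by the magnitude of their effect on the system response, can be sketched with a standard L9 orthogonal array and a hypothetical response function (not the study's water-quality model):

```python
# L9 orthogonal array: 9 runs cover up to 4 three-level factors (levels 0..2);
# each level appears exactly three times in every column.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def effect_ranges(objective, n_factors=4):
    """Rank factors by the range of the mean response across their levels."""
    runs = [(row, objective(row)) for row in L9]
    ranges = []
    for f in range(n_factors):
        means = [
            sum(y for row, y in runs if row[f] == lvl) / 3.0
            for lvl in range(3)
        ]
        ranges.append(max(means) - min(means))
    return ranges

# Hypothetical system response: factor 0 dominates by construction.
obj = lambda r: 10.0 * r[0] + 2.0 * r[1] + 0.5 * r[2] + 0.1 * r[3]
ranges = effect_ranges(obj)
dominant = ranges.index(max(ranges))
```

Because the array is balanced, nine runs suffice to rank all four factors, which is the time saving the abstract exploits when the full solution space is too large to search directly.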
A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste
Energy Technology Data Exchange (ETDEWEB)
Bilodeau, M.; Lastra, R.; Bouzoubaa, N. [Natural Resources Canada, Ottawa, ON (Canada); Chapman, M. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)
2011-07-01
Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while
A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste
International Nuclear Information System (INIS)
Bilodeau, M.; Lastra, R.; Bouzoubaa, N.; Chapman, M.
2011-01-01
Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while
Economic optimization of a global strategy to address the pandemic threat
Pike, Jamison; Bogich, Tiffany; Elwood, Sarah; Finnoff, David C.; Daszak, Peter
2014-01-01
Emerging pandemics threaten global health and economies and are increasing in frequency. Globally coordinated strategies to combat pandemics, similar to current strategies that address climate change, are largely adaptive, in that they attempt to reduce the impact of a pathogen after it has emerged. However, like climate change, mitigation strategies have been developed that include programs to reduce the underlying drivers of pandemics, particularly animal-to-human disease transmission. Here, we use real options economic modeling of current globally coordinated adaptation strategies for pandemic prevention. We show that they would be optimally implemented within 27 y to reduce the annual rise of emerging infectious disease events by 50% at an estimated one-time cost of approximately $343.7 billion. We then analyze World Bank data on multilateral “One Health” pandemic mitigation programs. We find that, because most pandemics have animal origins, mitigation is a more cost-effective policy than business-as-usual adaptation programs, saving between $344.0 billion and $360.3 billion over the next 100 y if implemented today. We conclude that globally coordinated pandemic prevention policies need to be enacted urgently to be optimally effective and that strategies to mitigate pandemics by reducing the impact of their underlying drivers are likely to be more effective than business as usual. PMID:25512538
Heinsch, Stephen C; Das, Siba R; Smanski, Michael J
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems.
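A batched design-build-test loop of the kind described can be sketched against a simulated expression landscape. The landscape, level grid, batch size, and round count below are invented for illustration; a real workflow would replace `simulated_titer` with an assay.

```python
import random

def simulated_titer(levels):
    """Toy rugged expression landscape standing in for a real assay:
    a smooth optimum at level 3 per gene plus deterministic, per-design
    'ruggedness' noise (int tuple hashes are stable across runs)."""
    rng = random.Random(hash(tuple(levels)) % (2 ** 32))
    return -sum((v - 3) ** 2 for v in levels) + rng.uniform(-0.5, 0.5)

def dbt_optimize(n_genes=4, rounds=4, batch=12, seed=42):
    """Each round is one design-build-test cycle: propose a batch of
    constructs near the current best design, 'measure' them all, and
    keep the best. Few rounds = few build-test iterations."""
    rng = random.Random(seed)
    best = [rng.randrange(6) for _ in range(n_genes)]   # expression levels 0..5
    best_y = simulated_titer(best)
    for _ in range(rounds):
        for _ in range(batch):
            cand = [min(5, max(0, v + rng.choice((-1, 0, 1)))) for v in best]
            y = simulated_titer(cand)
            if y > best_y:
                best, best_y = cand, y
    return best, best_y

best, best_y = dbt_optimize()
```

Batching many designs per round matches the genetic-engineering constraint the abstract names: builds are expensive, so progress per iteration matters more than total evaluations.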
Multi-Objective Optimization of a Hybrid ESS Based on Optimal Energy Management Strategy for LHDs
Directory of Open Access Journals (Sweden)
Jiajun Liu
2017-10-01
Full Text Available Energy storage systems (ESS) play an important role in the performance of mining vehicles. A hybrid ESS combining both batteries (BTs) and supercapacitors (SCs) is one of the most promising solutions. As a case study, this paper discusses the optimal hybrid ESS sizing and energy management strategy (EMS) of 14-ton underground load-haul-dump vehicles (LHDs). Three novel contributions are added to the relevant literature. First, a multi-objective optimization is formulated regarding energy consumption and the total cost of a hybrid ESS, which are the key factors of LHDs, and a battery capacity degradation model is used. During the process, dynamic programming (DP)-based EMS is employed to obtain the optimal energy consumption and hybrid ESS power profiles. Second, a 10-year life cycle cost model of a hybrid ESS for LHDs is established to calculate the total cost, including capital cost, operating cost, and replacement cost. According to the optimization results, three solutions chosen from the Pareto front are compared comprehensively, and the optimal one is selected. Finally, the optimal and battery-only options are compared quantitatively using the same objectives, and the hybrid ESS is found to be a more economical and efficient option.
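The DP-based EMS step can be sketched as a backward recursion over a discretized state of charge: at each time step the storage supplies part of the demand and the rest is bought at a higher price. The demand profile, prices, and grid sizes below are hypothetical and far simpler than an LHD model.

```python
def dp_energy_split(demand, soc0=5, soc_max=10, price_grid=1.0, price_batt=0.2):
    """Backward dynamic programming over a discretized state of charge.
    At step t the battery supplies b in [0, min(demand[t], soc)] units and
    the grid supplies the rest; cost weighs grid energy against battery
    wear. Returns the minimum total cost from the initial SOC."""
    horizon = len(demand)
    # cost[t][soc] = optimal cost-to-go from step t with state of charge soc
    cost = [[0.0] * (soc_max + 1) for _ in range(horizon + 1)]
    for t in range(horizon - 1, -1, -1):
        for soc in range(soc_max + 1):
            best = float("inf")
            for b in range(min(demand[t], soc) + 1):
                step = price_grid * (demand[t] - b) + price_batt * b
                best = min(best, step + cost[t + 1][soc - b])
            cost[t][soc] = best
    return cost[0][soc0]

demand = [3, 1, 4, 2, 5]          # hypothetical per-step energy demand
optimal = dp_energy_split(demand)
```

Because the battery is the cheaper source here, the optimum drains all 5 units of initial charge and buys the remaining 10 from the grid: 10 × 1.0 + 5 × 0.2 = 11.0.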
Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S
2013-07-01
Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide-assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. Response surface methodology was employed to optimize the sample preparation, investigating the quantitative effects of the parameters (operating pressure, temperature, modifier concentration, and time) on wedelolactone yield using a Box-Behnken design. The wedelolactone content was determined using a validated HPLC methodology. The experimental data were fitted to a second-order polynomial equation using multiple regression analysis and analyzed using appropriate statistical methods. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44%; and extraction time, 60 min. Optimum extraction conditions demonstrated a wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed significant effects on the wedelolactone yield. Supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet-assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
International Nuclear Information System (INIS)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-01-01
This paper studies an attacker against a cyberphysical system (CPS) whose goal is to move the state of a CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker’s probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS’s detection statistic. We formulate a linear quadratic cost function that captures the attacker’s control goal and establish constraints on the induced bias that reflect the attacker’s detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker’s state estimate. In the case that the attacker’s bias is upper bounded by a positive constant, we provide two algorithms – an optimal algorithm and a sub-optimal, less computationally intensive algorithm – to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely-controlled helicopter under attack.
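The reduction of the optimal strategy to linear feedback parallels a standard LQR computation. A scalar sketch via Riccati iteration is below; the system and weights are illustrative numbers, not the paper's CPS model, and the detection constraint is omitted.

```python
def lqr_gain(a, b, q, r, iters=200):
    """Iterate the scalar discrete-time Riccati equation to convergence
    and return the optimal state-feedback gain k for the system
    x' = a*x + b*u with stage cost q*x^2 + r*u^2."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # optimal feedback gain
        p = q + a * p * (a - b * k)         # Riccati recursion
    return k

a, b, q, r = 1.1, 0.5, 1.0, 0.1   # open-loop unstable plant (a > 1)
k = lqr_gain(a, b, q, r)

# Closed loop u = -k*x drives the state toward the origin (the "target").
x = 5.0
for _ in range(30):
    x = a * x + b * (-k * x)
```

The resulting closed-loop factor `a - b*k` has magnitude below one, so the state contracts geometrically, mirroring the paper's linear-feedback structure for steering a CPS state.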
Multi-objective Optimization Strategies Using Adjoint Method and Game Theory in Aerodynamics
Tang, Zhili
2006-08-01
There are currently three different game strategies originated in economics: (1) Cooperative games (Pareto front), (2) Competitive games (Nash game) and (3) Hierarchical games (Stackelberg game). Each game achieves different equilibria with different performance, and their players play different roles in the games. Here, we introduced game concept into aerodynamic design, and combined it with adjoint method to solve multi-criteria aerodynamic optimization problems. The performance distinction of the equilibria of these three game strategies was investigated by numerical experiments. We computed Pareto front, Nash and Stackelberg equilibria of the same optimization problem with two conflicting and hierarchical targets under different parameterizations by using the deterministic optimization method. The numerical results show clearly that all the equilibria solutions are inferior to the Pareto front. Non-dominated Pareto front solutions are obtained, however the CPU cost to capture a set of solutions makes the Pareto front an expensive tool to the designer.
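The performance gap between competitive and cooperative equilibria can be sketched in a classic quadratic quantity game (a Cournot-style stand-in, not an aerodynamic problem): best-response iteration finds the Nash point, and a grid search over joint payoff finds a cooperative (Pareto-efficient) point. All payoffs below are hypothetical.

```python
def profit(q_own, q_other, a=10.0, c=1.0):
    """One player's payoff (to be maximized) in a two-player quantity game."""
    return q_own * (a - q_own - q_other) - c * q_own

def best_response(q_other, a=10.0, c=1.0):
    # argmax of the concave payoff above, clipped at zero
    return max(0.0, (a - c - q_other) / 2.0)

# Competitive game: iterate best responses to the Nash fixed point.
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

# Cooperative game: maximize the joint payoff on a coarse grid.
grid = [i / 100.0 for i in range(501)]
coop = max(((x, y) for x in grid for y in grid),
           key=lambda p: profit(p[0], p[1]) + profit(p[1], p[0]))
```

Here the Nash outcome (q1 = q2 = 3, joint payoff 18) is strictly dominated by the cooperative optimum (total quantity 4.5, joint payoff 20.25), illustrating the abstract's finding that the competitive equilibria are inferior to the Pareto front.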
Multi-objective optimization strategies using adjoint method and game theory in aerodynamics
Institute of Scientific and Technical Information of China (English)
Zhili Tang
2006-01-01
There are currently three different game strategies originated in economics: (1) Cooperative games (Pareto front), (2) Competitive games (Nash game) and (3) Hierarchical games (Stackelberg game). Each game achieves different equilibria with different performance, and their players play different roles in the games. Here, we introduced game concept into aerodynamic design, and combined it with adjoint method to solve multicriteria aerodynamic optimization problems. The performance distinction of the equilibria of these three game strategies was investigated by numerical experiments. We computed Pareto front, Nash and Stackelberg equilibria of the same optimization problem with two conflicting and hierarchical targets under different parameterizations by using the deterministic optimization method. The numerical results show clearly that all the equilibria solutions are inferior to the Pareto front. Non-dominated Pareto front solutions are obtained, however the CPU cost to capture a set of solutions makes the Pareto front an expensive tool to the designer.
An Umeclidinium membrane sensor; Two-step optimization strategy for improved responses.
Yehia, Ali M; Monir, Hany H
2017-09-01
In the scientific context of membrane sensors and improved experimentation, we devised an experimentally designed protocol for sensor optimization. A two-step strategy was implemented for the analysis of Umeclidinium bromide (UMEC), a novel quinuclidine-based muscarinic antagonist used for maintenance treatment of symptoms accompanying chronic obstructive pulmonary disease. First, membrane components were screened for the ideal ion exchanger, ionophore and plasticizer using three categorical factors at three levels in a Taguchi design. Second, an experimentally designed optimization was followed in order to tune the sensor for the finest responses. Twelve experiments were randomly carried out in a continuous factor design. Nernstian response, detection limit and selectivity were assigned as responses in these designs. The optimized membrane sensor contained tetrakis[3,5-bis(trifluoromethyl)phenyl]borate (0.44 wt%) and calix[6]arene (0.43 wt%) in 50.00% PVC plasticized with 49.13 wt% 2-nitrophenyl octyl ether. This sensor, along with an optimum concentration of inner filling solution (2×10⁻⁴ mol L⁻¹ UMEC) and 2 h of soaking time, attained the design objectives. The Nernstian response approached 59.7 mV/decade and the detection limit decreased by about two orders of magnitude (8×10⁻⁸ mol L⁻¹) through this optimization protocol. The proposed sensor was validated for UMEC determination in its linear range (3.16×10⁻⁷ to 1×10⁻³ mol L⁻¹) and challenged for selective discrimination of other congeners and inorganic cations. Results of INCRUSE ELLIPTA® inhalation powder analyses obtained from the proposed sensor and the manufacturer's UPLC were statistically compared. Moreover, the proposed sensor was successfully used for the determination of UMEC in plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.
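The Nernstian-response figure of merit is the slope of electrode potential against log10 of ion concentration, in mV/decade. A sketch of the calibration fit on idealized, invented data (exactly 59.2 mV/decade by construction):

```python
import math

# Hypothetical calibration points: potential (mV) at four decade-spaced
# concentrations (mol/L), following an ideal Nernstian electrode.
conc = [1e-6, 1e-5, 1e-4, 1e-3]
emf = [120.0, 179.2, 238.4, 297.6]

x = [math.log10(c) for c in conc]
mx = sum(x) / len(x)
my = sum(emf) / len(emf)
# Least-squares slope of E versus log10(C): the response in mV/decade.
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, emf))
         / sum((xi - mx) ** 2 for xi in x))
```

Real calibration data would scatter around the line; comparing the fitted slope to the theoretical ~59.2 mV/decade (monovalent ion, 25 °C) is how a "Nernstian response" like the abstract's 59.7 mV/decade is judged.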
Mohammad Aghaei; Amin Asadollahi; Elham Vahedi; Mahdi Pirooz
2013-01-01
To maintain and achieve optimal growth and development and to be more competitive, organizations need a comprehensive and coherent plan compatible with their objectives and goals, which is called strategic planning. This research aims to strategically analyze "Etka Chain Stores" and to propose optimal strategies by using the SWOT model based on fuzzy logic. The scope of this research is limited to Etka Chain stores in Tehran. As instrumentation, a questionnaire consisting of 138 questions was us...
Determining an energy-optimal thermal management strategy for electric driven vehicles
Energy Technology Data Exchange (ETDEWEB)
Suchaneck, Andre; Probst, Tobias; Puente Leon, Fernando [Karlsruher Institut fuer Technology (KIT), Karlsruhe (Germany). Inst. of Industrial Information Technology (IIIT)
2012-11-01
In electric, hybrid electric and fuel cell vehicles, thermal management may have a significant impact on vehicle range. Therefore, optimal thermal management strategies are required. In this paper, a method for determining an energy-optimal control strategy for thermal power generation in electrically driven vehicles is presented, considering all controlled devices (pumps, valves, fans, and the like) as well as influences such as ambient temperature, vehicle speed, and motor, battery and cooling-cycle temperatures. The method is designed to be generic, both to speed up the thermal management development process and to achieve the maximum energy reduction for any electrically driven vehicle (e.g., by waste heat utilization). Based on simulations of a prototype electric vehicle with an advanced cooling cycle structure, the potential of the method is shown. (orig.)
Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul
2013-03-01
The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim of developing a uniform sampling design that allows confident estimation of species richness, abundance and composition at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant for capturing the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) sample during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to
Directory of Open Access Journals (Sweden)
Shouheng Tuo
2013-01-01
Full Text Available Harmony search (HS) is an emerging population-based metaheuristic algorithm inspired by the music improvisation process. The HS method has developed rapidly and been applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimensional complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain a proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and convergence analysis are also studied. The experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness and achieves a better balance between global exploration and local exploitation on high-dimensional complex optimization problems.
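The base harmony search loop that HSTL extends (harmony memory consideration, pitch adjustment, random selection) can be sketched as follows. This is a minimal illustration on a sphere test function only; the teaching-learning operators, dynamic parameter control, and benchmark suite of the paper are not reproduced, and all parameter values are illustrative.

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=1):
    """Minimize f over a box-bounded domain with basic harmony search."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize the harmony memory with random solutions
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # harmony memory consideration
                v = hm[rng.randrange(hms)][d]
                if rng.random() < par:         # pitch adjustment
                    v = min(max(v + rng.uniform(-bw, bw), lo), hi)
            else:                              # random selection
                v = rng.uniform(lo, hi)
            new.append(v)
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                  # replace the worst stored harmony
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return hm[best], scores[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = harmony_search(sphere, dim=5, bounds=(-5.0, 5.0))
```

With these settings the loop steadily replaces the worst stored harmony, driving the best score toward the optimum of the test function.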
OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES
Directory of Open Access Journals (Sweden)
Maja Vrkljan
2004-12-01
Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. The analytical FAAS method for determining cobalt, chromium, copper, nickel, lead and zinc content in a gabbro sample and in the geochemical standard AGV-1 has been applied for verification. Dissolution in mixtures of various inorganic acids has been tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods have been compared, and dissolution in a mixture of HNO3 + HF has been recommended as optimal.
International Nuclear Information System (INIS)
Voitsekhovych, Oleg V.; Lavrova, Tatiana V.; Kostezh, Alexander B.
2012-01-01
There are many sites in the world where the environment is still affected by contamination from uranium production carried out in the past. The authors' experience shows that a lack of site characterization data and incomplete or unreliable environmental monitoring studies can significantly limit the quality of safety assessment procedures and of the priority-action analyses needed for remediation planning. During recent decades, the analytical laboratories of many enterprises currently responsible for establishing site-specific environmental monitoring programs have significantly improved their technical sampling and analytical capacities. However, a lack of experience in optimal site-specific sampling strategy planning, together with insufficient experience in applying the required analytical techniques, such as modern alpha-beta radiometers, gamma and alpha spectrometry, and liquid-scintillation methods for the determination of U-Th series radionuclides in the environment, does not allow these laboratories to develop and conduct monitoring programs efficiently as a basis for further safety assessment in decision-making procedures. This paper presents some conclusions gained from the experience of establishing monitoring programs in Ukraine and proposes some practical steps for optimizing sampling strategy planning and the analytical procedures to be applied to areas requiring safety assessment and justification for their potential remediation and safe management. (authors)
Optimal mission planning of GEO on-orbit refueling in mixed strategy
Chen, Xiao-qian; Yu, Jing
2017-04-01
The mission planning of GEO on-orbit refueling (OOR) under a mixed strategy is studied in this paper. Specifically, one SSc is launched to an orbital slot near the depot when multiple GEO satellites are reaching the end of their lives. The SSc replenishes fuel from the depot and then extends the lifespan of the target satellites via refueling. In the mixed scenario, only some of the target satellites can be served by the SSc; the remaining ones are fueled by pseudo-SScs (a target satellite that has already been refueled by the SSc and now carries sufficient fuel both for its own operation and for refueling other target satellites is called a pseudo-SSc here). The mission sequences and fuel masses of the SSc and the pseudo-SScs, as well as the dry mass of the SSc, are used as design variables, whereas the economic benefit of the whole mission is used as the design objective. The economic cost and benefit models are stated first, and then a mathematical optimization model is proposed. A comprehensive solution method involving enumeration, particle swarm optimization and modification is developed. Numerical examples demonstrate the effectiveness of the model and solution method. The economic efficiencies of different OOR strategies are compared and discussed. The mixed strategy performs better than the other strategies only when the target satellites satisfy certain conditions. This paper presents an available mixed-strategy scheme for users and analyzes its advantages and disadvantages by comparison with other OOR strategies, providing helpful references to decision makers. The best strategy in practical applications depends on the specific demands and user preferences.
Energy Technology Data Exchange (ETDEWEB)
Homem-de-Mello, Tito [University of Illinois at Chicago, Department of Mechanical and Industrial Engineering, Chicago, IL (United States); Matos, Vitor L. de; Finardi, Erlon C. [Universidade Federal de Santa Catarina, LabPlan - Laboratorio de Planejamento de Sistemas de Energia Eletrica, Florianopolis (Brazil)
2011-03-15
The long-term hydrothermal scheduling is one of the most important problems to be solved in the power systems area. This problem aims to obtain an optimal policy, under water (energy) resources uncertainty, for hydro and thermal plants over a multi-annual planning horizon. It is natural to model the problem as a multi-stage stochastic program, a class of models for which algorithms have been developed. The original stochastic process is represented by a finite scenario tree and, because of the large number of stages, a sampling-based method such as the Stochastic Dual Dynamic Programming (SDDP) algorithm is required. The purpose of this paper is two-fold. Firstly, we study the application of two alternative sampling strategies to the standard Monte Carlo - namely, Latin hypercube sampling and randomized quasi-Monte Carlo - for the generation of scenario trees, as well as for the sampling of scenarios that is part of the SDDP algorithm. Secondly, we discuss the formulation of stopping criteria for the optimization algorithm in terms of statistical hypothesis tests, which allows us to propose an alternative criterion that is more robust than that originally proposed for the SDDP. We test these ideas on a problem associated with the whole Brazilian power system, with a three-year planning horizon. (orig.)
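The stratified alternative to plain Monte Carlo mentioned above, Latin hypercube sampling, can be written in a few lines: each coordinate axis is split into n equal strata, one uniform draw is taken per stratum, and the columns are shuffled independently. This is a generic sketch, not the scenario-tree generation code of the paper.

```python
import random

def latin_hypercube(n, dim, seed=0):
    """Draw n points in [0, 1)^dim such that each coordinate axis,
    divided into n equal strata, contains exactly one point per stratum."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dim):
        col = [(i + rng.random()) / n for i in range(n)]  # one draw per stratum
        rng.shuffle(col)                                  # decouple the dimensions
        cols.append(col)
    return list(zip(*cols))                               # n points of length dim

pts = latin_hypercube(8, 2)
```

Compared with 8 independent uniform draws, every one of the 8 bins on each axis is guaranteed to be covered, which is what reduces the variance of sample-average estimates.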
A Regional Time-of-Use Electricity Price Based Optimal Charging Strategy for Electrical Vehicles
Directory of Open Access Journals (Sweden)
Jun Yang
2016-08-01
Full Text Available With the popularization of electric vehicles (EVs), the uncoordinated charging behavior of large numbers of EVs will bring new challenges to the safe and economic operation of power systems. This paper studies an optimal charging strategy for EVs. A typical urban zone is divided into four regions, and a regional time-of-use (RTOU) electricity price model is proposed to guide EVs on when and where to charge, considering spatial and temporal characteristics. Based on the elasticity coefficient, the user response to the RTOU electricity price is analyzed, and a bilayer optimization charging strategy comprising regional-layer and node-layer models is proposed to schedule the EVs. On the one hand, the regional-layer model coordinates EVs distributed across different times and regions. On the other hand, the node-layer model schedules the EVs to charge at particular nodes. Simulations on an IEEE 33-bus distribution network verify the performance of the proposed optimal charging strategy. The results demonstrate that the proposed bilayer optimization strategy can effectively decrease the charging cost of users and mitigate the peak-valley load difference and the network loss. Moreover, the RTOU electricity price shows better performance than the time-of-use (TOU) electricity price.
DEFF Research Database (Denmark)
Mohanty, Sankhya; Hattel, Jesper Henri
2016-01-01
A calibrated, fast, multiscale thermal model coupled with a 3D finite element mechanical model is used to simulate residual stress formation and deformations during selective laser melting. The resulting reduction in thermal model computation time allows evolutionary algorithm-based optimization of the process… A multilevel optimization strategy is adopted, using a customized genetic algorithm developed for optimizing the cellular scanning strategy for selective laser melting, with the objective of reducing residual stresses and deformations. The resulting thermo-mechanically optimized cellular scanning strategies…
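The genetic-algorithm machinery behind such a customized optimizer can be illustrated generically with tournament selection, one-point crossover, and bit-flip mutation on a bitstring. The OneMax objective below is a placeholder: the paper's encoding of cellular scan sequences and its thermo-mechanical fitness evaluation are not reproduced here.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, gens=60,
                      p_cx=0.9, p_mut=0.02, seed=5):
    """Minimal generational GA maximizing `fitness` over bitstrings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # binary tournament: better of two random individuals survives
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < p_cx:              # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):
                    if rng.random() < p_mut:     # bit-flip mutation
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(sum, n_bits=40)  # OneMax: maximize the number of 1s
```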
Optimal sampling schemes for vegetation and geological field visits
CSIR Research Space (South Africa)
Debba, Pravesh
2012-07-01
Full Text Available The presentation made to the Wits Statistics Department covered common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...
The Development and Empirical Validation of an E-based Supply Chain Strategy Optimization Model
DEFF Research Database (Denmark)
Kotzab, Herbert; Skjoldager, Niels; Vinum, Thorkil
2003-01-01
Examines the formulation of supply chain strategies in complex environments. Argues that current state-of-the-art e-business and supply chain management, combined into the concept of e-SCM, as well as the use of transaction cost theory, network theory and resource-based theory, altogether can… be used to form a model for analyzing supply chains with the purpose of reducing the uncertainty of formulating supply chain strategies. Presents the e-supply chain strategy optimization model (e-SOM) as a way to analyze supply chains in a structured manner as regards strategic preferences for supply chain… design, relations and resources in the chains, with the ultimate purpose of enabling the formulation of optimal, executable strategies for specific supply chains. Uses research results for a specific supply chain to validate the usefulness of the model…
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA model of a white-noise-loaded structure, modelled as a single-degree-of-freedom linear mechanical system, is considered. An analytical solution for a uniform sampling interval, which is optimal
Ghose, Sanchayita; Nagrath, Deepak; Hubbard, Brian; Brooks, Clayton; Cramer, Steven M
2004-01-01
The effect of an alternate strategy employing two different flowrates during loading was explored as a means of increasing system productivity in Protein-A chromatography. The effect of such a loading strategy was evaluated using a chromatographic model that was able to accurately predict experimental breakthrough curves for this Protein-A system. A gradient-based optimization routine is carried out to establish the optimal loading conditions (initial and final flowrates and switching time). The two-step loading strategy (using a higher flowrate during the initial stages followed by a lower flowrate) was evaluated for an Fc-fusion protein and was found to result in significant improvements in process throughput. In an extension of this optimization routine, dynamic loading capacity and productivity were simultaneously optimized using a weighted objective function, and this result was compared to that obtained with the single flowrate. Again, the dual-flowrate strategy was found to be superior.
Survey Strategy Optimization for the Atacama Cosmology Telescope
De Bernardis, F.; Stevens, J. R.; Hasselfield, M.; Alonso, D.; Bond, J. R.; Calabrese, E.; Choi, S. K.; Crowley, K. T.; Devlin, M.; Wollack, E. J.
2016-01-01
In recent years there have been significant improvements in the sensitivity and the angular resolution of the instruments dedicated to the observation of the Cosmic Microwave Background (CMB). ACTPol is the first polarization receiver for the Atacama Cosmology Telescope (ACT) and is observing the CMB sky with arcmin resolution over approximately 2000 square degrees. Its upgrade, Advanced ACTPol (AdvACT), will observe the CMB in five frequency bands and over a larger area of the sky. We describe the optimization and implementation of the ACTPol and AdvACT surveys. The selection of the observed fields is driven mainly by the science goals, that is, small angular scale CMB measurements, B-mode measurements and cross-correlation studies. For the ACTPol survey we have observed patches of the southern galactic sky with low galactic foreground emissions which were also chosen to maximize the overlap with several galaxy surveys to allow unique cross-correlation studies. A wider field in the northern galactic cap ensured significant additional overlap with the BOSS spectroscopic survey. The exact shapes and footprints of the fields were optimized to achieve uniform coverage and to obtain cross-linked maps by observing the fields with different scan directions. We have maximized the efficiency of the survey by implementing a close to 24-hour observing strategy, switching between daytime and nighttime observing plans and minimizing the telescope idle time. We describe the challenges represented by the survey optimization for the significantly wider area observed by AdvACT, which will observe roughly half of the low-foreground sky. The survey strategies described here may prove useful for planning future ground-based CMB surveys, such as the Simons Observatory and CMB Stage IV surveys.
Optimizing incomplete sample designs for item response model parameters
van der Linden, Willem J.
Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and how to combine the candidate classifiers are two key issues that dramatically influence the performance of the ensemble system. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers for classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms those built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system.
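The attractive-repulsive mechanism that distinguishes ARPSO from plain PSO can be sketched as a sign flip on the attraction terms whenever swarm diversity collapses below a threshold. The diversity measure, the thresholds, and the sphere test function below are illustrative assumptions; the RVFL selection, weighting, and pruning stages of the paper are not reproduced.

```python
import math
import random

def arpso(f, dim, bounds, n=20, iters=300, w=0.72, c1=1.49, c2=1.49,
          d_low=5e-6, d_high=0.25, seed=3):
    """Sketch of attractive-repulsive PSO: the cognitive and social pull
    flip sign (repulsion) when swarm diversity collapses, restoring
    exploration; normal attraction resumes once diversity recovers."""
    rng = random.Random(seed)
    lo, hi = bounds
    span = hi - lo
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pscore = [f(p) for p in pos]
    g = min(range(n), key=pscore.__getitem__)
    gbest, gscore = pbest[g][:], pscore[g]
    direction = 1.0
    for _ in range(iters):
        # diversity: mean distance to the swarm centroid, normalized
        centroid = [sum(p[d] for p in pos) / n for d in range(dim)]
        div = sum(math.dist(p, centroid) for p in pos) / (n * span * math.sqrt(dim))
        if div < d_low:
            direction = -1.0          # repulsion phase
        elif div > d_high:
            direction = 1.0           # attraction phase
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + direction * c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + direction * c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            s = f(pos[i])
            if s < pscore[i]:
                pbest[i], pscore[i] = pos[i][:], s
                if s < gscore:
                    gbest, gscore = pos[i][:], s
    return gbest, gscore

best, score = arpso(lambda x: sum(v * v for v in x), dim=4, bounds=(-10.0, 10.0))
```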
Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization
Directory of Open Access Journals (Sweden)
Tsung-Ming Yang
2014-04-01
Full Text Available Optimization methods usually determine the search direction by substituting candidate solutions into the objective function. However, if the solution space is too large, this search method may be time-consuming. To address this problem, this study incorporated the Taguchi method into the solution-space search of the optimization method and used the characteristics of the Taguchi method to rank the effects of variations in the decision variables on the system. Based on the level of effect, this study determined the impact factors of the decision variables and the optimal solution of the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the computing time and enhancing river water quality. The results suggested that the basin with the greatest water quality improvement is the Dahan River. Under the optimal strategy of this study, the severely polluted length was reduced from 18 km to 5 km.
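The Taguchi bookkeeping used in such a study (an orthogonal array, a smaller-is-better signal-to-noise ratio, and main-effect ranking of the factors) can be sketched as follows. The L4 array is standard, but the pollution-index responses are hypothetical numbers for illustration, not data from the study.

```python
import math

# L4(2^3) orthogonal array: 3 two-level factors covered in only 4 runs
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def sn_smaller_is_better(ys):
    """Taguchi signal-to-noise ratio for a 'smaller is better' response."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def main_effects(array, sn):
    """Average S/N per factor level; the level with the higher S/N wins."""
    effects = []
    for f in range(len(array[0])):
        by_level = {0: [], 1: []}
        for run, s in zip(array, sn):
            by_level[run[f]].append(s)
        effects.append({lvl: sum(v) / len(v) for lvl, v in by_level.items()})
    return effects

# hypothetical pollution-index responses (lower is better), two replicates per run
responses = [[8.0, 9.0], [6.0, 7.0], [12.0, 11.0], [5.0, 6.0]]
sn = [sn_smaller_is_better(ys) for ys in responses]
effects = main_effects(L4, sn)
best_levels = [max(e, key=e.get) for e in effects]
```

Ranking the per-factor S/N gaps is what lets the method identify the influential decision variables without exhaustively searching the solution space.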
A Competitive and Experiential Assignment in Search Engine Optimization Strategy
Clarke, Theresa B.; Clarke, Irvine, III
2014-01-01
Despite an increase in ad spending and demand for employees with expertise in search engine optimization (SEO), methods for teaching this important marketing strategy have received little coverage in the literature. Using Bloom's cognitive goals hierarchy as a framework, this experiential assignment provides a process for educators who may be new…
Model-Based Optimization of Velocity Strategy for Lightweight Electric Racing Cars
Directory of Open Access Journals (Sweden)
Mirosław Targosz
2018-01-01
Full Text Available The article presents a method for optimizing driving strategies aimed at minimizing energy consumption while driving. The method was developed for the needs of an electrically powered racing vehicle built for the Shell Eco-marathon (SEM), the largest and best-known race of energy-efficient vehicles. Model-based optimization was used to determine the driving strategy. The numerical model was elaborated in the Simulink environment and includes both the electric vehicle model and the environment, i.e., the race track as well as the vehicle surroundings and the atmospheric conditions. The vehicle model itself includes the vehicle dynamics model, numerical models describing rolling-tire resistance, propulsion system resistance and aerodynamic phenomena, a model of the electric motor, and the control system. For the purpose of identifying the design and functional features of individual subassemblies and components, numerical and bench tests were carried out. The model itself was tested on research tracks to tune it and determine the calculation parameters. The evolutionary algorithms available in the MATLAB Global Optimization Toolbox were used for optimization. In race conditions, the model was verified during SEM races in Rotterdam, where the race vehicle achieved a result consistent with the simulation calculations. In the following years, the experience gathered by the team earned it the vice-championship at SEM 2016 in London.
Directory of Open Access Journals (Sweden)
Qijia Yao
2017-07-01
Full Text Available The optimal control of a multibody spacecraft during the stretching process of its solar arrays is investigated, and a hybrid optimization strategy based on the Gauss pseudospectral method (GPM) and the direct shooting method (DSM) is presented. First, the elastic deformation of the flexible solar arrays is described approximately by the assumed mode method, and a dynamic model is established from the Lagrange equations of the second kind. Then, the nonholonomic motion planning problem is transformed into a nonlinear programming problem using GPM. With a small number of Legendre-Gauss (LG) points, initial values of the state and control variables are obtained. A serial optimization framework is adopted to obtain the approximate optimal solution from a feasible solution. Finally, the control variables are discretized at the LG points, and the precise optimal control inputs are obtained by DSM. The optimal trajectory of the system can then be obtained through numerical integration. Numerical simulation shows that the stretching process of the solar arrays is stable, with no detours, and the control inputs satisfy the various constraints of actual conditions. The results indicate that the method is effective and robust. Keywords: Motion planning, Multibody spacecraft, Optimal control, Gauss pseudospectral method, Direct shooting method
Optimized bolt tightening strategies for gasketed flanged pipe joints of different sizes
International Nuclear Information System (INIS)
Abid, Muhammad; Khan, Ayesha; Nash, David Hugh; Hussain, Masroor; Wajid, Hafiz Abdul
2016-01-01
Achieving proper preload in the bolts of a gasketed bolted flanged pipe joint during joint assembly is considered important for its optimized performance. This paper presents results of a detailed non-linear finite element analysis of an optimized bolt-tightening strategy for different joint sizes, aimed at achieving preloads close to the target stress values. Industrial guidelines are considered for applying the recommended target stress values with the torque control method (TCM) and the stretch control method (SCM), using a customized optimization algorithm. The performance of the different joint components is observed and discussed in detail.
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme allows one to reduce the number of sampling points without decreasing, and sometimes even while increasing, the accuracy for the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can effectively be used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey was applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD that uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, and combinations of them, were tested and compared in a real case. Simulated annealing was implemented with the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the sampling design process. The proposed approach found the optimal solution in a reasonable computation time. The
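The MMSD criterion can be illustrated with a bare-bones spatial simulated annealing loop that spreads sample points over a unit square by minimizing the mean distance from a dense evaluation grid to the nearest sample. The grid resolution, move scale, and cooling schedule are illustrative assumptions; the constraint handling, MWMSD weighting, and MAOKV kriging criterion of the MSANOS software are not reproduced.

```python
import math
import random

def mmsd(samples, eval_pts):
    """Mean, over the evaluation points, of the distance to the nearest sample."""
    return sum(min(math.dist(e, s) for s in samples) for e in eval_pts) / len(eval_pts)

def anneal_sampling(n_samples=8, grid=12, iters=1500, t0=0.05, cool=0.998, seed=7):
    rng = random.Random(seed)
    eval_pts = [((i + 0.5) / grid, (j + 0.5) / grid)
                for i in range(grid) for j in range(grid)]
    samples = [(rng.random(), rng.random()) for _ in range(n_samples)]
    cost, t = mmsd(samples, eval_pts), t0
    for _ in range(iters):
        cand = samples[:]
        k = rng.randrange(n_samples)
        x, y = cand[k]
        # perturb one sampling location, clamped to the unit square
        cand[k] = (min(max(x + rng.gauss(0.0, 0.05), 0.0), 1.0),
                   min(max(y + rng.gauss(0.0, 0.05), 0.0), 1.0))
        c = mmsd(cand, eval_pts)
        # Metropolis rule: accept improvements, occasionally accept worse moves
        if c < cost or rng.random() < math.exp((cost - c) / t):
            samples, cost = cand, c
        t *= cool
    return samples, cost

pts, final_cost = anneal_sampling()
```

Minimizing this spatial coverage objective pushes the points apart, which is why the criterion spreads observations evenly over the field.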
Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling
Directory of Open Access Journals (Sweden)
Pedro Cardoso
2009-09-01
Full Text Available The use of standardized and optimized protocols has recently been advocated for different arthropod taxa, instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimations of true richness; and (3) meaningful comparisons between undersampled areas.
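The singleton proportions mentioned above feed directly into classical richness estimators such as Chao1, which extrapolates true species richness from the frequencies of the rarest species. The sketch below uses made-up abundance counts, not the study's spider data.

```python
def chao1(abundances):
    """Chao1 estimate of true species richness from sample abundances:
    S_obs + f1^2 / (2 * f2), falling back to f1 * (f1 - 1) / 2 when no
    doubletons were observed (the f2 == 0 limit of the corrected form)."""
    counts = [a for a in abundances if a > 0]
    s_obs = len(counts)
    f1 = sum(1 for a in counts if a == 1)   # singletons
    f2 = sum(1 for a in counts if a == 2)   # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0

# hypothetical capture counts per species: 8 observed species,
# 3 singletons and 2 doubletons
sample = [10, 4, 3, 2, 2, 1, 1, 1]
est = chao1(sample)
```

Many singletons push the estimate well above the observed count, signaling an undersampled assemblage; with no rare species the estimate collapses to the observed richness.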
Directory of Open Access Journals (Sweden)
Jixiang Fan
2015-09-01
Full Text Available In this paper, a map-based optimal energy management strategy is proposed to improve the energy consumption economy of a plug-in parallel hybrid electric vehicle. In the design of the maps, which provide both the torque split between engine and motor and the gear shift, not only the current vehicle speed and power demand but also optimality based on the predicted trajectory of the vehicle dynamics are considered. To seek optimality, the equivalent consumption, which trades off fuel and electricity usage, is chosen as the cost function. Moreover, in order to decrease model errors in the optimization, which is conducted in the discrete time domain, a variational integrator is employed to calculate the evolution of the vehicle dynamics. To evaluate the proposed energy management strategy, simulation results obtained on a professional GT-Suite simulator are presented, and a comparison to a real-time optimization method is also given to show the advantage of the proposed off-line optimization approach.
Optimal recharge and driving strategies for a battery-powered electric vehicle
Directory of Open Access Journals (Sweden)
Lee W. R.
1999-01-01
Full Text Available A major problem facing battery-powered electric vehicles lies in their batteries: weight and charge capacity. Thus, a battery-powered electric vehicle has only a short driving range. To travel a longer distance, the batteries must be recharged frequently. In this paper, we construct a model for a battery-powered electric vehicle in which a driving strategy is obtained such that the total travelling time between two locations is minimized. The problem is formulated as an optimization problem with switching times and speed as decision variables. This is an unconventional optimization problem. However, by using the control parametrization enhancing technique (CPET), it is shown that this unconventional optimization problem is equivalent to a conventional optimal parameter selection problem. Numerical examples are solved using the proposed method.
Development of an evaluation method for optimization of maintenance strategy in commercial plant
International Nuclear Information System (INIS)
Ito, Satoshi; Shiraishi, Natsuki; Yuki, Kazuhisa; Hashizume, Hidetoshi
2006-01-01
In this study, a new simulation method is developed for optimizing the maintenance strategy of an NPP as a multiple-objective optimization problem (MOP). The result of operation is evaluated as the average of the following three measures over 3,000 trials: cost of electricity (COE) as economic risk, frequency of unplanned shutdowns as plant reliability, and unavailability of the regular service system (RSS) and engineered safety features (ESF) as safety measures. The following maintenance parameters are considered in order to evaluate the various risks of plant operation under changing maintenance strategies: planned outage cycle, surveillance cycle, major inspection cycle, and surveillance cycle depending on the value of the Fussell-Vesely importance measure. When using a decision-making method based on the AHP, there are individual tendencies depending on the decision-maker. This study could therefore be useful for resolving the problem of maintenance optimization as a MOP. (author)
International Nuclear Information System (INIS)
2007-01-01
This part of ISO 18589 specifies the general requirements, based on ISO 11074 and ISO/IEC 17025, for all steps in the planning (desk study and area reconnaissance) of the sampling and the preparation of samples for testing. It includes the selection of the sampling strategy, the outline of the sampling plan, the presentation of general sampling methods and equipment, as well as the methodology of the pre-treatment of samples adapted to the measurements of the activity of radionuclides in soil. This part of ISO 18589 is addressed to the people responsible for determining the radioactivity present in soil for the purpose of radiation protection. It is applicable to soil from gardens, farmland, urban or industrial sites, as well as soil not affected by human activities. This part of ISO 18589 is applicable to all laboratories regardless of the number of personnel or the range of the testing performed. When a laboratory does not undertake one or more of the activities covered by this part of ISO 18589, such as planning, sampling or testing, the corresponding requirements do not apply. Information is provided on scope, normative references, terms and definitions and symbols, principle, sampling strategy, sampling plan, sampling process, pre-treatment of samples and recorded information. Five annexes inform about selection of the sampling strategy according to the objectives and the radiological characterization of the site and sampling areas, diagram of the evolution of the sample characteristics from the sampling site to the laboratory, example of sampling plan for a site divided in three sampling areas, example of a sampling record for a single/composite sample and example for a sample record for a soil profile with soil description. A bibliography is provided
Cost Effectiveness Analysis of Optimal Malaria Control Strategies in Kenya
Directory of Open Access Journals (Sweden)
Gabriel Otieno
2016-03-01
Full Text Available Malaria remains a leading cause of mortality and morbidity among children under five and pregnant women in sub-Saharan Africa, but it is preventable and controllable provided current recommended interventions are properly implemented. Better utilization of malaria intervention strategies will ensure value for money and produce health improvements in the most cost-effective way. The purpose of the value-for-money drive is to develop a better understanding (and better articulation) of costs and results so that more informed, evidence-based choices can be made. Cost effectiveness analysis is carried out to inform decision makers on how to determine where to allocate resources for malaria interventions. This study carries out a cost-effectiveness analysis of one or all possible combinations of the optimal malaria control strategies (Insecticide Treated Bednets—ITNs, Treatment, Indoor Residual Spray—IRS, and Intermittent Preventive Treatment for Pregnant Women—IPTp) for four different transmission settings in order to assess the extent to which the intervention strategies are beneficial and cost effective. For the four different transmission settings in Kenya, the optimal solutions for the 15 strategies and their associated effectiveness are computed. Cost-effectiveness analysis using the Incremental Cost Effectiveness Ratio (ICER) was done after ranking the strategies in order of increasing effectiveness (total infections averted). The findings show that for the endemic regions the combination of ITNs, IRS, and IPTp was the most cost-effective of all the combined strategies developed in this study for malaria disease control and prevention; for epidemic-prone areas it is the combination of treatment and IRS; for seasonal areas, ITNs plus treatment; and for low-risk areas, treatment only. Malaria transmission in Kenya can be minimized through tailor-made intervention strategies for malaria control.
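The ICER ranking step described above can be sketched as follows. The strategies, costs, and infections-averted figures are hypothetical, not the study's data:

```python
# Sketch of incremental cost-effectiveness ratios (ICER): strategies are
# sorted by increasing effectiveness (infections averted), and each ICER
# compares a strategy with the next-less-effective one. All figures are
# illustrative placeholders.

def icers(strategies):
    """strategies: list of (name, cost, infections_averted)."""
    ranked = sorted(strategies, key=lambda s: s[2])   # increasing effectiveness
    out = []
    base_cost, base_eff = 0.0, 0.0
    for name, cost, eff in ranked:
        icer = (cost - base_cost) / (eff - base_eff)  # extra cost per extra
        out.append((name, icer))                      # infection averted
        base_cost, base_eff = cost, eff
    return out

example = [
    ("treatment only",    1.0e5, 2.0e4),
    ("ITNs + treatment",  2.5e5, 5.0e4),
    ("ITNs + IRS + IPTp", 6.0e5, 9.0e4),
]
for name, icer in icers(example):
    print(f"{name}: ICER {icer:.2f} per infection averted")
```

A full analysis would also drop strongly and extendedly dominated strategies before recomputing the ICERs; that pruning step is omitted here.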
International Nuclear Information System (INIS)
Tiwari, P; Xie, Y; Chen, Y; Deasy, J
2014-01-01
Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling strategy can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
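The voxel-selection idea can be sketched as below: keep every boundary voxel, group interior voxels with similar influence-matrix signatures, and draw a fixed fraction from each group. Coarse quantization stands in for the paper's clustering algorithm, and all sizes and the 10% rate are illustrative, not clinical data:

```python
import random

# Sketch of influence-signature-based voxel sampling. Interior voxels
# whose (quantized) influence-matrix rows match are grouped together,
# and a pre-set fraction of each group joins all boundary voxels in the
# optimization. Quantization is a stand-in for real clustering.

random.seed(0)
n_interior, n_beamlets = 1000, 3
influence = [[random.random() for _ in range(n_beamlets)]
             for _ in range(n_interior)]            # dose per unit beamlet weight
boundary_ids = list(range(1000, 1100))              # boundary voxels: always kept

clusters = {}
for vid, row in enumerate(influence):
    key = tuple(int(v * 2) for v in row)            # 2 levels per beamlet
    clusters.setdefault(key, []).append(vid)

rate = 0.10                                         # pre-set sampling rate
sampled = []
for members in clusters.values():
    k = max(1, round(rate * len(members)))          # at least one per cluster
    sampled.extend(random.sample(members, k))

selected = boundary_ids + sampled
print(f"{len(sampled)} of {n_interior} interior voxels kept, "
      f"plus {len(boundary_ids)} boundary voxels")
```

With a finer quantization (or a proper clustering algorithm such as k-means on the signatures), the groups track dose gradients more closely at the cost of keeping more representatives.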
Bare-Bones Teaching-Learning-Based Optimization
Directory of Open Access Journals (Sweden)
Feng Zou
2014-01-01
Full Text Available The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, which is a hybridization of the learning strategy of the teacher phase in the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learning strategy of the learner phase in the standard TLBO or the new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. From the conducted experiments it can be observed that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.
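The Gaussian-sampling mechanism behind bare-bones variants can be illustrated on a toy learner update. This is a sketch of the generic bare-bones idea (sample around the midpoint between a learner and the teacher, with spread given by their distance), not the paper's exact BBTLBO operators:

```python
import random

# Toy bare-bones update on the sphere function: each learner's candidate
# position is drawn from a Gaussian centred between its own position and
# the teacher's, with standard deviation equal to their distance per
# dimension. Greedy acceptance follows the TLBO convention.

def sphere(x):
    return sum(v * v for v in x)

random.seed(1)
dim, n_learners, iters = 5, 20, 200
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_learners)]

for _ in range(iters):
    teacher = min(pop, key=sphere)                 # best learner acts as teacher
    for i, x in enumerate(pop):
        cand = [random.gauss((t + v) / 2, abs(t - v))
                for t, v in zip(teacher, x)]
        if sphere(cand) < sphere(x):               # keep only improvements
            pop[i] = cand

best = min(sphere(x) for x in pop)
print(f"best fitness after {iters} iterations: {best:.3e}")
```

Because the spread shrinks as learners approach the teacher, the sampling is exploratory early on and self-focusing near convergence, which is the appeal of bare-bones schemes.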
Directory of Open Access Journals (Sweden)
Gener Tadeu Pereira
2013-10-01
Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the design of additional sampling schemes proved very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
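The spatial simulated annealing step can be sketched as below. To keep the example self-contained, the mean distance from prediction points to the nearest sample serves as a crude stand-in for the mean kriging variance (both fall as spatial coverage improves); the grids, jitter size, and cooling schedule are illustrative:

```python
import math, random

# Sketch of spatial simulated annealing (SSA) for placing extra sample
# points: one candidate point is jittered per iteration and the move is
# kept according to the Metropolis rule under a geometric cooling
# schedule. The criterion is a simple proxy for mean kriging variance.

random.seed(0)
pred = [(x, y) for x in range(0, 601, 50) for y in range(0, 601, 50)]    # prediction grid
base = [(x, y) for x in range(0, 601, 100) for y in range(0, 601, 100)]  # existing samples
extra = [(random.uniform(0, 600), random.uniform(0, 600)) for _ in range(5)]

def criterion(samples):
    # Mean distance (m) from prediction points to the nearest sample.
    return sum(min(math.dist(p, s) for s in samples) for p in pred) / len(pred)

temp = 10.0
cur = best = criterion(base + extra)
for _ in range(300):
    i = random.randrange(len(extra))
    old = extra[i]
    extra[i] = tuple(min(600, max(0, c + random.gauss(0, 25))) for c in old)
    new = criterion(base + extra)
    if new < cur or random.random() < math.exp((cur - new) / temp):
        cur = new                          # accept the move (possibly uphill)
        best = min(best, cur)
    else:
        extra[i] = old                     # reject: restore the old point
    temp *= 0.99                           # geometric cooling

print(f"proxy criterion with optimized extra points: {best:.1f} m")
```

Replacing the proxy with the actual mean kriging variance from a fitted variogram recovers the SSA objective used in studies of this kind.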
Energy Optimal Control Strategy of PHEV Based on PMP Algorithm
Directory of Open Access Journals (Sweden)
Tiezhou Wu
2017-01-01
Full Text Available Under the global call for “energy saving” and the current boom in the development of energy storage technology at home and abroad, energy-optimal control of the whole hybrid electric vehicle power system, as one of the core technologies of electric vehicles, is bound to become a key target of “clean energy” vehicle development and research. This paper considers the constraints on the performance of the energy storage system in a Parallel Hybrid Electric Vehicle (PHEV): within a single driving cycle, the lithium-ion battery charges and discharges frequently, the PHEV consumes a large amount of fuel energy, and energy recovery is difficult. The research therefore uses a lithium-ion battery combined with a supercapacitor (SC), that is, a hybrid energy storage system (Li-SC HESS), working together with an internal combustion engine (ICE) to drive the PHEV. Combined with a PSO-PI controller and a power-limited management approach internal to the Li-SC HESS, the research proposes a PHEV energy-optimal control strategy. It is based on a revised Pontryagin's minimum principle (PMP) algorithm; a PHEV simulation model built in the ADVISOR software verifies its effectiveness and feasibility. Finally, the results show that the energy optimization control strategy can improve the instantaneity of tracking the PHEV minimum fuel consumption trajectory, save energy, and prolong the life of the lithium-ion batteries, thereby improving hybrid energy storage system performance.
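The core of a PMP-based energy management strategy is instantaneous Hamiltonian minimization: at each time step, choose the engine/battery power split that minimizes fuel rate plus a costate-weighted battery power. The toy engine map, demand profile, and costate values below are illustrative, not the paper's model:

```python
# Sketch of PMP energy management: minimize the instantaneous Hamiltonian
# H = fuel_rate(P_engine) + costate * P_battery over candidate battery
# powers. Negative battery power means charging. All numbers are toy
# values, not a calibrated vehicle model.

def fuel_rate(p_eng):                      # g/s, toy convex engine map
    return 0.0 if p_eng <= 0 else 0.2 + 0.05 * p_eng + 0.001 * p_eng ** 2

def pmp_split(p_demand, costate, p_batt_max=20.0):
    # Enumerate candidate battery powers (kW); the engine covers the rest.
    candidates = [x / 10 for x in range(-int(p_batt_max * 10),
                                        int(p_batt_max * 10) + 1)]
    def hamiltonian(p_batt):
        p_eng = max(0.0, p_demand - p_batt)
        return fuel_rate(p_eng) + costate * p_batt
    return min(candidates, key=hamiltonian)

demand = [5.0, 15.0, 30.0, 10.0]           # kW over a toy drive cycle
for lam in (0.02, 0.10):                   # low vs high "cost" of battery energy
    splits = [pmp_split(p, lam) for p in demand]
    print(f"costate {lam}: battery power per step {splits}")
```

With a low costate the battery supplies as much power as it can; with a high costate the optimizer protects (or recharges) the battery. In the revised PMP algorithm the costate itself is adjusted so that the battery's state of charge meets its end-of-cycle constraint.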
International Nuclear Information System (INIS)
Hong, Taehoon; Koo, Choongwan; Kim, Hyunjoong; Seon Park, Hyo
2014-01-01
The number of multi-family housing complexes (MFHCs) more than 15 years old in South Korea is expected to exceed 5 million by 2015. Accordingly, the demand for energy retrofits of the deteriorating MFHCs is rapidly increasing. This study aimed to develop a decision support model for establishing the optimal energy retrofit strategy for existing MFHCs. It can provide clear criteria for establishing the carbon emissions reduction target (CERT) and allow efficient budget allocation for conducting the energy retrofit. The CERT for “S” MFHC, one of the MFHCs located in Seoul, used as a case study, was set at 23.0% (electricity) and 27.9% (gas energy). In the economic and environmental assessment, scenario #12 was determined to be the optimal scenario (ranked second with regard to NPV40, the net present value at year 40, and third with regard to SIR40, the saving-to-investment ratio at year 40). The proposed model could be useful for owners, construction managers, or policymakers in charge of establishing energy retrofit strategies for existing MFHCs. It could allow contractors in a competitive bidding process to rationally establish the CERT and select the optimal energy retrofit strategy. It can also be applied to any other country or sector in a global environment. - Highlights: • The proposed model was developed to establish the optimal energy retrofit strategy. • Advanced case-based reasoning was applied to establish the community-based CERT. • Energy simulation was conducted to analyze the effects of the energy retrofit strategy. • The optimal strategy can be finally selected based on the LCC and LCCO2 analysis. • It could be extended to any other country or sector in the global environment
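The two economic indicators used to rank scenarios, NPV40 and SIR40, can be sketched as below. The discount rate, investment, and annual savings are hypothetical, not the case-study values:

```python
# Sketch of the economic screening step: net present value (NPV) and
# savings-to-investment ratio (SIR) of one retrofit scenario over a
# 40-year horizon. All monetary figures and the discount rate are
# illustrative placeholders.

def npv(investment, annual_saving, rate, years):
    pv = sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))
    return pv - investment          # positive NPV: savings outweigh cost

def sir(investment, annual_saving, rate, years):
    pv = sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))
    return pv / investment          # SIR > 1: the retrofit pays for itself

inv, save, r = 1_000_000, 80_000, 0.05    # currency units, hypothetical
print(f"NPV40 = {npv(inv, save, r, 40):,.0f}")
print(f"SIR40 = {sir(inv, save, r, 40):.2f}")
```

Ranking candidate scenarios by both indicators, as the study does, guards against scenarios that look good on absolute value (NPV) but poor on capital efficiency (SIR), or vice versa.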
Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process
Directory of Open Access Journals (Sweden)
Chuancun Yin
2015-01-01
Full Text Available We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy.
Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process
Yuen, Kam Chuen; Shen, Ying
2015-01-01
We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655
Optimal orientation in flows : Providing a benchmark for animal movement strategies
McLaren, James D.; Shamoun-Baranes, Judy; Dokter, Adriaan M.; Klaassen, Raymond H. G.; Bouten, Willem
2014-01-01
Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal)
Optimal CCD readout by digital correlated double sampling
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results, obtained with both time- and frequency-domain noise generation models for completeness, are presented to validate the theory.
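The basic DCDS operation, oversampling the reset and video levels, averaging each window digitally, and differencing the means, can be sketched as below. This simplified model uses white noise only; as the abstract stresses, a real analysis must also cover 1/f noise, the signal-conditioning stage, and ADC resolution. All numbers are illustrative:

```python
import random, statistics

# Simplified DCDS pixel readout: N oversamples of the reset level and N
# of the video (signal) level are averaged digitally, and the difference
# of the means estimates the signal. With white noise only, the read
# noise falls roughly as 1/sqrt(N) per window.

random.seed(42)

def read_pixel(signal_e, noise_e, n_samples):
    reset_level = 500.0                              # arbitrary offset (e-)
    reset = [random.gauss(reset_level, noise_e) for _ in range(n_samples)]
    video = [random.gauss(reset_level + signal_e, noise_e)
             for _ in range(n_samples)]
    return statistics.mean(video) - statistics.mean(reset)

signal, noise = 100.0, 10.0
spreads = []
for n in (1, 16, 256):
    estimates = [read_pixel(signal, noise, n) for _ in range(400)]
    spreads.append(statistics.stdev(estimates))
    print(f"N = {n:3d} samples/window: read noise ≈ {spreads[-1]:.2f} e-")
```

The uniform-weight averaging above is only one choice of digital filter; the paper's discrete-time analysis is precisely about choosing the filter weights and sampling frequency that optimize the output signal-to-noise ratio once correlated noise is included.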
Social Optimization and Pricing Policy in Cognitive Radio Networks with an Energy Saving Strategy
Directory of Open Access Journals (Sweden)
Shunfu Jin
2016-01-01
Full Text Available The rapid growth of wireless applications results in an increase in demand for spectrum resource and communication energy. In this paper, we firstly introduce a novel energy saving strategy in cognitive radio networks (CRNs) and then propose an appropriate pricing policy for secondary user (SU) packets. We analyze the behavior of data packets in a discrete-time single-server priority queue under a multiple-vacation discipline. With the help of a Quasi-Birth-Death (QBD) process model, we obtain the joint distribution of the number of SU packets and the state of the base station (BS) via the Matrix-Geometric Solution method. We assess the average latency of SU packets and the energy saving ratio of the system. According to a natural reward-cost structure, we study the individually optimal behavior and the socially optimal behavior of the energy saving strategy and use an optimization algorithm based on the standard particle swarm optimization (SPSO) method to search for the socially optimal arrival rate of SU packets. By comparing the individually optimal and socially optimal behavior, we impose an appropriate admission fee on SU packets. Finally, we present numerical results to show the impacts of system parameters on the system performance and the pricing policy.
Directory of Open Access Journals (Sweden)
Yang Sun
2018-01-01
Full Text Available Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions and compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators in decision-making, specifically in defense strategy selection, and the experiment results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or the GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
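One reason STOM-style scalarization can beat linear scalarization is visible on a toy nonconvex Pareto front: linear weighting can only select extreme points there, while a Chebyshev-type form reaches compromise solutions. The sketch below uses a simplified max-of-weighted-deviations form as a stand-in for STOM; the front, ideal point, and weights are illustrative:

```python
# Toy contrast between linear scalarization and a STOM-style (weighted
# Chebyshev) scalarization on a nonconvex front where both objectives
# are minimized. The Chebyshev form here is a simplified stand-in for
# the Satisficing Trade-Off Method, which also uses aspiration levels.

front = [(f1, 1 - f1 ** 2) for f1 in [i / 20 for i in range(21)]]  # nonconvex
ideal = (0.0, 0.0)
weights = (0.5, 0.5)

def linear(p):
    return sum(w * v for w, v in zip(weights, p))

def stom_like(p):
    # Worst weighted deviation from the ideal point.
    return max(w * (v - z) for w, v, z in zip(weights, p, ideal))

best_lin = min(front, key=linear)
best_cheb = min(front, key=stom_like)
print("linear scalarization picks:", best_lin)
print("STOM-style scalarization picks:", best_cheb)
```

On this front the linear score is minimized only at the extremes, whereas the Chebyshev-type score selects an interior trade-off point, which matches the intuition for why STOM performed better in the reported experiments.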
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which distribution they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
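The idea of a normative sample-size benchmark can be sketched with a toy decision-theoretic setup: two Bernoulli options, a per-draw sampling cost, and a one-shot final payoff. The "optimal" n minimizes expected error cost plus sampling cost. All parameters below are illustrative, not the paper's model:

```python
import random

# Toy benchmark for DFE sample sizes: sample each of two Bernoulli
# options n times, pick the one with the higher sample mean, and pay
# for every draw. Expected total cost = P(picking the worse option)
# * payoff gap + 2 * n * cost; a normative n minimizes this trade-off.

random.seed(7)
p, q, payoff, cost = 0.6, 0.5, 100.0, 0.05   # success probs, stakes, draw cost

def expected_cost(n, trials=4000):
    wrong = 0
    for _ in range(trials):
        a = sum(random.random() < p for _ in range(n))
        b = sum(random.random() < q for _ in range(n))
        if b > a or (b == a and random.random() < 0.5):  # ties broken at random
            wrong += 1
    return (wrong / trials) * (p - q) * payoff + 2 * n * cost

costs = {n: expected_cost(n) for n in (1, 5, 20, 80, 320)}
for n, c in costs.items():
    print(f"n = {n:3d}: expected cost {c:.2f}")
```

Small n is cheap but error-prone, large n is accurate but expensive, so the expected cost is U-shaped in n; the paper's contribution is deriving such optima under arbitrary, numerically evaluable constraints rather than this Monte Carlo shortcut.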
Two-phase strategy of controlling motor coordination determined by task performance optimality.
Shimansky, Yury P; Rand, Miya K
2013-02-01
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.
Yin, Chuancun; Wang, Chunwei
2009-11-01
The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends which are paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] which established a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of barrier dividend strategies considered in the three recent papers.
Optimization of cooling strategy and seeding by FBRM analysis of batch crystallization
Zhang, Dejiang; Liu, Lande; Xu, Shijie; Du, Shichao; Dong, Weibing; Gong, Junbo
2018-03-01
A method is presented for optimizing the cooling strategy and seed loading simultaneously. Focused beam reflectance measurement (FBRM) was used to determine the approximate optimal cooling profile. Using these results in conjunction with a constant growth rate assumption, a modified Mullin-Nyvlt trajectory could be calculated. This trajectory could suppress secondary nucleation and has the potential to control the product's polymorph distribution. Compared with linear and two-step cooling, the modified Mullin-Nyvlt trajectory yields a larger size distribution and better morphology. Based on the calculated results, an optimized seed loading policy was also developed. This policy could be useful for guiding the batch crystallization process.
A two-level strategy to realize life-cycle production optimization in an operational setting
Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.
2012-01-01
We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles
A two-level strategy to realize life-cycle production optimization in an operational setting
Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.
2013-01-01
We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles
International Nuclear Information System (INIS)
Porteus, E.
1982-01-01
The study of infinite-horizon nonstationary dynamic programs using the operator approach is continued. The point of view here differs slightly from that taken by others, in that Denardo's local income function is not used as a starting point. Infinite-horizon values are defined as limits of finite-horizon values, as the horizons get long. Two important conditions of an earlier paper are weakened, yet the optimality equations, the optimality criterion, and the existence of optimal "structured" strategies are still obtained.
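The defining idea, infinite-horizon values as limits of finite-horizon values, can be illustrated on a tiny stationary Markov decision process, where T-horizon value iteration approaches the infinite-horizon discounted value as T grows. The states, rewards, and discount factor are illustrative:

```python
# Sketch: finite-horizon values of a 2-state deterministic MDP converge
# to the infinite-horizon discounted values as the horizon T grows.
# All model data are toy values.

states = [0, 1]
actions = {0: ["stay", "go"], 1: ["stay"]}
reward = {(0, "stay"): 1.0, (0, "go"): 0.0, (1, "stay"): 2.0}
trans = {(0, "stay"): 0, (0, "go"): 1, (1, "stay"): 1}   # deterministic moves
gamma = 0.9

def finite_horizon_value(T):
    v = {s: 0.0 for s in states}                         # horizon-0 values
    for _ in range(T):                                   # one Bellman backup per step
        v = {s: max(reward[s, a] + gamma * v[trans[s, a]]
                    for a in actions[s]) for s in states}
    return v

for T in (1, 5, 50):
    print(f"T = {T:2d}: {finite_horizon_value(T)}")
# Infinite-horizon limit: from state 1 the value is 2/(1 - 0.9) = 20;
# from state 0 the optimal strategy is "go" then "stay" forever,
# giving 0 + 0.9 * 20 = 18.
```

In the stationary discounted case the limit is the fixed point of the Bellman operator; the paper's contribution is establishing analogous limits, optimality equations, and structured optimal strategies in the nonstationary setting under weakened conditions.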
A Dynamic Optimization Strategy for the Operation of Large Scale Seawater Reverse Osmosis System
Directory of Open Access Journals (Sweden)
Aipeng Jiang
2014-01-01
Full Text Available In this work, a strategy was proposed for efficient solution of the dynamic model of an SWRO system. Since the dynamic model is formulated as a set of differential-algebraic equations, simultaneous strategies based on collocation on finite elements were used to transform the DAOP into a large-scale nonlinear programming problem named Opt2. Then, simulation of the RO process and storage tanks was carried out element by element and step by step with fixed control variables. All the obtained values of these variables were then used as the initial values for the optimal solution of the SWRO system. Finally, in order to accelerate computing efficiency while keeping sufficient accuracy in the solution of Opt2, a simple but efficient finite element refinement rule was used to reduce the scale of Opt2. The proposed strategy was applied to a large-scale SWRO system with 8 RO plants and 4 storage tanks as a case study. Computing results show that the proposed strategy is quite effective for optimal operation of the large-scale SWRO system; the optimization problem can be successfully solved within tens of iterations and several minutes even when load and other operating parameters fluctuate.
Hu, Wang; Yen, Gary G; Luo, Guangchun
2017-06-01
It is a daunting challenge to balance the convergence and diversity of an approximate Pareto front in a many-objective optimization evolutionary algorithm. A novel algorithm, named many-objective particle swarm optimization with the two-stage strategy and parallel cell coordinate system (PCCS), is proposed in this paper to improve the comprehensive performance in terms of the convergence and diversity. In the proposed two-stage strategy, the convergence and diversity are separately emphasized at different stages by a single-objective optimizer and a many-objective optimizer, respectively. A PCCS is exploited to manage the diversity, such as maintaining a diverse archive, identifying the dominance resistant solutions, and selecting the diversified solutions. In addition, a leader group is used for selecting the global best solutions to balance the exploitation and exploration of a population. The experimental results illustrate that the proposed algorithm outperforms six chosen state-of-the-art designs in terms of the inverted generational distance and hypervolume over the DTLZ test suite.
Adiabatic quantum games and phase-transition-like behavior between optimal strategies
de Ponte, M. A.; Santos, Alan C.
2018-06-01
In this paper we propose a game of a single qubit whose strategies can be implemented adiabatically. In addition, we show how to implement the strategies of a quantum game through controlled adiabatic evolutions, where we analyze the payment of a quantum player for various situations of interest: (1) when the players receive distinct payments, (2) when the initial state is an arbitrary superposition, and (3) when the device that implements the strategy is inefficient. Through a graphical analysis, it is possible to notice that the curves that represent the gains of the players present a behavior similar to the curves that give rise to a phase transition in thermodynamics. These transitions are associated with optimal strategy changes and occur in the absence of entanglement and interaction between the players.
Mars Sample Return - Launch and Detection Strategies for Orbital Rendezvous
Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.
2011-01-01
This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/cache rover in 2018, an orbiter with an Earth return vehicle in 2022, and a fetch rover and ascent vehicle in 2024. Strategies are presented to launch the sample into a coplanar orbit with the Orbiter which facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which provide multiple launch opportunities with similar geometries for detection and rendezvous.
Mars Sample Return: Launch and Detection Strategies for Orbital Rendezvous
Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.
2011-01-01
This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/caching rover in 2018, an Earth return orbiter in 2022, and a fetch rover with ascent vehicle in 2024. Strategies are presented to launch the sample into a nearly coplanar orbit with the Orbiter which would facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which would provide multiple launch opportunities with similar geometries for detection and rendezvous.
Energy Technology Data Exchange (ETDEWEB)
Delprat, S.; Guerra, T.M. [Universite de Valenciennes et du Hainaut-Cambresis, LAMIH UMR CNRS 8530, 59 - Valenciennes (France); Rimaux, J. [PSA Peugeot Citroen, DRIA/SARA/EEES, 78 - Velizy Villacoublay (France); Paganelli, G. [Center for Automotive Research, Ohio (United States)
2002-07-01
Control strategies are algorithms that calculate the power split between the engine and the motor of a hybrid vehicle in order to minimize the fuel consumption and/or emissions. Some algorithms are devoted to real-time application whereas others are designed for global optimization in simulation. The latter provide solutions which can be used to evaluate the performance of a given hybrid vehicle or a given real-time control strategy. The control strategy problem is first written in the form of a constrained optimization problem. A solution based on optimal control is proposed. Results are given for the European Normalized Cycle and a parallel single-shaft hybrid vehicle built at the LAMIH (France). (authors)
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
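The two-point design logic, an early sample as soon as BAL concentrations exceed the LOQ and a late sample on the declining plasma limb where the plasma concentration matches that at the early time, can be sketched for a one-compartment plasma model with first-order absorption. The parameters below are illustrative, not drug-specific:

```python
import math

# Sketch of the two-point BAL design: for a one-compartment model with
# first-order absorption, C(t) = A * (exp(-k*t) - exp(-ka*t)), find the
# time on the declining limb (after t_max) where the plasma
# concentration equals the concentration at the early sample time.
# All parameters (A, k, ka, t_early) are assumed illustrative values.

A, k, ka = 12.0, 0.2, 2.0             # mg/L, 1/h, 1/h

def plasma(t):
    return A * (math.exp(-k * t) - math.exp(-ka * t))

t_early = 0.25                        # earliest feasible BAL time (h), assumed
c_ref = plasma(t_early)

# Bisection for plasma(t) == c_ref on the declining limb, after the peak.
t_max = math.log(ka / k) / (ka - k)   # time of maximum concentration
lo, hi = t_max, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if plasma(mid) > c_ref else (lo, mid)
t_late = (lo + hi) / 2
print(f"early sample at {t_early:.2f} h, matched late sample at {t_late:.2f} h")
```

Matching the two plasma concentrations is what lets the pair of BAL samples separate the rate from the extent of pulmonary distribution in the population analysis described above.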
Brus, D.J.; Gruijter, de J.J.
1997-01-01
Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based
Particle Swarm Optimization With Interswarm Interactive Learning Strategy.
Qin, Quande; Cheng, Shi; Zhang, Qingyu; Li, Li; Shi, Yuhui
2016-10-01
The learning strategy in the canonical particle swarm optimization (PSO) algorithm is often blamed as the primary reason for loss of diversity. Maintaining population diversity is crucial for preventing particles from becoming trapped in local optima. In this paper, we present an improved PSO algorithm with an interswarm interactive learning strategy (IILPSO) that overcomes the drawbacks of the canonical PSO learning strategy. IILPSO is inspired by the phenomenon in human society that interactive learning takes place among different groups. Particles in IILPSO are divided into two swarms. The interswarm interactive learning (IIL) behavior is triggered when the best particle's fitness value in both swarms does not improve for a certain number of iterations. According to the best particle's fitness value of each swarm, the softmax and roulette methods are used to determine which swarm acts as the learning swarm and which as the learned swarm. In addition, a velocity mutation operator and a global best vibration strategy are used to improve the algorithm's global search capability. The IIL strategy is applied to PSO with global star and local ring structures, termed the IILPSO-G and IILPSO-L algorithms, respectively. Numerical experiments compare the proposed algorithms with eight popular PSO variants. The experimental results show that IILPSO performs well in terms of solution accuracy, convergence speed, and reliability. Finally, the variation of population diversity over the entire search process explains why IILPSO performs effectively.
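The role assignment step described above (a softmax over the two swarms' best fitness values) might look like the following sketch. The temperature parameter and the minimisation convention are our assumptions for illustration; the paper's exact formulation may differ.

```python
import math
import random

def assign_roles(best_fitness, temperature=1.0, rng=random.Random(0)):
    """Softmax over negated best-fitness values (minimisation): the swarm
    whose best particle is fitter is more likely to act as the learned
    (teacher) swarm; the other becomes the learning swarm."""
    scores = [-f / temperature for f in best_fitness]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    teacher = 0 if rng.random() < probs[0] else 1
    return teacher, 1 - teacher, probs
```

A roulette-wheel draw over `probs` plays the same role as the roulette method mentioned in the abstract.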
Evolution strategies for robust optimization
Kruisselbrink, Johannes Willem
2012-01-01
Real-world (black-box) optimization problems often involve various types of uncertainties and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality, but
Energy Technology Data Exchange (ETDEWEB)
Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)
2015-04-15
Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques while ensuring the accuracy of the optimization. However, earlier studies have drawbacks: the optimization loop involves three phases and empirical parameters. We propose a united sampling criterion that simplifies the algorithm and achieves the global optimum of constrained problems without any empirical parameters. It can select points located in a feasible region with high model uncertainty, as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because the sample points are not clustered within extremely small regions, as in super-EGO. The performance of the proposed method (the solvability of a problem, convergence properties, and efficiency) is validated through nonlinear numerical examples with disconnected feasible regions.
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. This is computationally complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem has become more complex with the inclusion of transmission constraints. This paper simplifies the profit maximisation problem to a minimisation function in which the transmission constraints, the operating limits and the ISO market clearing functions are considered without KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14- and IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
Optimization of the two-sample rank Neyman-Pearson detector
Akimov, P. S.; Barashkov, V. M.
1984-10-01
The development of optimal rank-based algorithms for finite sample sizes involves considerable mathematical difficulty. The present investigation provides results on the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, representing interference. Attention is given to conditions in the absence of a signal, the probability of detecting an arriving signal, details regarding the use of the Neyman-Pearson criterion, the scheme of an optimal rank multichannel incoherent detector, and an analysis of the detector.
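A minimal sketch of the rank computation described above, assuming (as in Wilcoxon-type two-sample detectors) that each observation's rank counts the interference reference samples it exceeds. The threshold would be chosen offline from the Neyman-Pearson false-alarm constraint; the function names are ours.

```python
def rank_statistic(x, y):
    """Two-sample rank statistic: the rank of each observation x_i is the
    number of noise reference samples y_j it exceeds; the detector sums
    these ranks over all observations."""
    return sum(sum(1 for yj in y if xi > yj) for xi in x)

def np_rank_detector(x, y, threshold):
    """Declare 'signal present' when the rank sum exceeds the threshold set
    for the desired Neyman-Pearson false-alarm probability."""
    return rank_statistic(x, y) > threshold
```

For example, two observations that each exceed both reference samples give a rank sum of 4, the maximum for this sample size.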
Asymptotic estimation of reactor fueling optimal strategy
International Nuclear Information System (INIS)
Simonov, V.D.
1985-01-01
The problem of improving the technical-economic factors of operating and designed nuclear power plant units by developing an internal fuel cycle strategy (reactor refueling regime optimization), taking into account the structural peculiarities of the energy system as a whole, is considered. It is shown that, in the search for asymptotic solutions of reactor fueling planning tasks, the model of fuel energy potential (FEP) is the most suitable and effective. FEP represents the energy which may be produced from the fuel in a reactor with real dimensions and power, but with a hypothetical fresh fuel supply regime providing similar burnup of all the fuel passing through the reactor, continuous reloading of infinitely small fuel portions at full power, and infinitely rapid mixing of fuel in the reactor core volume. The reactor fuel run with such a standard fuel cycle may serve as a quantitative measure of FEP. Assessment results for the optimal periodicity of fresh fuel supply to a WWER-440 reactor are given as an example. The conclusion is drawn that with a fuel enrichment of x = 3.3%, a run of 300 days is economically justified, taking into account that the cost of producing one energy unit exceeds 3 kopecks/kWh.
Constrained optimisation of spatial sampling : a geostatistical approach
Groenigen, van J.W.
1999-01-01
This thesis aims at the development of optimal sampling strategies for geostatistical studies. Special emphasis is on the optimal use of ancillary data, such as correlated imagery, preliminary observations and historical knowledge. Although the object of all studies
The Optimal Strategy to Research Pension Funds in China Based on the Loss Function
Gao, Jian-wei; Guo, Hong-zhen; Ye, Yan-cheng
2007-01-01
Based on the theory of actuarial present value, a pension fund investment goal can be formulated as an objective function. The mean-variance model is extended by defining the objective loss function. Furthermore, using the theory of stochastic optimal control, an optimal investment model is established under the minimum expectation of loss function. In the light of the Hamilton-Jacobi-Bellman (HJB) equation, the analytic solution of the optimal investment strategy problem is derived.
Risk-Averse Suppliers’ Optimal Pricing Strategies in a Two-Stage Supply Chain
Directory of Open Access Journals (Sweden)
Rui Shen
2013-01-01
Full Text Available Risk-averse suppliers' optimal pricing strategies in two-stage supply chains under a competitive environment are discussed. The suppliers in this paper focus more on losses than on profits, and they care about their long-term relationships with their customers. We introduce for the suppliers a loss function that covers both current loss and future loss. The optimal wholesale price is solved under risk-neutral and risk-averse settings, and under a combination of minimizing loss and controlling risk, respectively. Some properties of, and relations among, these optimal wholesale prices are given as well. A numerical example is given to illustrate the performance of the proposed method.
Szramka-Pawlak, B; Dańczak-Pazdrowska, A; Rzepa, T; Szewczyk, A; Sadowska-Przytocka, A; Żaba, R
2013-01-01
The clinical course of localized scleroderma may consist of bodily deformations, and bodily functions may also be affected. Additionally, the secondary lesions, such as discoloration, contractures, and atrophy, are unlikely to regress. The aforementioned symptoms and functional disturbances may decrease one's quality of life (QoL). Although much has been mentioned in the medical literature regarding QoL in persons suffering from dermatologic diseases, no data specifically describing patients with localized scleroderma exist. The aim of the study was to explore QoL in localized scleroderma patients and to examine their coping strategies in regard to optimism and QoL. The study included 41 patients with localized scleroderma. QoL was evaluated using the SKINDEX questionnaire, and levels of dispositional optimism were assessed using the Life Orientation Test-Revised. In addition, individual coping strategy was determined using the Mini-MAC scale and physical condition was assessed using the Localized Scleroderma Severity Index. The mean QoL score amounted to 51.10 points, with mean scores for individual components as follows: symptoms = 13.49 points, emotions = 21.29 points, and functioning = 16.32 points. A relationship was detected between QoL and the level of dispositional optimism as well as with coping strategies known as anxious preoccupation and helplessness-hopelessness. Higher levels of optimism predicted a higher general QoL. In turn, greater intensity of anxious preoccupied and helpless-hopeless behaviors predicted a lower QoL. Based on these results, it may be stated that localized scleroderma patients have a relatively high QoL, which is accompanied by optimism as well as a lower frequency of behaviors typical of emotion-focused coping strategies.
Development of a codon optimization strategy using the efor RED reporter gene as a test case
Yip, Chee-Hoo; Yarkoni, Orr; Ajioka, James; Wan, Kiew-Lian; Nathan, Sheila
2018-04-01
Synthetic biology is a platform that enables high-level synthesis of useful products such as pharmaceutically relevant drugs, bioplastics and green fuels from synthetic DNA constructs. Large-scale expression of these products can be achieved in an industrially compliant host such as Escherichia coli. To maximise the production of recombinant proteins in a heterologous host, the genes of interest are usually codon-optimized based on the codon usage of the host. However, the bioinformatics freeware available for standard codon optimization might not be ideal for determining the best sequence for the synthesis of synthetic DNA. Synthesizing incorrect sequences can prove to be a costly error; to avoid this, a codon optimization strategy was developed based on E. coli codon usage, with the efor RED reporter gene as a test case. This strategy replaces codons encoding serine, leucine, proline and threonine with the most frequently used codons in E. coli. Furthermore, codons encoding valine and glycine are substituted with the second most frequently used codons in E. coli. Both the optimized and original efor RED genes were ligated to the pJS209 plasmid backbone using Gibson Assembly, and the recombinant DNAs were transformed into the E. coli E. cloni 10G strain. The fluorescence intensity per cell density of the optimized sequence was improved by 20% compared to the original sequence. Hence, the developed codon optimization strategy is recommended when designing an optimal sequence for heterologous protein production in E. coli.
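The substitution rule can be sketched as a simple lookup table. The specific codon choices below are assumed E. coli preferences chosen for illustration only; they should be verified against an E. coli codon-usage table before any real design, and the function name is ours.

```python
# Assumed E. coli codon preferences -- illustrative only; verify against a
# codon-usage table before designing a real sequence.
PREFERRED = {
    "S": "AGC", "L": "CTG", "P": "CCG", "T": "ACC",  # most frequent codon
    "V": "GTT", "G": "GGT",                          # second most frequent codon
}

def optimize_codons(protein):
    """Emit codons for the six targeted amino acids only; in the full
    strategy the remaining residues keep their original codons (elided)."""
    return "".join(PREFERRED[aa] for aa in protein if aa in PREFERRED)
```

For the six targeted residues in a row, the output is simply the concatenation of the table entries.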
Study on the Optimal Charging Strategy for Lithium-Ion Batteries Used in Electric Vehicles
Directory of Open Access Journals (Sweden)
Shuo Zhang
2014-10-01
Full Text Available The charging method of lithium-ion batteries used in electric vehicles (EVs) significantly affects their commercial application. This paper aims to make three contributions to the existing literature. (1) In order to achieve an efficient charging strategy for lithium-ion batteries with shorter charging time and lower charging loss, the trade-off between charging loss and charging time is analyzed in detail through a dynamic programming (DP) optimization algorithm. (2) To reduce the computation time consumed during the optimization process, we propose a database-based optimization approach: after off-line calculation, the results can be applied to on-line charging. (3) The novel database-based DP method is proposed, and the simulation results illustrate that this method can effectively find suboptimal charging strategies under a given balance between charging loss and charging time.
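The loss-versus-time trade-off can be illustrated with a toy backward dynamic programme over a discretised state of charge. The ohmic-loss model, the candidate current set and the weights below are our assumptions, not the paper's battery model:

```python
def optimal_charge_profile(soc_steps=10, currents=(1.0, 2.0, 4.0),
                           resistance=0.05, w_time=1.0, w_loss=1.0):
    """Backward DP over a discretised state of charge: each stage transfers
    one SOC increment dq, trading step time (dq / I) against the ohmic loss
    (I^2 * R * dt) incurred over that step."""
    dq = 1.0 / soc_steps
    cost_to_go = [0.0] * (soc_steps + 1)  # cost_to_go[k]: k increments done
    policy = [0.0] * soc_steps            # chosen current at each stage
    for k in range(soc_steps - 1, -1, -1):
        best_cost, best_i = float("inf"), None
        for i_chg in currents:
            dt = dq / i_chg
            step = w_time * dt + w_loss * i_chg ** 2 * resistance * dt
            if step + cost_to_go[k + 1] < best_cost:
                best_cost, best_i = step + cost_to_go[k + 1], i_chg
        cost_to_go[k], policy[k] = best_cost, best_i
    return cost_to_go[0], policy
```

Setting `w_loss=0` recovers the fastest (highest-current) profile, while `w_time=0` recovers the lowest-loss (lowest-current) profile, which is exactly the trade-off the weights tune.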
McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana
2018-02-01
The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentration (EMC) of each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter at each site. 17 different sampling strategies, including random and fixed strategies, were tested for estimating SMCs, which were compared with the measured SMCs. The ratios of estimated to measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that random sampling strategies were the most promising for reproducing SMCs of TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences between taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in collecting more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments is needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated with the 90% CI ratio of estimated to measured SMCs (R2 = 0.49; P < 0.05), which can be used to estimate the sampling frequency needed to accurately characterise SMCs of pollutants.
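The EMC and SMC definitions used above reduce to two weighted averages, sketched here (the variable names are ours; flows and volumes are assumed to be in consistent units):

```python
def emc(flows, concs):
    """Event Mean Concentration: flow-weighted average of the concentrations
    sampled within one event."""
    total_flow = sum(flows)
    return sum(q * c for q, c in zip(flows, concs)) / total_flow

def smc(event_volumes, event_emcs):
    """Site Mean Concentration: volume-weighted average of the per-event
    EMCs measured at one site."""
    total_vol = sum(event_volumes)
    return sum(v * e for v, e in zip(event_volumes, event_emcs)) / total_vol
```

With equal flows (or equal event volumes) both collapse to the plain arithmetic mean, which is why unweighted averaging biases sites with strongly varying event sizes.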
International Nuclear Information System (INIS)
Mahdad, Belkacem; Srairi, K.
2015-01-01
Highlights: • A generalized optimal security power system planning strategy for blackout risk prevention is proposed. • A Grey Wolf Optimizer dynamically coordinated with a Pattern Search algorithm is proposed. • A useful optimized database is dynamically generated, considering margin loading stability under severe faults. • The robustness and feasibility of the proposed strategy are validated on the standard IEEE 30-bus system. • The proposed planning strategy will be useful for power system protection coordination and control. - Abstract: Developing a flexible and reliable power system planning strategy under critical situations is of great importance to experts and industry in minimizing the probability of blackouts. This paper introduces the first stage of this practical strategy: the application of a Grey Wolf Optimizer coordinated with a pattern search algorithm for managing smart grid power system security under critical situations. The main objective of the proposed planning strategy is to protect the power system against blackout due to the occurrence of faults in generating units or important transmission lines. In the first stage, the system is pushed to its stability margin limit and the critical loads to shed are selected using a voltage stability index. In the second stage, the generator control variables and the reactive power of shunt and dynamic compensators are adjusted, in coordination with minimization of the active and reactive power at critical loads, to maintain the system in a secure state and ensure service continuity. The feasibility and efficiency of the proposed strategy are demonstrated on the IEEE 30-bus test system. Results are promising and prove the practical efficiency of the proposed strategy in ensuring system security under critical situations.
Focardi, S; Rizzotto, M
1999-09-01
Predator-prey relationships involving rabbits and hares are widely studied at the long-term population level, while the short-term ethological interactions between one predator and one prey are less well documented. We use a physiologically-based model of hare behavior, developed in the framework of artificial intelligence studies, to analyse its sophisticated anti-predatory behavior. Hares stand up in view of the fox in order to inform it that its potential prey is alert. The behavior of the hare is characterized by specific standing and flushing distances. We show that both hare survival probability and body condition depend on habitat cover, as well as on the ability of the predator to approach a prey undetected. We study two anti-predatory strategies, one based on maximization of survival probability and the other on maximization of the hare's body condition. Although the two strategies are not independent, they are characterized by quite different behavioral patterns. Field estimates of flushing and standing distances are consistent with survival maximization. There exists an anti-predatory strategy, characterized by a flushing distance of 20 m and a standing distance of 30 m, which is optimal in a large set of environmental conditions, with a sharp fitness advantage over suboptimal strategies. These results improve our understanding of the anti-predatory behavior of the hare and lend credibility to the optimality approach in behavioral analysis, showing that even for complex organisms, characterized by a large network of internal constraints and feedbacks, it is possible to identify simple optimal strategies with a large potential for selection.
Energy Technology Data Exchange (ETDEWEB)
Ji, Aimin; Yin, Xu; Yuan, Minghai [Hohai University, Changzhou (China)
2015-09-15
There are two problems in collaborative optimization (CO): (1) local optima arising from the selection of an inappropriate initial point; and (2) low efficiency and accuracy rooted in inappropriate relaxation factors. To solve these problems, we first use Latin hypercube design (LHD) to determine an initial point of optimization, and then use non-linear programming by quadratic Lagrangian (NLPQL) to search for the global solution. The effectiveness of the initial point selection strategy is verified on three benchmark functions of various dimensions and complexities. We then propose the adaptive relaxation collaborative optimization (ARCO) algorithm to resolve the inconsistency between the system level and the discipline level; in this method, the relaxation factors are determined according to the three separate stages of CO. The performance of the ARCO algorithm is compared with the standard collaborative algorithm and the constant relaxation collaborative algorithm on a typical numerical example, which indicates that the ARCO algorithm is more efficient and accurate. Finally, we propose a hybrid collaborative optimization (HCO) approach, which integrates the initial point selection strategy with the ARCO algorithm. The results show that HCO can achieve the global optimal solution without an initial value, and that it also has advantages in convergence, accuracy and robustness. The proposed HCO approach can therefore solve CO problems, with applications to the spindle and the speed reducer.
International Nuclear Information System (INIS)
Ji, Aimin; Yin, Xu; Yuan, Minghai
2015-01-01
There are two problems in collaborative optimization (CO): (1) local optima arising from the selection of an inappropriate initial point; and (2) low efficiency and accuracy rooted in inappropriate relaxation factors. To solve these problems, we first use Latin hypercube design (LHD) to determine an initial point of optimization, and then use non-linear programming by quadratic Lagrangian (NLPQL) to search for the global solution. The effectiveness of the initial point selection strategy is verified on three benchmark functions of various dimensions and complexities. We then propose the adaptive relaxation collaborative optimization (ARCO) algorithm to resolve the inconsistency between the system level and the discipline level; in this method, the relaxation factors are determined according to the three separate stages of CO. The performance of the ARCO algorithm is compared with the standard collaborative algorithm and the constant relaxation collaborative algorithm on a typical numerical example, which indicates that the ARCO algorithm is more efficient and accurate. Finally, we propose a hybrid collaborative optimization (HCO) approach, which integrates the initial point selection strategy with the ARCO algorithm. The results show that HCO can achieve the global optimal solution without an initial value, and that it also has advantages in convergence, accuracy and robustness. The proposed HCO approach can therefore solve CO problems, with applications to the spindle and the speed reducer.
Implementation of the Evolution Strategies (ES) Algorithm to Optimize Lovebird Feed Composition
Directory of Open Access Journals (Sweden)
Agung Mustika Rizki
2017-05-01
Full Text Available Lovebirds are currently popular, especially among bird lovers, and some people have begun trying to cultivate these birds. In the cultivation process, the feed composition must be considered in order to produce a quality bird. Determining the feed is not easy, because both the cost and the lovebird's vitamin requirements must be taken into account. This problem can be solved with the Evolution Strategies (ES) algorithm. Based on the test results, an optimal fitness value of 0.3125 was obtained using a population size of 100, and an optimal fitness value of 0.3267 at generation 1400.
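A minimal (mu + lambda) evolution strategy of the kind named above might look as follows. The toy fitness function stands in for the real feed-composition score (balancing cost against vitamin requirements), which the abstract does not specify, so every parameter here is an illustrative assumption:

```python
import random

def evolution_strategy(fitness, dim, mu=5, lam=20, sigma=0.3,
                       generations=200, rng=random.Random(1)):
    """Minimal (mu + lambda) ES with fixed-step Gaussian mutation,
    maximising `fitness`; parents survive alongside their offspring."""
    parents = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            p = rng.choice(parents)
            offspring.append([x + rng.gauss(0.0, sigma) for x in p])
        pool = parents + offspring          # (mu + lambda) selection pool
        pool.sort(key=fitness, reverse=True)
        parents = pool[:mu]
    return parents[0]

# Toy stand-in for a feed-composition score: penalise deviation from a
# target proportion of 0.5 per ingredient.
best = evolution_strategy(lambda x: -sum((xi - 0.5) ** 2 for xi in x), dim=3)
```

A production version would add self-adaptive step sizes and constraint handling for the cost and vitamin bounds.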
Optimal Corridor Selection for a Road Space Management Strategy: Methodology and Tool
Directory of Open Access Journals (Sweden)
Sushant Sharma
2017-01-01
Full Text Available Nationwide, there is a growing realization that there are valuable benefits to using existing roadway facilities to their full potential rather than expanding capacity in the traditional way. Currently, state DOTs are looking for cost-effective transportation solutions to mitigate growing congestion and increasing funding gaps. Innovative road space management strategies, such as narrowing multiple lanes (three or more) and the shoulder to add a lane, enhance utilization while eliminating the costs associated with constructing new lanes. Although this strategy (among many) generally leads to better mobility, identifying optimal corridors is a challenge and may affect the benefits. Further, the added capacity may provide localized benefits at the expense of system-level performance measures (travel time and crashes) because of the relocation of traffic operational bottlenecks. This paper develops a novel transportation programming and investment decision method to identify optimal corridors for adding capacity in the network by leveraging lane widths. The methodology explicitly takes into consideration system-level benefits and safety. The program compares the two conflicting objectives of system travel time and safety benefits to find an optimal solution.
Optimal Scanning Bandwidth Strategy Incorporating Uncertainty about Adversary’s Characteristics
Directory of Open Access Journals (Sweden)
Andrey Garnaev
2014-12-01
Full Text Available In this paper, we investigate the problem of designing a spectrum scanning strategy to detect an intelligent Invader who wants to utilize spectrum undetected for his/her unapproved purposes. To deal with this problem, we model the situation as two games between a Scanner and an Invader, and solve them sequentially. The first game is formulated to design the optimal (in the maxmin sense) scanning algorithm, while the second allows one to find the optimal values of the algorithm's parameters depending on the parameters of the network. These games provide solutions for two dilemmas that the rivals face. The Invader's dilemma is the following: attempting to use more bandwidth yields a larger payoff if he is not detected, but at the same time increases the probability of being detected and thus fined. Similarly, the Scanner faces a dilemma: the wider the bandwidth scanned, the higher the probability of detecting the Invader, but at the expense of an increased cost of building the scanning system. The equilibrium strategies are found explicitly and reveal interesting properties. In particular, we have found a discontinuous dependence of the equilibrium strategies on the network parameters, the fine, and the type of the Invader's reward. This discontinuity in the fine means that the network provider has to take a human/social factor into account, since some threshold values of the fine can matter greatly to the Invader, while in other situations simply increasing the fine has minimal deterrence impact. We also show how incomplete information about the Invader's technical characteristics and reward (e.g., motivated by different types of application, say video streaming or file downloading) can be incorporated into the scanning strategy to increase its efficiency.
The Optimal Strategy to Research Pension Funds in China Based on the Loss Function
Directory of Open Access Journals (Sweden)
Jian-wei Gao
2007-10-01
Full Text Available Based on the theory of actuarial present value, a pension fund investment goal can be formulated as an objective function. The mean-variance model is extended by defining the objective loss function. Furthermore, using the theory of stochastic optimal control, an optimal investment model is established under the minimum expectation of the loss function. In the light of the Hamilton-Jacobi-Bellman (HJB) equation, the analytic solution of the optimal investment strategy problem is derived.
Li, Zejing
2012-01-01
This dissertation is mainly devoted to two problems: continuous-time portfolio optimization in different Wishart models, and the effects of discrete rebalancing on the portfolio wealth distribution and the optimal portfolio strategy.
Directory of Open Access Journals (Sweden)
Elisa Robotti
2017-01-01
Full Text Available Head space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometry detection (GC-MS) is the most widespread technique for studying the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposure of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be taken into consideration at the same time while preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations about the choice of the best internal standards were also drawn. The whole optimized procedure was then applied to the analysis of a multiflower honey sample, and more than 100 compounds were identified.
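The three factors and ranges reported above span a small design grid; the mid-levels and the choice of a 3^3 full factorial below are our illustration, since the abstract does not state the exact design that was run:

```python
from itertools import product

# Illustrative three-level screening grid over the ranges in the abstract.
levels = {
    "temperature_C": (50, 60, 70),
    "time_min": (20, 40, 60),
    "salt_pct": (0.0, 13.75, 27.50),
}

# Enumerate every factor-level combination as one experimental run.
design = [dict(zip(levels, combo)) for combo in product(*levels.values())]
```

In practice a fractional or response-surface design would trim these 27 runs; the grid only shows how the factor space is laid out.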
VI International Workshop on Nature Inspired Cooperative Strategies for Optimization
Otero, Fernando; Masegosa, Antonio
2014-01-01
Biological and other natural processes have always been a source of inspiration for computer science and information technology. Many emerging problem-solving techniques integrate advanced evolution and cooperation strategies, encompassing a range of spatio-temporal scales for visionary conceptualization of evolutionary computation. This book is a collection of research works presented at the VI International Workshop on Nature Inspired Cooperative Strategies for Optimization (NICSO), held in Canterbury, UK. Previous editions of NICSO were held in Granada, Spain (2006 & 2010), Acireale, Italy (2007), Tenerife, Spain (2008), and Cluj-Napoca, Romania (2011). NICSO 2013 and this book provide a forum where state-of-the-art research, the latest ideas and emerging areas of nature inspired cooperative strategies for problem solving are vigorously discussed and exchanged among the scientific community. The breadth and variety of articles in this book report on nature inspired methods and applications such as Swarm In...
Bio-Mimic Optimization Strategies in Wireless Sensor Networks: A Survey
Adnan, Md. Akhtaruzzaman; Razzaque, Mohammd Abdur; Ahmed, Ishtiaque; Isnin, Ismail Fauzi
2014-01-01
For the past 20 years, many authors have focused their investigations on wireless sensor networks. Various issues related to wireless sensor networks, such as energy minimization (optimization), compression schemes, self-organizing network algorithms, routing protocols, quality of service management, security, and energy harvesting, have been extensively explored. The three most important issues among these are energy efficiency, quality of service and security management. To get the best possible results in one or more of these issues in wireless sensor networks, optimization is necessary. Furthermore, in a number of applications (e.g., body area sensor networks, vehicular ad hoc networks) these issues might conflict and require a trade-off among them. Due to the high energy consumption and data processing requirements, the use of classical algorithms has historically been disregarded. In this context, contemporary researchers have started using bio-mimetic strategy-based optimization techniques in the field of wireless sensor networks. These techniques are diverse and involve many different optimization algorithms. As far as we know, most existing works tend to focus only on optimizing one specific issue of the three mentioned above. It is high time that these individual efforts were put into perspective and a more holistic view taken. In this paper, we take a step in that direction by presenting a survey of the literature on wireless sensor network optimization, concentrating especially on the three most widely used bio-mimetic algorithms, namely particle swarm optimization, ant colony optimization and the genetic algorithm. In addition, to stimulate new research and development interest in this field, open research issues, challenges and future research directions are highlighted. PMID:24368702
Bio-mimic optimization strategies in wireless sensor networks: a survey.
Adnan, Md Akhtaruzzaman; Abdur Razzaque, Mohammd; Ahmed, Ishtiaque; Isnin, Ismail Fauzi
2013-12-24
For the past 20 years, many authors have focused their investigations on wireless sensor networks. Various issues related to wireless sensor networks, such as energy minimization (optimization), compression schemes, self-organizing network algorithms, routing protocols, quality of service management, security, and energy harvesting, have been extensively explored. The three most important issues among these are energy efficiency, quality of service and security management. To get the best possible results in one or more of these areas, optimization is necessary. Furthermore, in a number of applications (e.g., body area sensor networks, vehicular ad hoc networks) these issues might conflict and require a trade-off amongst them. Due to the high energy consumption and data processing requirements, the use of classical algorithms has historically been disregarded. In this context, contemporary researchers started using bio-mimetic strategy-based optimization techniques in the field of wireless sensor networks. These techniques are diverse and involve many different optimization algorithms. As far as we know, most existing works tend to focus only on optimization of one specific issue of the three mentioned above. It is high time that these individual efforts are put into perspective and a more holistic view is taken. In this paper we take a step in that direction by presenting a survey of the literature in the area of wireless sensor network optimization, concentrating especially on the three most widely used bio-mimetic algorithms, namely, particle swarm optimization, ant colony optimization and the genetic algorithm. In addition, to stimulate new research and development interests in this field, open research issues, challenges and future research directions are highlighted.
Directory of Open Access Journals (Sweden)
Li MingChu
2017-01-01
Full Text Available Coordinated terrorist attacks are an increasing threat to Western countries. By monitoring potential terrorists, security agencies are able to detect and destroy terrorist plots at their planning stage. Therefore, an optimal monitoring strategy for the domestic security agency becomes necessary. However, previous studies of monitoring strategy generation fail to consider information leakage due to hackers and insider threats. Such leakage events may lead to failure to watch potential terrorists and destroy the plot, posing a huge risk to public security. This paper makes two major contributions. Firstly, we develop a new Stackelberg game model for the security agency to generate an optimal monitoring strategy that takes information leakage into account. Secondly, we provide a double-oracle framework, DO-TPDIL, for effective computation. Experimental results show that our approach can obtain robust strategies against information leakage with high feasibility and efficiency.
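The Stackelberg structure (the defender commits to a monitoring strategy, the attacker then best-responds) can be shown in miniature. The toy example below uses hypothetical target values and a grid-searched commitment; it is not the paper's DO-TPDIL framework (a real solver would combine a double oracle with linear programming), but it captures the leader/follower logic:

```python
from itertools import product

def stackelberg_monitor(values, budget, step=0.05):
    """Toy Stackelberg sketch: the defender commits to monitoring
    probabilities summing to `budget` over targets; the attacker then
    attacks the target with highest expected payoff value*(1-coverage).
    The defender grid-searches its commitment to minimize that payoff.
    Enumeration is exponential in the number of targets - toy sizes only."""
    n = len(values)
    levels = [round(i * step, 10) for i in range(int(1 / step) + 1)]
    best_cov, best_attacker = None, float("inf")
    for cov in product(levels, repeat=n):
        if abs(sum(cov) - budget) > 1e-9:
            continue  # keep only commitments that spend the whole budget
        attacker_payoff = max(v * (1 - c) for v, c in zip(values, cov))
        if attacker_payoff < best_attacker:
            best_cov, best_attacker = cov, attacker_payoff
    return best_cov, best_attacker

# hypothetical target values: one high-value, one low-value target
cov, payoff = stackelberg_monitor([10.0, 5.0], budget=1.0)
```

As expected, the optimal commitment spends more monitoring effort on the high-value target so that the attacker's best response is nearly indifferent between the two.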
Directory of Open Access Journals (Sweden)
Daria Sanna
2011-01-01
Full Text Available We report a sampling strategy based on Mendelian Breeding Units (MBUs), representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to the stringent sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter the original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits.
Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato.
Tran, Dinh T; Hertog, Maarten L A T M; Tran, Thi L H; Quyen, Nguyen T; Van de Poel, Bram; Mata, Clara I; Nicolaï, Bart M
2017-01-01
In this study, the aim is to develop a population-model-based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of physiological maturation, and this model was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model-based approach to the optimization of harvest date and harvest frequency with regard to the economic value of the crop is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages, meeting the stringent retail demands for homogeneous, high-quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model-based harvest optimization approach can be easily transferred to other fruit and vegetable crops, improving the homogeneity of postharvest product streams.
Optimizing Implementation of Hepatitis C Birth-Cohort Screening and Treatment Strategies
Directory of Open Access Journals (Sweden)
Yuankun Li MS
2017-01-01
Full Text Available Background: Chronic hepatitis C (HCV) is a significant public health problem affecting more than three million Americans. The US health care systems are ramping up costly HCV screening and treatment efforts with limited budgets. We determine the optimal implementation of HCV birth-cohort screening and treatment strategies under budget constraints from the health care payer's perspective. Methods: Markov model and scenario-based simulation optimization. The target population is the birth cohort born between 1945 and 1975. The interventions are allocating an annual budget to screen a proportion of the target population and treat a proportion of the identified chronic HCV-positive patients over 10 years. The outcome measure is maximization of lifetime discounted quality-adjusted life-years. Results: Allocate a percentage of the annual budget to screening, then treat patients with the remaining budget, prioritizing the sickest patients. When the budget is $1 billion/year, the best strategy is to allocate the entire budget to treatment. When the budget is $5 billion/year, it is optimal to allocate 60% of the budget to screening in the first 2 years and 0% thereafter for age cohort 40 to 49; and to allocate 20% of the budget to screening starting in year 3 for age cohorts 50 to 59 and 60 to 69. Health benefits are sensitive to the budget in the first 2 years. Results are not sensitive to the distribution of fibrosis stages by awareness of HCV. Conclusion: When the budget is limited, all efforts should be focused on early treatment. With a higher budget, better population health outcomes are achieved by reserving some budget for HCV screening while implementing a priority-based treatment strategy. This work has broad applicability to diverse health care systems and helps determine how much effort should be devoted to screening versus treatment under resource limitations.
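The screening-versus-treatment trade-off behind these results can be sketched with a toy deterministic model. All numbers below (costs, prevalence, QALY gain, pool sizes) are hypothetical placeholders, not the paper's calibrated Markov model; the sketch only reproduces the qualitative conclusion that low budgets favor treatment alone while higher budgets reserve a share for screening:

```python
def qaly_for_split(budget, frac_screen, identified0=1000,
                   cost_screen=50.0, prevalence=0.02,
                   cost_treat=50000.0, qaly_gain=5.0):
    """Toy trade-off: screening dollars find new patients; treatment
    dollars convert identified patients into QALY gains. All parameter
    values are hypothetical, not taken from the study."""
    screen_budget = budget * frac_screen
    newly_found = (screen_budget / cost_screen) * prevalence
    identified = identified0 + newly_found
    treatable = (budget - screen_budget) / cost_treat
    # QALYs come only from patients who are both identified and treated
    return min(identified, treatable) * qaly_gain

def best_split(budget, steps=101):
    """Grid-search the budget fraction devoted to screening."""
    fracs = [i / (steps - 1) for i in range(steps)]
    return max(fracs, key=lambda f: qaly_for_split(budget, f))

split_low = best_split(1e7)   # small budget: treatment is the bottleneck
split_high = best_split(1e9)  # large budget: screening becomes worthwhile
```

With the small budget the treatable pool is far below the already-identified pool, so any screening spend is wasted; with the large budget the optimum balances finding patients against treating them.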
The Effect of Exit Strategy on Optimal Portfolio Selection with Birandom Returns
Directory of Open Access Journals (Sweden)
Guohua Cao
2013-01-01
Full Text Available The aims of this paper are to use a birandom variable to denote the stock return selected by some recurring technical patterns and to study the effect of exit strategy on optimal portfolio selection with birandom returns. Firstly, we propose a new method to estimate the stock return and use a birandom distribution to denote the final stock return, which can reflect the features of technical patterns and investors' heterogeneity simultaneously; secondly, we build a birandom safety-first model and design a hybrid intelligent algorithm to help investors make decisions; finally, we innovatively study the effect of exit strategy on the given birandom safety-first model. The results indicate that (1) the exit strategy affects the portfolio proportions; (2) performance is better when the exit strategy is taken than when it is not, provided the stop-loss point and the stop-profit point are appropriately set; and (3) investors using the exit strategy become more conservative.
International Nuclear Information System (INIS)
Romero, Alberto; Millar, Dean; Carvalho, Monica; Maestre, José M.; Camacho, Eduardo F.
2015-01-01
Mine dewatering can represent up to 5% of the total energy demand of a mine, and is one of the mine systems that aim to guarantee safe operating conditions. As mines go deeper, dewatering pumping heads become bigger, potentially involving several lift stages. Greater depth means not only greater dewatering cost, but also more complex systems that require more sophisticated control, especially if mine operators wish to gain benefits from demand response incentives that are becoming a routine part of electricity tariffs. This work explores a two-stage economic optimization procedure for an underground mine dewatering system comprising two lifting stages, each including a pump station and a water reservoir. First, the system design is optimized considering hourly characteristic dewatering demands for twelve days, one day representing each month of the year to account for seasonal dewatering demand variations. This design optimization minimizes the annualized cost of the system, and therefore includes the investment costs in underground reservoirs. Reservoir size, as well as an hourly pumping operation plan, are calculated for specific operating environments, defined by characteristic hourly electricity prices and water inflows (seepage and water use from production activities), known at best through historical observations from the previous year. There is no guarantee that the system design will remain optimal when it faces the water inflows and market-determined electricity prices of the year ahead, or subsequent years, because these remain unknown at design time. Consequently, the optimized dewatering system design is adopted subsequently as part of a Model Predictive Control (MPC) strategy that adaptively maintains optimality during the operations phase. Centralized, distributed and non-centralized MPC strategies are explored. Results show that the system can be reliably controlled using any of these control strategies proposed. Under the operating
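A model predictive control loop of this kind can be sketched with a tiny receding-horizon enumerator. This is not the paper's centralized/distributed MPC formulation; the prices, inflows, reservoir capacity, and pump rate below are invented, and the sketch only illustrates how the controller re-optimizes each hour and shifts pumping into cheap-tariff hours:

```python
from itertools import product

def mpc_pump_schedule(prices, inflow, capacity, pump_rate, horizon=4):
    """Receding-horizon sketch: each hour, enumerate on/off pump plans over
    a short horizon, keep only plans that never overflow the reservoir,
    pick the cheapest, and apply its first decision. Assumes at least one
    feasible plan always exists for the given scenario."""
    level, plan, cost = 0.0, [], 0.0
    n = len(prices)
    for t in range(n):
        h = min(horizon, n - t)
        best, best_cost = None, float("inf")
        for cand in product([0, 1], repeat=h):
            lvl, c, ok = level, 0.0, True
            for k, on in enumerate(cand):
                lvl = max(lvl + inflow[t + k] - on * pump_rate, 0.0)
                if lvl > capacity:
                    ok = False
                    break
                c += on * prices[t + k] * pump_rate
            if ok and c < best_cost:  # ties keep the earlier plan: pump later
                best, best_cost = cand, c
        on = best[0]
        level = max(level + inflow[t] - on * pump_rate, 0.0)
        cost += on * prices[t] * pump_rate
        plan.append(on)
    return plan, cost

# hypothetical scenario: alternating cheap (1) and expensive (5) hours
plan, cost = mpc_pump_schedule([1, 5, 1, 5], [2, 2, 2, 2],
                               capacity=4, pump_rate=4)
```

In this scenario the controller defers pumping until the second cheap hour, when the reservoir would otherwise overflow, and never pumps at the expensive tariff.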
Directory of Open Access Journals (Sweden)
Yi Tang
2017-05-01
Full Text Available In a competitive electricity market with substantial involvement of renewable electricity, maximizing profits by optimizing bidding strategies is crucial to different power producers, including conventional power plants and renewable ones. This paper proposes a game-theoretic bidding optimization method based on bi-level programming, where power producers are at the upper level and utility companies are at the lower level. The competition among the multiple power producers is formulated as a non-cooperative game in which bidding curves are their strategies, while uniform clearing pricing is considered for utility companies represented by an independent system operator. Consequently, based on the formulated game model, the bidding strategies for power producers are optimized for the day-ahead market and the intraday market considering the properties of renewable energy; and the clearing pricing for the utility companies, with respect to the power quantity from different power producers, is optimized simultaneously. Furthermore, a distributed algorithm is provided to search for the generalized Nash equilibrium solution. Finally, simulations were performed and the results discussed to verify the feasibility and effectiveness of the proposed non-cooperative game-based bi-level optimization approach.
Directory of Open Access Journals (Sweden)
Musa Danjuma SHEHU
2008-06-01
Full Text Available This paper emphasizes the formulation of two-dimensional differential games via optimal control theory, considering control systems whose dynamics are described by a system of ordinary differential equations in the form of linear equations under the influence of two controls U(·) and V(·). Based on this, strategies were constructed. Hence we determine the optimal strategy for one control, say U(·), under a perturbation generated by the second control V(·) within a given manifold M.
Beyond the drugs : non-pharmacological strategies to optimize procedural care in children
Leroy, Piet L.; Costa, Luciane R.; Emmanouil, Dimitris; van Beukering, Alice; Franck, Linda S.
2016-01-01
Purpose of review: Painful and/or stressful medical procedures impose a substantial burden on sick children. There is good evidence that procedural comfort can be optimized by a comprehensive comfort-directed policy containing the triad of non-pharmacological strategies (NPS) in all cases, timely or
Optimal and Robust Switching Control Strategies : Theory, and Applications in Traffic Management
Hajiahmadi, M.
2015-01-01
Macroscopic modeling, predictive and robust control and route guidance for large-scale freeway and urban traffic networks are the main focus of this thesis. In order to increase the efficiency of our control strategies, we propose several mathematical and optimization techniques. Moreover, in the
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
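The core preposterior idea, marginalizing a utility measure (here, posterior variance) over the not-yet-collected data values, can be sketched with bootstrap-filter-style weighting. This scalar toy is only a schematic of the PreDIA logic, not its geostatistical machinery; the two candidate measurement maps and the noise level are invented:

```python
import math
import random

def expected_posterior_var(thetas, h, noise_sd, rng):
    """PreDIA-style preposterior estimate: for each synthetic datum
    (generated from a prior sample acting as the 'truth'), weight the
    prior ensemble by the Gaussian likelihood, record the weighted
    posterior variance, and average over the simulated data values."""
    vars_ = []
    for t_true in thetas:
        y = h(t_true) + rng.gauss(0.0, noise_sd)
        w = [math.exp(-0.5 * ((y - h(t)) / noise_sd) ** 2) for t in thetas]
        s = sum(w)
        w = [wi / s for wi in w]
        mean = sum(wi * t for wi, t in zip(w, thetas))
        vars_.append(sum(wi * (t - mean) ** 2 for wi, t in zip(w, thetas)))
    return sum(vars_) / len(vars_)

rng = random.Random(1)
thetas = [rng.gauss(0.0, 1.0) for _ in range(300)]  # prior ensemble
# candidate A observes the parameter directly; candidate B barely observes it
var_a = expected_posterior_var(thetas, lambda t: t, noise_sd=0.5, rng=rng)
var_b = expected_posterior_var(thetas, lambda t: 0.1 * t, noise_sd=0.5, rng=rng)
```

An optimal design would rank candidate A above candidate B because its expected posterior variance, computed before any data exist, is lower.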
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Directory of Open Access Journals (Sweden)
Leilei Cao
2016-01-01
Full Text Available A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best individual instead of a randomly selected one. The current best individual serves as a guide, attracting offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the three typical global optimization algorithms with which it was compared.
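The guiding mechanism described, crossover with the current global best plus a dynamically decaying mutation probability, can be sketched as follows. The operators and constants here are a plausible reading of the abstract, not the authors' exact implementation, and the local search step is omitted:

```python
import random

def gea_minimize(f, dim, pop_size=30, iters=300, seed=0):
    """Sketch of the guiding idea: every offspring is a crossover between
    a parent and the current global best (the 'guide'), followed by
    mutation whose probability decays over the generations."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(p) for p in pop]
    for it in range(iters):
        g = min(range(pop_size), key=lambda i: fit[i])
        guide = pop[g]
        p_mut = 0.3 * (1 - it / iters)  # dynamic mutation probability (assumed)
        for i in range(pop_size):
            # uniform crossover with the global best individual
            child = [pop[i][d] if rng.random() < 0.5 else guide[d]
                     for d in range(dim)]
            if rng.random() < p_mut:
                d = rng.randrange(dim)
                child[d] += rng.gauss(0.0, 0.5)
            cf = f(child)
            if cf < fit[i]:  # greedy replacement
                pop[i], fit[i] = child, cf
    g = min(range(pop_size), key=lambda i: fit[i])
    return pop[g], fit[g]

best, best_val = gea_minimize(lambda x: sum(v * v for v in x), dim=5)
```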
Comparison of three control strategies for optimization of spray dryer operation
DEFF Research Database (Denmark)
Petersen, Lars Norbert; Poulsen, Niels Kjølstad; Niemann, Hans Henrik
2017-01-01
controllers for operation of a four-stage spray dryer. The three controllers are a proportional-integral (PI) controller that is used in industrial practice for spray dryer operation, a linear model predictive controller with real-time optimization (MPC with RTO, MPC-RTO), and an economically optimizing...... nonlinear model predictive controller (E-NMPC). The MPC with RTO is based on the same linear state space model in the MPC and the RTO layer. The E-NMPC consists of a single optimization layer that uses a nonlinear system of ordinary differential equations for its predictions. The PI control strategy has...... the production rate, while minimizing the energy consumption, keeping the residual moisture content of the powder below a maximum limit, and avoiding that the powder sticks to the chamber walls. We use an industrially recorded disturbance scenario in order to produce realistic simulations and conclusions...
Sampling strategies for indoor radon investigations
International Nuclear Information System (INIS)
Prichard, H.M.
1983-01-01
Recent investigations prompted by concern about the environmental effects of residential energy conservation have produced many accounts of indoor radon concentrations far above background levels. In many instances time-normalized annual exposures exceeded the 4 WLM per year standard currently used for uranium mining. Further investigations of indoor radon exposures are necessary to judge the extent of the problem and to estimate the practicality of health effects studies. A number of trends can be discerned as more indoor surveys are reported. It is becoming increasingly clear that local geological factors play a major, if not dominant, role in determining the distribution of indoor radon concentrations in a given area. Within a given locale, indoor radon concentrations tend to be log-normally distributed, and sample means differ markedly from one region to another. The appreciation of geological factors and the general log-normality of radon distributions will improve the accuracy of population dose estimates and facilitate the design of preliminary health effects studies. The relative merits of grab samples, short and long term integrated samples, and more complicated dose assessment strategies are discussed in the context of several types of epidemiological investigations. A new passive radon sampler with a 24-hour integration time is described and evaluated as a tool for pilot investigations
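The log-normality observation has a direct payoff for survey design: the geometric mean and geometric standard deviation summarize a locale, and the fitted distribution predicts the fraction of homes above an action level. A sketch, where the 4 pCi/L limit and the synthetic survey are illustrative rather than taken from the text:

```python
import math
import random

def lognormal_summary(samples, limit):
    """Radon-style summary under a log-normal assumption: geometric mean,
    geometric standard deviation, and the model-predicted fraction of
    homes above `limit` (normal CDF applied to log-concentrations)."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in logs) / (n - 1))
    gm, gsd = math.exp(mu), math.exp(sd)
    z = (math.log(limit) - mu) / sd
    frac_above = 0.5 * math.erfc(z / math.sqrt(2.0))  # survival function
    return gm, gsd, frac_above

rng = random.Random(42)
# synthetic survey: true geometric mean 1.0, true GSD e (about 2.72)
data = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(2000)]
gm, gsd, frac = lognormal_summary(data, limit=4.0)
```

Because the distribution is heavy-tailed, a meaningful fraction of homes exceeds four times the geometric mean, which is exactly why sample means alone mislead dose estimates.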
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously compromise the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method, derived from the item count technique, that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
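For stratified designs, cost-aware optimal allocation takes the familiar Neyman-with-costs form: the sample size in stratum h is proportional to N_h·S_h/√c_h. A sketch under a total-budget constraint, where the stratum sizes, standard deviations, and per-unit costs are hypothetical:

```python
import math

def optimal_allocation(strata, budget):
    """Neyman-type allocation with per-unit costs: n_h proportional to
    N_h * S_h / sqrt(c_h), scaled so total spending sum(n_h * c_h)
    exhausts the budget. `strata` is a list of (N_h, S_h, c_h) tuples."""
    # budget constraint: sum(k * N*S/sqrt(c) * c) = budget, solve for k
    k = budget / sum(N * S * math.sqrt(c) for N, S, c in strata)
    return [k * N * S / math.sqrt(c) for N, S, c in strata]

# hypothetical strata: equal sizes and costs, one stratum twice as variable
strata = [(1000, 2.0, 1.0), (1000, 1.0, 1.0)]
alloc = optimal_allocation(strata, budget=300.0)
```

With equal sizes and costs, the stratum with twice the standard deviation receives twice the sample, and the whole budget is spent.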
DEFF Research Database (Denmark)
Hu, Weihao; Chen, Zhe; Bak-Jensen, Birgitte
2010-01-01
Since the hourly spot market price is available one day ahead, the price could be transferred to the consumers and they may have some motivation to install an energy storage system in order to save their energy costs. This paper presents an optimal operation strategy for a battery energy storage...... markets in some ways, is chosen as the studied power system in this paper. Two kinds of BESS, based on polysulfide-bromine (PSB) and vanadium redox (VRB) battery technologies, are studied in the paper. Simulation results show that the proposed optimal operation strategy is an effective measure to achieve......
Trip-oriented stochastic optimal energy management strategy for plug-in hybrid electric bus
International Nuclear Information System (INIS)
Du, Yongchang; Zhao, Yue; Wang, Qinpu; Zhang, Yuanbo; Xia, Huaicheng
2016-01-01
A trip-oriented stochastic optimal energy management strategy for a plug-in hybrid electric bus is presented in this paper, which includes an offline stochastic dynamic programming part and an online implementation part performed by an equivalent consumption minimization strategy. In the offline part, historical driving cycles of the fixed route are divided into segments according to the position of bus stops, and then a segment-based stochastic driving condition model based on a Markov chain is built. With the segment-based stochastic model obtained, the control set for the real-time implemented equivalent consumption minimization strategy can be achieved by solving the offline stochastic dynamic programming problem. Results of the stochastic dynamic programming are converted into a 3-dimensional lookup table of parameters for the online implemented equivalent consumption minimization strategy. The proposed strategy is verified by both simulation and hardware-in-the-loop testing of a real-world driving cycle on an urban bus route. Simulation results show that the proposed method outperforms both the well-tuned equivalent consumption minimization strategy and the rule-based strategy in terms of fuel economy, and is even close to the optimal result obtained by dynamic programming. Furthermore, the practical application potential of the proposed control method was proved by the hardware-in-the-loop test. - Highlights: • A stochastic problem was formed based on a stochastic segment-based driving condition model. • Offline stochastic dynamic programming was employed to solve the stochastic problem. • The instant power split decision was made by the online equivalent consumption minimization strategy. • Good performance in fuel economy of the proposed method was verified by simulation results. • Practical application potential of the proposed method was verified by the hardware-in-the-loop test results.
Implementing optimal thinning strategies
Kurt H. Riitters; J. Douglas Brodie
1984-01-01
Optimal thinning regimes for achieving several management objectives were derived from two stand-growth simulators by dynamic programming. Residual mean tree volumes were then plotted against stand density management diagrams. The results supported the use of density management diagrams for comparing, checking, and implementing the results of optimization analyses....
Directory of Open Access Journals (Sweden)
Charles Tatkeu
2008-12-01
Full Text Available We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The potentially used unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The performance of the proposed algorithm is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performances are obtained for constant modulus signals.
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The potentially used unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The performance of the proposed algorithm is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performances are obtained for constant modulus signals.
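The CMA baseline the authors compare against is compact enough to sketch for real-valued (BPSK-like) signals. The channel, step size, and tap count below are illustrative, and the paper's pattern-search/CSR method itself is not reproduced here:

```python
import random

def cma_equalize(received, n_taps=5, mu=0.01, modulus=1.0):
    """Constant modulus algorithm sketch for real signals: adapt FIR
    equalizer taps so the output magnitude is driven toward a constant
    modulus (CMA(2,2) stochastic-gradient update)."""
    taps = [0.0] * n_taps
    taps[n_taps // 2] = 1.0  # center-spike initialization
    out = []
    for k in range(n_taps - 1, len(received)):
        window = received[k - n_taps + 1:k + 1][::-1]  # newest sample first
        y = sum(t * x for t, x in zip(taps, window))
        out.append(y)
        e = y * (y * y - modulus)  # CMA(2,2) error term
        taps = [t - mu * e * x for t, x in zip(taps, window)]
    return taps, out

rng = random.Random(7)
symbols = [rng.choice((-1.0, 1.0)) for _ in range(3000)]
# mild frequency-selective channel: 1 + 0.3 z^-1 (hypothetical)
received = [symbols[k] + 0.3 * symbols[k - 1] for k in range(1, 3000)]
taps, out = cma_equalize(received)
early = sum(abs(y * y - 1.0) for y in out[:500]) / 500
late = sum(abs(y * y - 1.0) for y in out[-500:]) / 500
```

The modulus dispersion of the equalizer output shrinks as the taps adapt, which is the behavior the paper measures via intersymbol interference and error vector magnitude.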
Super-capacitors fuel-cell hybrid electric vehicle optimization and control strategy development
International Nuclear Information System (INIS)
Paladini, Vanessa; Donateo, Teresa; De Risi, Arturo; Laforgia, Domenico
2007-01-01
In the last decades, due to emissions reduction policies, research has focused on alternative powertrains, among which hybrid electric vehicles (HEVs) powered by fuel cells are becoming an attractive solution. One of the main issues with these vehicles is energy management to improve the overall fuel economy. The present investigation aims at identifying the best hybrid vehicle configuration and control strategy to reduce fuel consumption. The study focuses on a car powered by a fuel cell and equipped with two secondary energy storage devices: batteries and super-capacitors. To model the powertrain behavior, a purpose-built simulation program called ECoS has been developed in the Matlab/Simulink environment. The fuel cell model is based on the Amphlett theory. The battery and super-capacitor models account for charge/discharge efficiency. The analyzed powertrain is also equipped with an energy regeneration system to recover braking energy. The numerical optimization of the vehicle configuration and control strategy of the hybrid electric vehicle has been carried out with a multi-objective genetic algorithm. The goal of the optimization is the reduction of hydrogen consumption while sustaining the battery state of charge. By applying the algorithm to different driving cycles, several optimized configurations have been identified and discussed.
Directory of Open Access Journals (Sweden)
Jun Yang
2015-08-01
Full Text Available The carbon emissions trading market and direct power purchases by large consumers are two promising directions of power system development. To trace the carbon emission flow in the power grid, the theory of carbon emission flow is improved by allocating power loss to the load side. Based on the improved carbon emission flow theory, an optimal dispatch model is proposed to optimize the cost of both large consumers and the power grid, which will benefit from the carbon emissions trading market. Moreover, to better simulate reality, the direct purchase of power by large consumers is also considered in this paper. The OPF (optimal power flow) method is applied to solve the problem. To evaluate our proposed optimal dispatch strategy, an IEEE 30-bus system is used to test the performance. The effects of the price of carbon emissions and the price of electricity from normal generators and low-carbon generators with regard to the optimal dispatch are analyzed. The simulation results indicate that the proposed strategy can significantly reduce both the operation cost of the power grid and the power utilization cost of large consumers.
Chapter 2: Sampling strategies in forest hydrology and biogeochemistry
Roger C. Bales; Martha H. Conklin; Branko Kerkez; Steven Glaser; Jan W. Hopmans; Carolyn T. Hunsaker; Matt Meadows; Peter C. Hartsough
2011-01-01
Many aspects of forest hydrology have been based on accurate but not necessarily spatially representative measurements, reflecting the measurement capabilities that were traditionally available. Two developments are bringing about fundamental changes in sampling strategies in forest hydrology and biogeochemistry: (a) technical advances in measurement capability, as is...
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and a comparison between the two models was then carried out. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency.
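The simulated annealing step can be illustrated with a generic spatial coverage criterion (mean distance from every candidate location to its nearest chosen site). The paper optimizes a road-network-constrained configuration for soil organic matter; this sketch substitutes an abstract candidate grid and an invented cooling schedule:

```python
import math
import random

def anneal_sample_design(candidates, k, iters=2000, seed=0):
    """Simulated-annealing spatial sampling design sketch: choose k sites
    minimizing the mean distance from every candidate location to its
    nearest chosen site. Linear cooling schedule (assumed, not the
    paper's)."""
    rng = random.Random(seed)

    def coverage_cost(chosen):
        return sum(min(math.dist(p, q) for q in chosen)
                   for p in candidates) / len(candidates)

    chosen = rng.sample(candidates, k)
    cost = coverage_cost(chosen)
    for it in range(iters):
        temp = 1.0 * (1 - it / iters) + 1e-6
        cand = chosen[:]
        cand[rng.randrange(k)] = rng.choice(candidates)  # move one site
        c = coverage_cost(cand)
        # accept improvements always, worsenings with Boltzmann probability
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            chosen, cost = cand, c
    return chosen, cost

candidates = [(x, y) for x in range(10) for y in range(10)]
chosen, cost = anneal_sample_design(candidates, k=4)
```

On a 10×10 grid the annealed four-site design spreads the sites toward the quadrant centers, which is the same coverage logic that drives the paper's optimized configuration along the road network.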
Optimizing complement-activating antibody-based cancer immunotherapy: a feasible strategy?
Directory of Open Access Journals (Sweden)
Maio Michele
2004-06-01
Full Text Available Abstract Passive immunotherapy with monoclonal antibodies (mAb) targeted to specific tumor-associated antigens is among the most rapidly expanding approaches to the biological therapy of cancer. However, until now only a limited number of therapeutic mAb have demonstrated clinical efficacy in selected neoplasia. Results emerging from basic research point to a deeper characterization of specific biological features of neoplastic cells as crucial to optimizing the clinical potential of therapeutic mAb, and to identifying the cancer patients who represent the best candidates for antibody-based immunotherapy. Focus on the tissue distribution and on the functional role of membrane complement-regulatory proteins such as Protectin (CD59), which under physiologic conditions protects tissues from Complement (C) damage, might help to optimize the efficacy of immunotherapeutic strategies based on C-activating mAb.
Sampling strategy to develop a primary core collection of apple ...
African Journals Online (AJOL)
PRECIOUS
2010-01-11
Jan 11, 2010 ... Physiology and Molecular Biology for Fruit, Tree, Beijing 100193, China. ... analyzed on genetic diversity to ensure their represen- .... strategy, cluster and random sampling. .... on isozyme data―A simulation study, Theor.
Research on the optimization strategy of web search engine based on data mining
Chen, Ronghua
2018-04-01
With the wide application of search engines, websites have become an important way for people to obtain information. However, web information is growing in an increasingly explosive manner, making it very difficult for users to find the information they need, and current search engines cannot fully meet this need; there is therefore an urgent demand for personalized information services on the web, and data mining technology offers a breakthrough for this new challenge. To improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified through a search engine optimization experiment. The results show that the proposed strategy improves the accuracy of information retrieval and reduces the time needed to find information, and thus has important practical value.
Multi-objective optimal strategy for generating and bidding in the power market
International Nuclear Information System (INIS)
Peng Chunhua; Sun Huijuan; Guo Jianfeng; Liu Gang
2012-01-01
Highlights: ► A new benefit/risk/emission comprehensive generation optimization model is established. ► A hybrid multi-objective differential evolution optimization algorithm is designed. ► Fuzzy set theory and entropy weighting method are employed to extract the general best solution. ► The proposed approach of generating and bidding is efficient for maximizing profit and minimizing both risk and emissions. - Abstract: Based on the coordinated interaction between units output and electricity market prices, the benefit/risk/emission comprehensive generation optimization model with objectives of maximal profit and minimal bidding risk and emissions is established. A hybrid multi-objective differential evolution optimization algorithm, which successfully integrates Pareto non-dominated sorting with differential evolution algorithm and improves individual crowding distance mechanism and mutation strategy to avoid premature and unevenly search, is designed to achieve Pareto optimal set of this model. Moreover, fuzzy set theory and entropy weighting method are employed to extract one of the Pareto optimal solutions as the general best solution. Several optimization runs have been carried out on different cases of generation bidding and scheduling. The results confirm the potential and effectiveness of the proposed approach in solving the multi-objective optimization problem of generation bidding and scheduling. In addition, the comparison with the classical optimization algorithms demonstrates the superiorities of the proposed algorithm such as integrality of Pareto front, well-distributed Pareto-optimal solutions, high search speed.
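The final step named in the highlights, extracting one general best solution from the Pareto set with fuzzy set theory and entropy weighting, can be sketched roughly as below. This is an illustrative rendering of the standard entropy-weight/fuzzy-membership recipe, not the paper's exact formulation; all objectives are assumed to be minimized.

```python
import math

def entropy_weights(F):
    """Entropy weights for an objectives matrix F (rows = Pareto solutions,
    columns = objectives, all minimized; needs at least two rows)."""
    m, n = len(F), len(F[0])
    w = []
    for j in range(n):
        col = [F[i][j] for i in range(m)]
        lo, hi = min(col), max(col)
        # Benefit-style normalization: lower objective value -> larger score.
        norm = [(hi - v) / (hi - lo) if hi > lo else 1.0 for v in col]
        s = sum(norm)
        p = [v / s for v in norm]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        w.append(1.0 - e)  # low entropy -> objective discriminates strongly
    total = sum(w)
    return [wi / total for wi in w] if total > 0 else [1.0 / n] * n

def best_compromise(F):
    """Index of the Pareto solution with the largest entropy-weighted fuzzy
    membership (linear satisfaction per objective)."""
    w = entropy_weights(F)
    m, n = len(F), len(F[0])
    scores = []
    for i in range(m):
        mu = 0.0
        for j in range(n):
            col = [F[k][j] for k in range(m)]
            lo, hi = min(col), max(col)
            mu += w[j] * ((hi - F[i][j]) / (hi - lo) if hi > lo else 1.0)
        scores.append(mu)
    return scores.index(max(scores))
```

On a symmetric two-objective front such as [[1, 10], [5, 5], [10, 1]], this picks the balanced middle solution, which is the intended "general best" behavior.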
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. Meanwhile, this trend is accompanied by increases in model complexity and in the number of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms that use iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
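The GLUE framework itself (a behavioral threshold plus likelihood weighting of retained parameter sets) is straightforward to sketch. In the toy version below a plain random sampler stands in for the heuristic samplers (GA, DE, SCE) the study uses, and the Nash–Sutcliffe efficiency serves as an assumed likelihood measure; both choices are illustrative.

```python
import random

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def glue(model, obs, sampler, n_sets, threshold=0.5):
    """Retain behavioral parameter sets (likelihood above threshold) and
    return them with normalized likelihood weights; `sampler` stands in for
    any prior or heuristic sampling scheme."""
    behavioral = []
    for _ in range(n_sets):
        theta = sampler()
        likelihood = nse(model(theta), obs)
        if likelihood > threshold:
            behavioral.append((theta, likelihood))
    total = sum(l for _, l in behavioral)
    return [(theta, l / total) for theta, l in behavioral]
```

The weighted behavioral sets can then be used to form prediction quantiles, which is where the uncertainty bounds in a GLUE analysis come from.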
Optimal fault-tolerant control strategy of a solid oxide fuel cell system
Wu, Xiaojuan; Gao, Danhui
2017-10-01
For solid oxide fuel cell (SOFC) development, load tracking, heat management, air excess ratio constraints, high efficiency, low cost and fault diagnosis are six key issues. However, no literature has studied control techniques that combine optimization and fault diagnosis for the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers and a control loop. The fault diagnosis part identifies the current SOFC fault type, and the switching module selects the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers for the normal and air-compressor-fault states. A PID algorithm is used to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller and an air excess ratio controller. The simulation results show that the proposed optimal fault-tolerant control method can track the power, temperature and air excess ratio at the desired values, simultaneously achieving the maximum efficiency and the minimum unit cost under normal SOFC operation and even under an air compressor fault.
Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo
2017-08-01
The surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted to compare with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG) and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that: (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty. (2) The ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.
Institute of Scientific and Technical Information of China (English)
Deqing Tan; Guangzhong Liu
2004-01-01
This paper discusses a Bertrand model of a static multidimensional game with incomplete information between two firms producing two kinds of products with a certain degree of substitution, and analyzes how the firms' forecasts of total market demand influence their optimal strategies given the market information they hold. The conclusions are that the more market information a firm masters, the greater the influence that differences between forecasted and expected values of market demand for the products have on equilibrium strategies; conversely, the less information, the smaller that influence.
Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic sequence initializes the fruit fly swarm location and replaces the distance expression used by the fruit flies to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the procedure to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, the algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection for real-world classification problems. This method, called chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in solving the medical diagnosis problem and the credit card problem.
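A minimal, hypothetical sketch of the two ingredients named above, chaotic (logistic-map) initialization of the swarm location and the osphresis/vision phases of FOA, on a one-dimensional toy objective. It deliberately omits the paper's mutation strategy and the SVM coupling; all parameter values are illustrative.

```python
import random

def logistic_map(x, n):
    """Generate n chaotic values in (0, 1) with the logistic map (r = 4)."""
    seq = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_foa(objective, bounds, n_flies=20, iters=100, seed=0):
    """Minimal fruit-fly optimization sketch: chaotic initialization of the
    swarm location, random osphresis search around it, vision-phase update."""
    rng = random.Random(seed)
    lo, hi = bounds
    chaos = logistic_map(0.7, n_flies)
    swarm = lo + (hi - lo) * chaos[0]          # chaotic swarm location
    best_x, best_f = swarm, objective(swarm)
    for _ in range(iters):
        for _ in range(n_flies):
            # Osphresis phase: random step around the swarm location.
            x = swarm + rng.uniform(-1.0, 1.0) * (hi - lo) * 0.1
            x = min(max(x, lo), hi)
            f = objective(x)
            if f < best_f:
                best_x, best_f = x, f
        swarm = best_x                         # vision phase: move swarm to best
    return best_x, best_f
```

The chaotic sequence gives a deterministic but non-repeating spread of initial values, which is the property the improved FOA exploits.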
International Nuclear Information System (INIS)
Park, Jungsoo; Song, Soonho; Lee, Kyo Seung
2015-01-01
Highlights: • Model-based control of dual-loop EGR system is performed. • EGR split index is developed to provide non-dimensional index for optimization. • EGR rates are calibrated using EGR split index at specific operating conditions. • Multi-objective Pareto optimization is performed to minimize NOx and BSFC. • Optimum split strategies are suggested with LP-rich dual-loop EGR at high load. - Abstract: A proposed dual-loop exhaust-gas recirculation (EGR) system that combines the features of high-pressure (HP) and low-pressure (LP) systems is considered a key technology for improving the combustion behavior of diesel engines. The fraction of HP and LP flows, known as the EGR split, for a given dual-loop EGR rate plays an important role in determining the engine performance and emission characteristics. Therefore, identifying the proper EGR split is important for the engine optimization and calibration processes, which affect the EGR response and deNOx efficiencies. The objective of this research was to develop a dual-loop EGR split strategy using numerical analysis and one-dimensional (1D) cycle simulation. A control system was modeled by coupling the 1D cycle simulation and the control logic. An EGR split index was developed to investigate the HP/LP split effects on the engine performance and emissions. Using the model-based control system, a multi-objective Pareto (MOP) analysis was used to minimize the NOx formation and fuel consumption through optimized engine operating parameters. The MOP analysis was performed using a response surface model extracted from Latin hypercube sampling as a fractional factorial design of experiment. By using an LP-rich dual-loop EGR, a high EGR rate was attained at low, medium, and high engine speeds, increasing the applicable load ranges compared to base conditions.
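Latin hypercube sampling, used above to build the response surface model, partitions each variable's range into equal-probability strata and draws exactly one point per stratum per variable. A minimal sketch, not tied to the paper's engine parameters:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: each variable's range is split into n_samples
    equal strata, and each stratum is used exactly once per variable."""
    rng = random.Random(seed)
    dim = len(bounds)
    samples = [[0.0] * dim for _ in range(n_samples)]
    for j, (lo, hi) in enumerate(bounds):
        # One random point per stratum, then shuffle strata across samples.
        points = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(points)
        for i in range(n_samples):
            samples[i][j] = lo + (hi - lo) * points[i]
    return samples
```

Compared with plain random sampling, this guarantees marginal coverage of every input range, which is why it is popular for seeding response surface models and designs of experiments.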
Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina
2017-06-13
Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., the smoothness parameter(s) s and the energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE), or parallel tempering, is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter-choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a given replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate of the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
Simultaneous beam sampling and aperture shape optimization for SPORT.
Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei
2015-02-01
Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles along a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally with a pattern search algorithm, exploring the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.
Simultaneous beam sampling and aperture shape optimization for SPORT
Energy Technology Data Exchange (ETDEWEB)
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)
2015-02-15
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles along a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally with a pattern search algorithm, exploring the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.
Simultaneous beam sampling and aperture shape optimization for SPORT
International Nuclear Information System (INIS)
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu
2015-01-01
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles along a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally with a pattern search algorithm, exploring the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.
An energy-efficient adaptive sampling scheme for wireless sensor networks
Masoum, Alireza; Meratnia, Nirvana; Havinga, Paul J.M.
2013-01-01
Wireless sensor networks are new monitoring platforms. To cope with their resource constraints, in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy to reduce number of sampling nodes and/or sampling frequencies while
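One common way to exploit the temporal correlation mentioned above is to lengthen the sampling interval while readings stay close to the last transmitted value and to sample densely again after a significant change. The sketch below is a generic illustration of that idea, not the scheme proposed in the paper; the threshold and back-off limits are assumptions.

```python
def adaptive_sampling(readings, eps=0.5, min_step=1, max_step=8):
    """Sketch of temporal-correlation-based adaptive sampling: double the
    sampling interval while consecutive readings stay within eps of the
    last transmitted value, and reset it when a change is detected."""
    transmitted = []              # (index, value) pairs actually sent
    step = min_step
    i = 0
    last = None
    while i < len(readings):
        value = readings[i]
        if last is None or abs(value - last) > eps:
            transmitted.append((i, value))
            last = value
            step = min_step       # change detected: sample densely again
        else:
            step = min(step * 2, max_step)  # stable signal: back off
        i += step
    return transmitted
```

On a stable signal the node transmits almost nothing, which is the energy saving the abstract refers to; the cost is a bounded detection delay (here at most max_step samples) after a change.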
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
Energy Technology Data Exchange (ETDEWEB)
Gao Hua [Department of Astronomy, School of Physics, Peking University, Beijing 100871 (China); Ho, Luis C. [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China)
2017-08-20
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
International Nuclear Information System (INIS)
Gao Hua; Ho, Luis C.
2017-01-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
An Overview of Optimizing Strategies for Flotation Banks
Directory of Open Access Journals (Sweden)
Miguel Maldonado
2012-10-01
Full Text Available A flotation bank is a serial arrangement of cells, and how to operate a bank optimally remains a challenge. This article reviews three reported strategies: air profiling, mass-pull (froth velocity) profiling and Peak Air Recovery (PAR) profiling. These are all ways of manipulating the recovery profile down a bank, which may be the property being exploited. Mathematical analysis has shown that a flat cell-by-cell recovery profile maximizes the separation of two floatable minerals for a given target bank recovery when the relative floatability is constant down the bank. Available bank survey data are analyzed with respect to recovery profiling. Possible variations on the recovery profile to minimize entrainment are discussed.
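The flat-profile result quoted above has a simple arithmetic core: for cells in series, the unrecovered fraction compounds, so bank recovery is R = 1 - ∏(1 - r_i), and a flat profile hitting a target bank recovery R over N cells uses r = 1 - (1 - R)^(1/N) in every cell. A small sketch of that arithmetic (illustrative, not from the article):

```python
def bank_recovery(cell_recoveries):
    """Cumulative recovery of a bank of flotation cells in series:
    R = 1 - prod(1 - r_i)."""
    remaining = 1.0
    for r in cell_recoveries:
        remaining *= (1.0 - r)   # fraction still unrecovered after this cell
    return 1.0 - remaining

def flat_profile(target_recovery, n_cells):
    """Per-cell recovery of the flat (equal cell-by-cell) profile that
    achieves a target bank recovery over n_cells cells."""
    return 1.0 - (1.0 - target_recovery) ** (1.0 / n_cells)
```

For example, a 90% bank recovery over five cells corresponds to a flat per-cell recovery of about 37%; the article's analysis concerns when this flat profile is also the best separating profile.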
Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A
2010-01-01
In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, has simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions, in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools: CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to develop effective sampling plans that support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to one), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations modeling contaminant dispersion within a building. These predictions could then be used to identify priority zones within the building, and sampling designs and strategies could be developed based on those zones.
Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning
Directory of Open Access Journals (Sweden)
Julian Ricardo Diaz Posada
2017-01-01
Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot's position in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled in terms of its components: product, process and resource; and by automatically configuring a sampling-based motion-planning problem and the transition-based rapidly-exploring random tree (T-RRT) algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation in robotic machining processes.
Sampling strategy for a large scale indoor radiation survey - a pilot project
International Nuclear Information System (INIS)
Strand, T.; Stranden, E.
1986-01-01
Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
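The optimisation described, allocating a fixed number of measurements across house categories given their dose-rate variances, is classically done with Neyman allocation. The sketch below is a generic illustration under the assumption of simple random sampling within strata, with the finite-population correction omitted; it is not the authors' exact procedure.

```python
def neyman_allocation(sizes, stds, n_total):
    """Neyman allocation: sample each stratum proportionally to N_h * sigma_h,
    which minimizes the variance of the stratified mean for a fixed total n."""
    weights = [N * s for N, s in zip(sizes, stds)]
    total = sum(weights)
    return [max(1, round(n_total * w / total)) for w in weights]

def stratified_mean_variance(sizes, stds, n_h):
    """Variance of the stratified sample mean, Var = sum (N_h/N)^2 sigma_h^2 / n_h
    (finite-population correction omitted for simplicity)."""
    N = sum(sizes)
    return sum((Nh / N) ** 2 * s ** 2 / n
               for Nh, s, n in zip(sizes, stds, n_h))
```

Given a target precision on the regional mean dose rate, one would increase n_total until the stratified variance falls below the corresponding threshold, which is the optimisation the pilot-project variances make possible.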
Optimizing decommissioning strategies
International Nuclear Information System (INIS)
Passant, F.H.
1993-01-01
Many different approaches can be considered for achieving satisfactory decommissioning of nuclear installations. These can embrace several different engineering actions at several stages, with time variations between the stages. Multi-attribute analysis can be used to help in the decision-making process and to establish the optimum strategy. It has been used in the USA and the UK to help in selecting preferred sites for radioactive waste repositories, in the UK to help with the choice of preferred sites for locating PWR stations, and in selecting optimum decommissioning strategies.
Directory of Open Access Journals (Sweden)
Xumei Chen
2017-09-01
Full Text Available The idea of corporate social responsibility has prompted bus operating agencies to rethink how to provide not only efficient but also environmentally friendly services for residents. A study on the potential of using an optimized design of skip-stop services, one of the essential operational strategies in practice, to reduce emissions is conducted in this paper. The underlying scheduling problem is formulated as a nonlinear programming problem with the primary objective of optimizing the total costs for both passengers and operating agencies, as well as with the secondary objective of minimizing bus emissions. A solution method is developed to solve the problem. A real-world case of Route 16 in Beijing is studied, in which the optimal scheduling strategy that maximizes the cost savings and environmental benefits is determined. The costs and emissions of the proposed scheduling strategy are compared with the optimal scheduling with skip-stop services without considering bus emissions. The results show that the proposed scheduling strategy outperforms the other operating strategy with respect to operational costs and bus emissions. A sensitivity study is then conducted to investigate the impact of the fleet size in operations and passenger demand on the effectiveness of the proposed stop-skipping strategy considering bus emissions.
Optimal recharging strategy for battery-switch stations for electric vehicles in France
International Nuclear Information System (INIS)
Armstrong, M.; El Hajj Moussa, C.; Adnot, J.; Galli, A.; Riviere, P.
2013-01-01
Most papers that study the recharging of electric vehicles focus on charging the batteries at home and at the work-place. The alternative is for owners to exchange the battery at a specially equipped battery switch station (BSS). This paper studies strategies for the BSS to buy and sell the electricity through the day-ahead market. We determine what the optimal strategies would have been for a large fleet of EVs in 2010 and 2011, for the V2G and the G2V cases. These give the amount that the BSS should offer to buy or sell each hour of the day. Given the size of the fleet, the quantities of electricity bought and sold will displace the market equilibrium. Using the aggregate offers to buy and the bids to sell on the day-ahead market, we compute what the new prices and volumes transacted would be. While buying electricity for the G2V case incurs a cost, it would have been possible to generate revenue in the V2G case, if the arrivals of the EVs had been evenly spaced during the day. Finally, we compare the total cost of implementing the strategies with the cost of buying the same quantity of electricity from EDF. - Highlights: • Optimal strategies for buying/selling electricity through day-ahead auction market. • Given fleet size power bought and sold would change market price and volume. • New prices computed using aggregate offers to buy/sell power in 2010 and 2011. • Timing of arrival of EVs critical in V2G case. If evenly spaced BSS makes money. • Strategies are very robust even when French and German markets were coupled Nov. 2010
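The displacement of the market equilibrium by a large fleet can be illustrated with a toy uniform-price clearing of a merit-order supply curve. The offers and fleet demand below are invented for illustration and are not the paper's French market data:

```python
def clearing_price(supply_offers, demand_mwh):
    """Walk the merit-order supply curve until cumulative offered volume
    covers demand; the marginal offer sets the uniform clearing price.
    supply_offers: list of (price EUR/MWh, volume MWh), any order."""
    cumulative = 0.0
    for price, volume in sorted(supply_offers):
        cumulative += volume
        if cumulative >= demand_mwh:
            return price
    raise ValueError("demand exceeds total offered volume")

offers = [(20.0, 500.0), (35.0, 400.0), (55.0, 300.0), (90.0, 200.0)]
base_demand = 800.0
fleet_demand = 150.0   # extra G2V purchases by the BSS (hypothetical)
p0 = clearing_price(offers, base_demand)
p1 = clearing_price(offers, base_demand + fleet_demand)
```

The jump from p0 to p1 is the price displacement the paper accounts for: a large BSS cannot treat the day-ahead price as exogenous.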
Directory of Open Access Journals (Sweden)
Mahmood Shafiee
2013-01-01
Full Text Available In offshore wind turbines, the blades are among the most critical and expensive components and suffer from different types of damage due to the harsh maritime environment and high load. Blade damage can be categorized into two types: minor damage, which only causes a loss in wind capture without resulting in any turbine stoppage, and major (catastrophic) damage, which stops the wind turbine and can only be corrected by replacement. In this paper, we propose an optimal number-dependent preventive maintenance (NDPM) strategy, in which a maintenance team is transported with an ordinary or expedited lead time to the offshore platform at the occurrence of the Nth minor damage or the first major damage, whichever comes first. The long-run expected cost of the maintenance strategy is derived, and the necessary conditions for an optimal solution are obtained. Finally, the proposed model is tested on real data collected from an offshore wind farm database. Also, a sensitivity analysis is conducted in order to evaluate the effect of changes in the model parameters on the optimal solution.
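As a rough illustration of the NDPM idea (a Monte Carlo stand-in, not the authors' analytical cost model), the long-run cost rate of each trigger level N can be estimated by simulation, assuming Poisson damage arrivals and hypothetical cost parameters:

```python
import random

def long_run_cost(n_trigger, minor_rate=0.5, major_rate=0.1,
                  loss_rate=2.0, c_visit=60.0, c_major=200.0,
                  horizon=100_000.0, seed=1):
    """Simulate the number-dependent PM policy: each unrepaired minor
    damage loses `loss_rate` per unit time in wind capture; a team visit
    (at the N-th minor damage or the first major damage) costs c_visit
    and repairs everything; a major damage also costs c_major.
    Returns the simulated cost per unit time."""
    rng = random.Random(seed)
    t, cost, minors = 0.0, 0.0, 0
    while t < horizon:
        t_minor = rng.expovariate(minor_rate)   # next minor damage
        t_major = rng.expovariate(major_rate)   # next major damage
        dt = min(t_minor, t_major)
        cost += minors * loss_rate * dt         # lost wind capture
        t += dt
        if t_major < t_minor:                   # major damage: forced visit
            cost += c_major + c_visit
            minors = 0
        else:
            minors += 1
            if minors >= n_trigger:             # N-th minor triggers a visit
                cost += c_visit
                minors = 0
    return cost / t

costs = {n: long_run_cost(n) for n in range(1, 9)}
best_n = min(costs, key=costs.get)
```

Small N wastes transport cost on frequent visits; large N accumulates lost capture, so the cost rate is minimized at an interior N, mirroring the paper's trade-off.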
The Study of an Optimal Robust Design and Adjustable Ordering Strategies in the HSCM.
Liao, Hung-Chang; Chen, Yan-Kwang; Wang, Ya-huei
2015-01-01
The purpose of this study was to establish a hospital supply chain management (HSCM) model in which three kinds of drugs in the same class and with the same indications were used in creating an optimal robust design and adjustable ordering strategies to deal with a drug shortage. The main assumption was that although each doctor has his/her own prescription pattern, when there is a shortage of a particular drug, the doctor may choose a similar drug with the same indications as a replacement. Four steps were used to construct and analyze the HSCM model. The computational techniques used included simulation, a neural network (NN), and a genetic algorithm (GA). The mathematical methods of the simulation and the NN were used to construct a relationship between the factor levels and performance, while the GA was used to obtain the optimal combination of factor levels from the NN. A sensitivity analysis was also used to assess the change in the optimal factor levels. Adjustable ordering strategies were also developed to prevent drug shortages.
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-03-01
With the formation of competitive electricity markets around the world, the optimization of bidding strategies has become one of the main topics in market-design studies. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely the genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market so as to maximize their profit given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is implemented for a peak hour of Tehran's wholesale spot market in 2012. The simulation results show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computational stability, and the Nash equilibria calculated by GA vary less from run to run than those of the other algorithms.
International Nuclear Information System (INIS)
Safari, Jalal
2012-01-01
This paper proposes a variant of the Non-dominated Sorting Genetic Algorithm (NSGA-II) to solve a novel mathematical model for multi-objective redundancy allocation problems (MORAP). Most research on the redundancy allocation problem (RAP) has focused on single-objective optimization, with only limited work addressing multi-objective optimization. In addition, all mathematical multi-objective models of the general RAP assume that the type of redundancy strategy for each subsystem is predetermined and known a priori. In general, active redundancy has traditionally received greater attention; however, in practice both active and cold-standby redundancies may be used within a particular system design. The choice of redundancy strategy then becomes an additional decision variable. Thus, the proposed model and solution method select the best redundancy strategy, type of components, and levels of redundancy for each subsystem so as to maximize system reliability and minimize total system cost under system-level constraints. This problem belongs to the NP-hard class. This paper presents a second-generation Multiple-Objective Evolutionary Algorithm (MOEA), named NSGA-II, to find the best solution for the given problem. The proposed algorithm demonstrates the ability to identify a set of optimal solutions (the Pareto front), which provides the Decision Maker (DM) with a complete picture of the optimal solution space. After finding the Pareto front, a procedure is used to select the best solution from it. Finally, the advantages of the presented multi-objective model and of the proposed algorithm are illustrated by solving test problems taken from the literature, and the robustness of the proposed NSGA-II is discussed.
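The core of any MOEA such as NSGA-II is extracting non-dominated solutions. A minimal sketch for (reliability, cost) objectives, with made-up design points, is:

```python
def pareto_front(points):
    """Return the non-dominated set for (reliability, cost) pairs where
    reliability is maximized and cost is minimized: the first front that
    NSGA-II's non-dominated sorting would extract."""
    def dominates(a, b):
        # a dominates b: at least as good in both objectives, not equal
        return a[0] >= b[0] and a[1] <= b[1] and a != b
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Hypothetical (system reliability, total cost) design alternatives
designs = [(0.90, 120.0), (0.95, 200.0), (0.92, 260.0), (0.99, 340.0)]
front = pareto_front(designs)
```

Here (0.92, 260.0) is dominated (a cheaper design is also more reliable) and drops out; NSGA-II repeats this sorting into successive fronts and adds crowding-distance selection on top.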
Mate choice when males are in patches: optimal strategies and good rules of thumb.
Hutchinson, John M C; Halupka, Konrad
2004-11-07
In standard mate-choice models, females encounter males sequentially and decide whether to inspect the quality of another male or to accept a male already inspected. What changes when males are clumped in patches and there is a significant cost to travel between patches? We use stochastic dynamic programming to derive optimum strategies under various assumptions. With zero costs to returning to a male in the current patch, the optimal strategy accepts males above a quality threshold which is constant whenever one or more males in the patch remain uninspected; this threshold drops when inspecting the last male in the patch, so returns may occur only then and are never to a male in a previously inspected patch. With non-zero within-patch return costs, such a two-threshold rule still performs extremely well, but a more gradual decline in acceptance threshold is optimal. Inability to return at all need not decrease performance by much. The acceptance threshold should also decline if it gets harder to discover the last males in a patch. Optimal strategies become more complex when mean male quality varies systematically between patches or years, and females estimate this in a Bayesian manner through inspecting male qualities. It can then be optimal to switch patch before inspecting all males on a patch, or, exceptionally, to return to an earlier patch. We compare performance of various rules of thumb in these environments and in ones without a patch structure. A two-threshold rule performs excellently, as do various simplifications of it. The best-of-N rule outperforms threshold rules only in non-patchy environments with between-year quality variation. The cutoff rule performs poorly.
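The flavour of these threshold computations can be sketched with stochastic dynamic programming for the simplest case: iid Uniform(0,1) qualities, a fixed inspection cost, and free recall of the best male seen so far (zero return cost). In this special case the optimal acceptance threshold is constant across the horizon, approximately 1 - sqrt(2c); the patch structure and travel costs of the paper are what make the thresholds drop near the end of a patch.

```python
def acceptance_thresholds(n_males, cost, grid=2001):
    """Backward induction for sequential inspection with free recall:
    qualities iid Uniform(0,1) (discretized on a grid), each inspection
    costs `cost`. Returns the acceptance threshold before each remaining
    inspection: accept the best-so-far iff it meets the threshold."""
    qs = [i / (grid - 1) for i in range(grid)]
    V = qs[:]                               # no males left: keep best seen
    thresholds = []
    for _ in range(n_males):
        suffix = [0.0] * (grid + 1)         # suffix[i] = sum_{j >= i} V[j]
        for i in range(grid - 1, -1, -1):
            suffix[i] = suffix[i + 1] + V[i]
        newV, t = [], None
        for i, b in enumerate(qs):
            # continue: pay cost, inspect one more, keep the better of b, X
            cont = -cost + ((i + 1) * V[i] + suffix[i + 1]) / grid
            if t is None and b >= cont:
                t = b                       # smallest b where stopping wins
            newV.append(max(b, cont))
        V = newV
        thresholds.append(t if t is not None else 1.0)
    return thresholds[::-1]                 # index 0 = before 1st inspection

thresholds = acceptance_thresholds(5, cost=0.02)
```

With cost = 0.02 every threshold sits near 1 - sqrt(0.04) = 0.8, illustrating the reservation-value property that within-patch return costs and patch travel costs then break.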
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
International Nuclear Information System (INIS)
Neves, Diana; Silva, Carlos A.
2015-01-01
The present study uses the DHW (domestic hot water) electric backup from solar thermal systems to optimize the total electricity dispatch of an isolated mini-grid. The proposed approach estimates the hourly DHW load, and proposes and simulates different DR (demand response) strategies, from the supply side, to minimize the dispatch costs of an energy system. The case study consists of optimizing the electricity load, on a representative day with low solar radiation, in Corvo Island, Azores. The DHW backup is induced by three different demand patterns. The study compares different DR strategies: backup at demand (no strategy), pre-scheduled backup using two different imposed schedules, a strategy based on linear programming, and finally two strategies using genetic algorithms, with different formulations for DHW backup: one that assigns the number of systems and another that assigns energy demand. It is concluded that pre-determined DR strategies may increase the generation costs, but DR strategies based on optimization algorithms are able to decrease generation costs. In particular, linear programming is the strategy that presents the lowest increase in dispatch costs, but the strategy based on genetic algorithms is the one that best minimizes both daily operation costs and total energy demand of the system. - Highlights: • Integrated hourly model of DHW electric impact and electricity dispatch of an isolated grid. • Proposal and comparison of different DR (demand response) strategies for DHW backup. • LP strategy presents 12% increase on total electric load, plus 5% on dispatch costs. • GA strategy presents 7% increase on total electric load, plus 8% on dispatch costs.
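As a toy stand-in for the optimization-based DR strategies (not the paper's LP or GA formulations), deferrable DHW backup energy can be shifted greedily into the hours with the lowest marginal generation cost:

```python
def schedule_backup(marginal_cost, backup_kwh, cap_kwh):
    """Place the daily deferrable DHW backup energy in the hours with
    the lowest marginal generation cost, up to a per-hour capacity cap.
    Returns a per-hour schedule (kWh); a greedy sketch of supply-side DR."""
    schedule = [0.0] * len(marginal_cost)
    remaining = backup_kwh
    for hour in sorted(range(len(marginal_cost)),
                       key=marginal_cost.__getitem__):
        take = min(cap_kwh, remaining)
        schedule[hour] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("capacity cannot absorb the backup load")
    return schedule

# Hypothetical marginal costs (EUR/kWh) over a 24 h dispatch day
costs = [0.10] * 6 + [0.25] * 12 + [0.15] * 6
plan = schedule_backup(costs, backup_kwh=90.0, cap_kwh=12.0)
```

With a uniform cost profile this greedy rule coincides with the LP optimum; the paper's formulations additionally tie the schedule to tank thermodynamics and the actual unit-commitment costs.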
Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.
Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith
2010-09-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.
Optimally Stopped Optimization
Vinci, Walter; Lidar, Daniel
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
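The expected-total-cost criterion can be sketched as follows: given an empirical sample of a solver's objective values, choose the number of calls n minimizing n times the cost per call plus the expected best-of-n value. All numbers below are hypothetical, and this is a simplified fixed-n version rather than the paper's sequential stopping rule:

```python
def expected_min(sorted_vals, n):
    """E[min of n iid draws] under the empirical distribution of the
    (ascending) sample sorted_vals, via order statistics."""
    m = len(sorted_vals)
    e = 0.0
    for k, x in enumerate(sorted_vals):
        # P(the minimum lands on the k-th smallest sample value)
        p = ((m - k) / m) ** n - ((m - k - 1) / m) ** n
        e += x * p
    return e

def optimal_calls(run_values, cost_per_call, n_max=200):
    """Number of solver calls minimizing expected total cost:
    n * cost_per_call + E[best objective over n runs]."""
    vals = sorted(run_values)
    return min(range(1, n_max + 1),
               key=lambda n: n * cost_per_call + expected_min(vals, n))

runs = [5.0, 3.0, 4.0, 3.0, 6.0, 2.0, 4.0, 5.0]   # hypothetical objectives
n_star = optimal_calls(runs, cost_per_call=0.05)
```

A higher cost per call pushes n_star down; as the cost tends to zero, n_star grows until extra calls no longer improve the expected minimum by more than they cost.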
Directory of Open Access Journals (Sweden)
Davood Mahmoodzadeh
2016-05-01
Full Text Available Groundwater in coastal areas is an essential source of freshwater that warrants protection from seawater intrusion as a priority based on an optimal management plan. Proper optimal management strategies can be developed using a variety of decision-making models. The present study aims to investigate the impacts of environmental changes on groundwater resources. For this purpose, a combined simulation-optimization model is employed that incorporates the SUTRA numerical model and the evolutionary method of ant colony optimization. The fresh groundwater lens in Kish Island is used as a case study and different scenarios are considered for the likely environmental changes. Results indicate that while variations in recharge rate form an important factor in the fresh groundwater lens, land-surface inundation due to rises in seawater level, especially in low-lying lands, is the major factor affecting the lens. Furthermore, factoring the impacts of environmental changes into the Kish Island aquifer optimization management plan has led to a reduction of more than 20% in the allowable water extraction, indicating the high sensitivity of groundwater resources management plans in small islands to such variations.
Sampling strategies for the analysis of reactive low-molecular weight compounds in air
Henneken, H.
2006-01-01
Within this thesis, new sampling and analysis strategies for the determination of airborne workplace contaminants have been developed. Special focus has been directed towards the development of air sampling methods that involve diffusive sampling. In an introductory overview, the current
A novel optimal coordinated control strategy for the updated robot system for single port surgery.
Bai, Weibang; Cao, Qixin; Leng, Chuntao; Cao, Yang; Fujie, Masakatsu G; Pan, Tiewen
2017-09-01
Research into robotic systems for single port surgery (SPS) has become widespread around the world in recent years. A new robot arm system for SPS was developed, but its positioning platform and other hardware components were not efficient. Special features of the developed surgical robot system make good teleoperation with safety and efficiency difficult. A robot arm is combined and used as the new positioning platform, and remote center motion is realized by a new method using active motion control. A new mapping strategy based on kinematics computation is developed, together with a novel optimal coordinated control strategy based on real-time approach to a defined anthropopathic criterion configuration, which refers to the customary ease state of human arms and especially the configuration of boxers' habitual preparation posture. The hardware components, control architecture, control system, and mapping strategy of the robotic system have been updated. A novel optimal coordinated control strategy is proposed and tested. The new robot system can be more dexterous, intelligent, convenient and safer for preoperative positioning and intraoperative adjustment. The mapping strategy can achieve good following and representation for the slave manipulator arms. The proposed control strategy enables them to complete tasks with higher maneuverability, a lower possibility of self-interference, and singularity-free motion while teleoperating. Copyright © 2017 John Wiley & Sons, Ltd.
Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy, which simultaneously performs parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the expression of distance by which a fruit fly finds the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called the chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem. PMID:28369096
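A bare-bones fruit fly optimization loop, omitting the paper's chaotic initialization and mutation strategy and its SVM coupling, looks like this; the benchmark function and parameters are illustrative:

```python
import random

def foa_minimize(objective, dim, iters=300, pop=25, step=1.0, seed=7):
    """Minimal fruit fly optimization sketch: flies scatter randomly
    around the swarm location (smell search); the best fly relocates
    the swarm (vision search)."""
    rng = random.Random(seed)
    swarm = [rng.uniform(-5, 5) for _ in range(dim)]
    best_x, best_f = swarm[:], objective(swarm)
    for _ in range(iters):
        for _ in range(pop):
            x = [s + rng.uniform(-step, step) for s in swarm]  # smell search
            f = objective(x)
            if f < best_f:
                best_x, best_f = x, f
        swarm = best_x[:]                                      # vision search
    return best_x, best_f

sphere = lambda v: sum(xi * xi for xi in v)   # classic benchmark function
x_star, f_star = foa_minimize(sphere, dim=3)
```

In the paper's scheme, each candidate location would encode the SVM penalty and kernel parameters plus a feature-selection mask, and the objective would be cross-validated classification accuracy.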
Mueller, Oliver M; Schlamann, Marc; Mueller, Daniela; Sandalcioglu, I Erol; Forsting, Michael; Sure, Ulrich
2011-09-01
Intracranial aneurysms (IAs) require deliberately selected treatment strategies as they are increasingly found prior to rupture and deleterious subarachnoid haemorrhage (SAH). Multiple and recurrent aneurysms necessitate both neurointerventionalists and neurosurgeons to optimize aneurysmal occlusion in an interdisciplinary effort. The present study was conducted to condense essential strategies from a single neurovascular centre with regard to the lessons learned. Medical charts of 321 consecutive patients treated for IAs at our centre from September 2008 until December 2010 were retrospectively analysed for clinical presentation of the aneurysms, multiplicity and treatment pathways. In addition, a selective Medline search was performed. A total of 321 patients with 492 aneurysms underwent occlusion of their symptomatic aneurysm: 132 (41.1%) individuals were treated surgically, 189 (58.2%) interventionally; 138 patients presented with a SAH, of these 44.2% were clipped and 55.8% were coiled. Aneurysms of the middle cerebral artery were primarily occluded surgically (88), whereas most of the aneurysms of the internal carotid artery and anterior communicating artery (114) were treated endovascularly. Multiple aneurysms (range 2-5 aneurysms/individual) were diagnosed in 98 patients (30.2%). During the study period 12 patients with recurrent aneurysms were allocated to another treatment modality (previously clip to coil and vice versa). Our data show that successful interdisciplinary occlusion of IAs is based on both neurosurgical and neurointerventional therapy. In particular, multiple and recurrent aneurysms require tailored individual approaches to aneurysmal occlusion. This is achieved by a consequent interdisciplinary pondering of the optimal strategy to occlude IAs in order to prevent SAH.
A Geostatistical Approach to Indoor Surface Sampling Strategies
DEFF Research Database (Denmark)
Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg
1990-01-01
Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized, and particulate surface contamination sampled from small areas on a table has been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method.
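The kriging step can be sketched for a 1-D transect of surface samples, assuming an exponential covariance model with hypothetical sill and range parameters (simple kriging with a known mean, rather than whatever variant the paper fitted):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def simple_kriging(samples, x0, mean, sill=1.0, corr_range=5.0):
    """Predict contamination at unsampled position x0 from (position,
    value) samples, using covariance C(h) = sill * exp(-|h|/range)."""
    cov = lambda h: sill * math.exp(-abs(h) / corr_range)
    pts = [p for p, _ in samples]
    A = [[cov(pi - pj) for pj in pts] for pi in pts]   # sample covariances
    b = [cov(pi - x0) for pi in pts]                   # sample-to-target
    w = solve(A, b)                                    # kriging weights
    return mean + sum(wi * (v - mean) for wi, (_, v) in zip(w, samples))

samples = [(0.0, 8.0), (2.0, 6.0), (5.0, 3.0)]   # hypothetical (cm, counts)
pred = simple_kriging(samples, x0=1.0, mean=5.0)
```

Kriging is an exact interpolator, so predicting at a sampled position returns the measured value, and nearby samples screen distant ones, which is what makes it a minimum mean square error estimator for unmeasured positions.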
Exact Sampling and Decoding in High-Order Hidden Markov Models
Carter, S.; Dymetman, M.; Bouchard, G.
2012-01-01
We present a method for exact optimization and sampling from high order Hidden Markov Models (HMMs), which are generally handled by approximation techniques. Motivated by adaptive rejection sampling and heuristic search, we propose a strategy based on sequentially refining a lower-order language model.
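For the standard first-order case (the paper's contribution is handling high-order HMMs, which this sketch does not attempt), exact posterior sampling of a state path is forward filtering followed by backward sampling:

```python
import random

def forward_backward_sample(pi, A, B, obs, seed=11):
    """Exact posterior sample of a hidden state path for a first-order
    HMM. pi: initial state probs, A[i][j]: transition probs,
    B[i][o]: emission probs, obs: sequence of observation indices."""
    n_states = len(pi)
    # Forward filtering: alphas[t][i] = P(state_t = i | obs_1..t)
    alphas = []
    prev = [pi[i] * B[i][obs[0]] for i in range(n_states)]
    z = sum(prev)
    prev = [p / z for p in prev]
    alphas.append(prev)
    for o in obs[1:]:
        cur = [B[i][o] * sum(prev[j] * A[j][i] for j in range(n_states))
               for i in range(n_states)]
        z = sum(cur)
        prev = [p / z for p in cur]
        alphas.append(prev)
    # Backward sampling: draw state_T, then state_t | state_{t+1}
    rng = random.Random(seed)
    def draw(weights):
        r, acc = rng.random() * sum(weights), 0.0
        for i, w in enumerate(weights):
            acc += w
            if acc >= r:
                return i
        return len(weights) - 1
    path = [draw(alphas[-1])]
    for t in range(len(obs) - 2, -1, -1):
        w = [alphas[t][i] * A[i][path[-1]] for i in range(n_states)]
        path.append(draw(w))
    return path[::-1]

# Toy 2-state HMM; all numbers hypothetical
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.2, 0.8]]
B = [[0.9, 0.1], [0.2, 0.8]]
path = forward_backward_sample(pi, A, B, obs=[0, 0, 1, 1, 1])
```

For an order-k HMM the state space must be expanded to k-tuples, which is exactly the blow-up the paper's refinement strategy avoids paying in full.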
Layered Multi-mode Optimal Control Strategy for Multi-MW Wind Turbine
Institute of Scientific and Technical Information of China (English)
KONG Yi-gang; WANG Zhi-xin
2008-01-01
The control strategy is one of the most important aspects of renewable energy technology, and an increasing number of multi-MW wind turbines are being developed with variable-speed variable-pitch (VS-VP) technology. The main objective of adopting VS-VP technology is to improve the response speed and capture maximum energy. But the power generated by a wind turbine changes rapidly because of the continuous fluctuation of wind speed and direction. At the same time, wind energy conversion systems exhibit high order, time delays and strong nonlinear characteristics because of many uncertain factors. Based on an analysis of all the dynamic processes of a wind turbine, a layered multi-mode optimal control strategy is presented in which three control strategies (bang-bang, fuzzy, and adaptive proportional-integral-derivative (PID) control) are adopted according to the operating stage and expected performance of the wind turbine, in order to capture optimum wind power, compensate for the nonlinearity and improve wind turbine performance at low, rated and high wind speeds.
Reliability optimization of series–parallel systems with mixed redundancy strategy in subsystems
International Nuclear Information System (INIS)
Abouei Ardakan, Mostafa; Zeinal Hamadani, Ali
2014-01-01
Traditionally in redundancy allocation problem (RAP), it is assumed that the redundant components are used based on a predefined active or standby strategies. Recently, some studies consider the situation that both active and standby strategies can be used in a specific system. However, these researches assume that the redundancy strategy for each subsystem can be either active or standby and determine the best strategy for these subsystems by using a proper mathematical model. As an extension to this assumption, a novel strategy, that is a combination of traditional active and standby strategies, is introduced. The new strategy is called mixed strategy which uses both active and cold-standby strategies in one subsystem simultaneously. Therefore, the problem is to determine the component type, redundancy level, number of active and cold-standby units for each subsystem in order to maximize the system reliability. To have a more practical model, the problem is formulated with imperfect switching of cold-standby redundant components and k-Erlang time-to-failure (TTF) distribution. As the optimization of RAP belongs to NP-hard class of problems, a genetic algorithm (GA) is developed. The new strategy and proposed GA are implemented on a well-known test problem in the literature which leads to interesting results. - Highlights: • In this paper the redundancy allocation problem (RAP) for a series–parallel system is considered. • Traditionally there are two main strategies for redundant components, namely active and standby. • In this paper a new redundancy strategy which is called “Mixed” redundancy strategy is introduced. • Computational experiments demonstrate that implementing the new strategy leads to interesting results
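The three strategies can be compared numerically: active and cold-standby redundancy have closed forms for exponential lifetimes, while the mixed strategy is easy to estimate by simulation. This sketch assumes perfect switching and exponential (not the paper's k-Erlang) lifetimes, with hypothetical rates:

```python
import math
import random

def r_active(n, lam, t):
    """Active (hot) redundancy: all n units energized, need one alive."""
    return 1 - (1 - math.exp(-lam * t)) ** n

def r_cold(n, lam, t):
    """Cold standby, perfect switching: Erlang(n) survival function."""
    x = lam * t
    return math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(n))

def r_mixed(a, s, lam, t, trials=200_000, seed=3):
    """Monte Carlo sketch of the mixed strategy: `a` units start active,
    `s` cold spares replace failures instantly; the subsystem survives
    while at least one unit operates."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        active, spares, clock, alive = a, s, 0.0, True
        while clock < t:
            clock += rng.expovariate(active * lam)   # next failure
            if clock >= t:
                break
            if spares > 0:
                spares -= 1                          # fresh spare swapped in
            else:
                active -= 1                          # one fewer unit running
                if active == 0:
                    alive = False
                    break
        ok += alive
    return ok / trials

ra = r_active(2, lam=0.1, t=10.0)
rc = r_cold(2, lam=0.1, t=10.0)
rm = r_mixed(1, 1, lam=0.1, t=10.0)   # degenerates to pure cold standby
```

With perfect switching, cold standby beats active redundancy for the same unit count (spares do not age while waiting); imperfect switching, which the paper models, erodes exactly that advantage and makes the mix a genuine decision variable.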
Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples
DEFF Research Database (Denmark)
Ye, Juanying; Zhang, Xumin; Young, Clifford
2010-01-01
using Fe(III)-NTA IMAC resin and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that a higher iron purity led to an increased IMAC enrichment efficiency. The optimized method was then adapted to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 microg of mouse sample or 50 microg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated … characterization of phosphoproteins in functional phosphoproteomics research projects.
Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad
2018-02-01
The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for the energy exchange optimization of microgrids (MGs). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints, under load consumption and renewable power generation uncertainties. In doing so, three different risk-based strategies are distinguished by using the conditional value at risk (CVaR) approach. The proposed model is formulated with two distinct objective functions. The first function minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and MGs, as well as the power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, a stochastic scenario-based approach is incorporated in order to handle the uncertainty. Also, the Kantorovich distance scenario reduction method has been implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions simultaneously, and the best solution is extracted by a fuzzy satisfying method with respect to the risk-based strategies. To indicate the performance of the proposed model, it is tested on the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach can be considered an efficient tool for optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
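The CVaR used to distinguish the risk-based strategies can be computed directly from a discrete scenario set; the costs and probabilities below are invented for illustration:

```python
def cvar(scenario_costs, probs, alpha=0.95):
    """Conditional value at risk: expected cost within the worst
    (1 - alpha) probability tail of the scenario distribution."""
    tail = 1.0 - alpha
    order = sorted(range(len(scenario_costs)),
                   key=lambda i: scenario_costs[i], reverse=True)
    remaining, acc = tail, 0.0
    for i in order:                      # consume the worst scenarios first
        take = min(probs[i], remaining)
        acc += take * scenario_costs[i]
        remaining -= take
        if remaining <= 1e-12:
            break
    return acc / tail

costs = [100.0, 120.0, 150.0, 300.0]     # hypothetical operation costs
probs = [0.40, 0.30, 0.27, 0.03]
risk = cvar(costs, probs, alpha=0.95)
```

Because CVaR averages only over the tail, it always sits at or above the expected cost; a more risk-averse strategy accepts a higher expected cost to pull this tail figure down.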
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor-of-2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.
Strategies for Optimizing Algal Biology for Enhanced Biomass Production
International Nuclear Information System (INIS)
Barry, Amanda N.; Starkenburg, Shawn R.; Sayre, Richard T.
2015-01-01
One of the most environmentally sustainable ways to produce high-energy-density feedstocks (oils) for the production of liquid transportation fuels is from biomass. Photosynthetic carbon capture combined with biomass combustion (point source) and subsequent carbon capture and sequestration has also been proposed in the Intergovernmental Panel on Climate Change report as one of the most effective and economical strategies to remediate atmospheric greenhouse gases. To maximize photosynthetic carbon capture efficiency and energy return on investment, we must develop biomass production systems that achieve the greatest yields with the lowest inputs. Numerous studies have demonstrated that microalgae have among the greatest potentials for biomass production. This is in part because all algal cells are photoautotrophic, they have active carbon-concentrating mechanisms to increase photosynthetic productivity, and, unlike plants, all of the biomass is harvestable. All photosynthetic organisms, however, convert only a fraction of the solar energy they capture into chemical energy (reduced carbon or biomass). To increase areal carbon capture rates and biomass productivity, it will be necessary to identify the most robust algal strains and to increase their biomass production efficiency, often by genetic manipulation. We review recent large-scale efforts to identify the best biomass-producing strains and metabolic engineering strategies to improve areal productivity. These strategies include optimization of photosynthetic light-harvesting antenna size to increase energy capture and conversion efficiency, and the potential development of advanced molecular breeding techniques. To date, these strategies have resulted in up to twofold increases in biomass productivity.
Strategies for Optimizing Algal Biology for Enhanced Biomass Production
Energy Technology Data Exchange (ETDEWEB)
Barry, Amanda N.; Starkenburg, Shawn R.; Sayre, Richard T., E-mail: rsayre@newmexicoconsortium.org [Los Alamos National Laboratory, New Mexico Consortium, Los Alamos, NM (United States)
2015-02-02
One of the most environmentally sustainable ways to produce high-energy-density feedstocks (oils) for the production of liquid transportation fuels is from biomass. Photosynthetic carbon capture combined with biomass combustion (point source) and subsequent carbon capture and sequestration has also been proposed in the Intergovernmental Panel on Climate Change report as one of the most effective and economical strategies to remediate atmospheric greenhouse gases. To maximize photosynthetic carbon capture efficiency and energy return on investment, we must develop biomass production systems that achieve the greatest yields with the lowest inputs. Numerous studies have demonstrated that microalgae have among the greatest potentials for biomass production. This is in part because all algal cells are photoautotrophic, they have active carbon-concentrating mechanisms to increase photosynthetic productivity, and, unlike plants, all of the biomass is harvestable. All photosynthetic organisms, however, convert only a fraction of the solar energy they capture into chemical energy (reduced carbon or biomass). To increase areal carbon capture rates and biomass productivity, it will be necessary to identify the most robust algal strains and to increase their biomass production efficiency, often by genetic manipulation. We review recent large-scale efforts to identify the best biomass-producing strains and metabolic engineering strategies to improve areal productivity. These strategies include optimization of photosynthetic light-harvesting antenna size to increase energy capture and conversion efficiency, and the potential development of advanced molecular breeding techniques. To date, these strategies have resulted in up to twofold increases in biomass productivity.
Directory of Open Access Journals (Sweden)
Jianlei Zhang
Full Text Available We study the evolution of cooperation among selfish individuals in the stochastic strategy spatial prisoner's dilemma game. We equip players with the particle swarm optimization technique, and find that it may lead to highly cooperative states even if the temptations to defect are strong. The concept of particle swarm optimization was originally introduced within a simple model of social dynamics that can describe the formation of a swarm, analogous to a swarm of bees searching for a food source. Essentially, particle swarm optimization foresees changes in the velocity profile of each player, such that the best locations are targeted and eventually occupied. In our case, each player keeps track of the highest payoff attained within a local topological neighborhood and its individual highest payoff. Thus, players make use of their own memory, which keeps score of the most profitable strategy in previous actions, as well as of the knowledge gained by the swarm as a whole, to find the best available strategy for themselves and the society. Following extensive simulations of this setup, we find a significant increase in the level of cooperation for a wide range of parameters, and also a full resolution of the prisoner's dilemma. We also demonstrate the extreme efficiency of the optimization algorithm when dealing with environments that strongly favor the proliferation of defection, which in turn suggests that swarming could be an important phenomenon by means of which cooperation can be sustained even under highly unfavorable conditions. We thus present an alternative way of understanding the evolution of cooperative behavior and its ubiquitous presence in nature, and we hope that this study will be inspirational for future efforts aimed in this direction.
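The strategy update described above can be sketched as a standard PSO velocity update applied to each player's continuous cooperation probability: a pull toward the player's own best past strategy and a pull toward the best strategy in its neighborhood. The coefficient values and function names below are illustrative assumptions, not the paper's exact update rule:

```python
import random

def pso_step(strategy, velocity, personal_best, neighbor_best,
             w=0.7, c1=1.5, c2=1.5):
    """One PSO update of a player's cooperation probability in [0, 1].

    The player is pulled toward its own most profitable past strategy
    (personal_best) and toward the best strategy found in its local
    neighborhood (neighbor_best); w, c1, c2 are the usual inertia and
    acceleration coefficients.
    """
    r1, r2 = random.random(), random.random()
    velocity = (w * velocity
                + c1 * r1 * (personal_best - strategy)
                + c2 * r2 * (neighbor_best - strategy))
    strategy = min(1.0, max(0.0, strategy + velocity))  # clamp to [0, 1]
    return strategy, velocity

# A near-defector (strategy 0.1) whose neighborhood best is highly cooperative
new_strategy, _ = pso_step(0.1, 0.0, personal_best=0.2, neighbor_best=0.9)
print(new_strategy)
```

Iterating this rule over a lattice of players, with payoffs from the spatial prisoner's dilemma feeding the personal and neighborhood bests, reproduces the swarming dynamics the abstract describes.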
Directory of Open Access Journals (Sweden)
Martin M Gossner
Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution strongly attracted beetles and repelled true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield and quality were not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when
Wang, Yan; Huang, Song; Ji, Zhicheng
2017-07-01
This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generation (DG), considering market-based energy pricing. A daily ED model was formulated and a hybrid mutation strategy was adopted in HGSAPSO-M. The hybrid mutation strategy combines two mutation operators: chaotic mutation and Gaussian mutation. The proposed algorithm was tested on the IEEE 33-bus system, and the results show that the approach is effective for this problem.
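The hybrid mutation idea, one chaotic operator and one Gaussian operator applied to a candidate dispatch variable, can be sketched as follows. The logistic map, step size, mixing probability, and bounds are assumptions for illustration, not the paper's tuned settings:

```python
import random

def logistic_chaos(x):
    """Logistic map x -> 4x(1 - x), a standard chaotic sequence generator."""
    return 4.0 * x * (1.0 - x)

def hybrid_mutate(position, chaos_state, sigma=0.1, p_chaotic=0.5,
                  lo=0.0, hi=1.0):
    """Mutate one decision variable with either a chaotic or a Gaussian step."""
    chaos_state = logistic_chaos(chaos_state)          # advance the chaotic sequence
    if random.random() < p_chaotic:
        mutated = lo + chaos_state * (hi - lo)         # chaotic mutation: global resample
    else:
        mutated = position + random.gauss(0.0, sigma)  # Gaussian mutation: local step
    return min(hi, max(lo, mutated)), chaos_state

x, state = 0.5, 0.3
for _ in range(5):
    x, state = hybrid_mutate(x, state)
print(x)
```

The chaotic branch provides bounded but non-repeating global exploration, while the Gaussian branch refines locally; mixing the two is the usual rationale for hybrid mutation in PSO/GSA variants.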
Optimization strategies for discrete multi-material stiffness optimization
DEFF Research Database (Denmark)
Hvejsel, Christian Frier; Lund, Erik; Stolpe, Mathias
2011-01-01
Design of composite laminated lay-ups are formulated as discrete multi-material selection problems. The design problem can be modeled as a non-convex mixed-integer optimization problem. Such problems are in general only solvable to global optimality for small to moderate sized problems. To attack...... which numerically confirm the sought properties of the new scheme in terms of convergence to a discrete solution....
International Nuclear Information System (INIS)
Ramirez-Marquez, Jose Emmanuel
2008-01-01
At present, only roughly 2% to 5% of the containers received at US ports are scrutinized to determine whether they pose a danger or contain suspicious goods. Recently, concerns have been raised regarding the type of attack that could happen via container cargo, leading to devastating economic, psychological and sociological effects. Overall, this paper is concerned with developing an inspection strategy that minimizes the total cost of inspection while maintaining a user-specified detection rate for 'suspicious' containers. In this respect, a general model for describing an inspection strategy is proposed. The strategy is regarded as an (n+1)-echelon decision tree where, at each echelon, a decision has to be taken regarding which sensor to use, if any. Second, based on the general decision-tree model, this paper presents a minimum-cost container inspection strategy that conforms to a pre-specified detection rate under the assumption that different sensors with different reliability and cost characteristics can be used. To generate an optimal inspection strategy, an evolutionary optimization approach known as the probabilistic solution discovery algorithm is used.
Directory of Open Access Journals (Sweden)
Marya Viorst Gwadz
2017-05-01
Full Text Available Abstract Background More than half of persons living with HIV (PLWH) in the United States, mainly African Americans/Blacks and Hispanics, are insufficiently engaged in HIV primary care and not taking antiretroviral therapy (ART). In the proposed project, a potent and innovative research methodology, the multiphase optimization strategy (MOST), will be employed to develop a highly efficacious, efficient, scalable, and cost-effective intervention to increase engagement along the HIV care continuum. Whereas randomized controlled trials are valuable for evaluating the efficacy of multi-component interventions as a package, they are not designed to evaluate which specific components contribute to efficacy. MOST, a pioneering, engineering-inspired framework, addresses this problem through highly efficient randomized experimentation to assess the performance of individual intervention components and their interactions. We propose to use MOST to engineer an intervention to increase engagement along the HIV care continuum for African American/Black and Hispanic PLWH not well engaged in care and not taking ART. Further, the intervention will be optimized for cost-effectiveness. A similar set of multi-level factors impede both HIV care and ART initiation for African American/Black and Hispanic PLWH, primary among them individual-level (e.g., substance use, distrust, fear), social-level (e.g., stigma), and structural-level barriers (e.g., difficulties accessing ancillary services). Guided by a multi-level social cognitive theory, and using the motivational interviewing approach, the study will evaluate five distinct culturally based intervention components (i.e., counseling sessions, pre-adherence preparation, support groups, peer mentorship, and patient navigation), each designed to address a specific barrier to HIV care and ART initiation. These components are well-grounded in the empirical literature and were found acceptable, feasible, and promising with respect to
Perspectives on land snails - sampling strategies for isotopic analyses
Kwiecien, Ola; Kalinowski, Annika; Kamp, Jessica; Pellmann, Anna
2017-04-01
Since the seminal works of Goodfriend (1992), several substantial studies have confirmed a relation between the isotopic composition of land snail shells (d18O, d13C) and environmental parameters such as precipitation amount, moisture source, temperature and vegetation type. This relation, however, is not straightforward and is site-dependent. The choice of sampling strategy (discrete or bulk sampling) and of cleaning procedure (several methods can be used, but a comparison of their effects on an individual shell has not yet been achieved) further complicates shell analysis. The advantage of using snail shells as an environmental archive lies in the snails' limited mobility, and therefore an intrinsic aptitude for recording local and site-specific conditions. Also, snail shells are often found at dated archaeological sites. An obvious drawback is that shell assemblages rarely make up a continuous record, and a single shell is only a snapshot of the environmental setting at a given time. Shells from archaeological sites might represent a dietary component, and cooking would presumably alter the isotopic signature of the aragonite material. Consequently, a proper sampling strategy is of great importance and should be adjusted to the scientific question. Here, we compare and contrast different sampling approaches using modern shells collected in Morocco, Spain and Germany. The bulk shell approach (finely ground material) yields information on mean environmental parameters within the life span of the analyzed individuals. However, despite homogenization, replicate measurements of bulk shell material returned results with a variability greater than analytical precision (up to 2‰ for d18O, and up to 1‰ for d13C), calling for caution when analyzing only single individuals. Horizontal high-resolution sampling (single drill holes along growth lines) provides insights into the amplitude of seasonal variability, while vertical high-resolution sampling (multiple drill holes along the same growth line
Optimization of fuel cycle strategies with constraints on uranium availability
International Nuclear Information System (INIS)
Silvennoinen, P.; Vira, J.; Westerberg, R.
1982-01-01
Optimization of nuclear reactor and fuel cycle strategies is studied under the influence of reduced availability of uranium. The analysis is separated into two distinct steps. First, the global situation is considered within given high and low projections of the installed capacity up to the year 2025. Uranium is regarded as an exhaustible resource whose production cost would increase proportionally to increasing cumulative exploitation. Based on the estimates obtained for the uranium cost, a global strategy is derived by splitting the installed capacity between light water reactor (LWR) once-through, LWR recycle, and fast breeder reactor (FBR) alternatives. In the second phase, the nuclear program of an individual utility is optimized within the constraints imposed by the global scenario. Results from the global scenarios indicate that in a reference case the uranium price would triple by the year 2000, and the price escalation would continue throughout the planning period. In a pessimistic growth scenario where the global nuclear capacity would not exceed 600 GW(electric) in 2025, the uranium price would almost double by 2000. In both global scenarios, FBRs would be introduced, in the reference case after 2000 and in the pessimistic case after 2010. In spite of the increases in uranium prices, the levelized power production cost would increase only by 45% up to 2025 in the utility case, provided that the plutonium is incinerated as a substitute fuel.
An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.
Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A
2016-01-01
Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with the urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. The performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit, due to its performance on turbid water samples and its reproducibility. Centrifugation speeds, water volumes and the use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10⁶ to 10⁰ leptospires/mL and lower limits of detection ranging from Leptospira in environmental waters (river, pond and sewage), which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.
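Once a qPCR assay is calibrated against a dilution series such as the 10⁶ to 10⁰ leptospires/mL range above, quantification reduces to inverting a linear standard curve of Cq against log₁₀ concentration. A minimal sketch; the Cq readings below are made-up illustrative numbers, not the study's calibration data:

```python
def fit_standard_curve(log10_conc, cq):
    """Least-squares line Cq = slope * log10(conc) + intercept."""
    n = len(cq)
    mx, my = sum(log10_conc) / n, sum(cq) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_conc, cq))
    sxx = sum((x - mx) ** 2 for x in log10_conc)
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(cq_sample, slope, intercept):
    """Invert the standard curve to estimate leptospires/mL."""
    return 10.0 ** ((cq_sample - intercept) / slope)

# Hypothetical ten-fold dilution series, 10^6 down to 10^0 leptospires/mL
log_conc = [6, 5, 4, 3, 2, 1, 0]
cq_values = [18.0, 21.3, 24.6, 27.9, 31.2, 34.5, 37.8]  # illustrative Cq readings
slope, intercept = fit_standard_curve(log_conc, cq_values)
print(quantify(25.0, slope, intercept))  # estimated concentration for Cq = 25
```

A slope near -3.32 corresponds to 100% amplification efficiency, which is a common sanity check on the curve before quantifying unknowns.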
Optimal control strategies using vaccination and fogging in a dengue fever transmission model
Fitria, Irma; Winarni; Pancahayani, Sigit; Subchan
2017-08-01
This paper discusses a model and an optimal control problem of dengue fever transmission. The model comprises human and vector (mosquito) population classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two control variables in the model: fogging and vaccination. The objective function of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. We considered vaccination as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We used Pontryagin's minimum principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.
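The six-compartment structure described above can be sketched as a forward-Euler step of a host-vector ODE system, with the two controls entering as an extra vector removal rate (fogging) and a susceptible-to-resistant transfer rate (vaccination). The compartment equations and every parameter value below are simplified illustrations, not the paper's calibrated model:

```python
def dengue_step(state, dt, u_fog, u_vac, p):
    """One Euler step of a simplified host-vector dengue model.

    state = (Sh, Ih, Rh, W, Sv, Iv): susceptible/infected/resistant humans,
    wigglers (larvae), and susceptible/infected adult vectors. u_fog adds
    vector mortality; u_vac moves susceptible humans to the resistant class.
    """
    Sh, Ih, Rh, W, Sv, Iv = state
    Nh = Sh + Ih + Rh
    dSh = p["mu_h"] * Nh - p["beta_h"] * Sh * Iv / Nh - (u_vac + p["mu_h"]) * Sh
    dIh = p["beta_h"] * Sh * Iv / Nh - (p["gamma"] + p["mu_h"]) * Ih
    dRh = p["gamma"] * Ih + u_vac * Sh - p["mu_h"] * Rh
    dW = p["phi"] * (Sv + Iv) - (p["mat"] + p["mu_w"]) * W
    dSv = p["mat"] * W - p["beta_v"] * Sv * Ih / Nh - (p["mu_v"] + u_fog) * Sv
    dIv = p["beta_v"] * Sv * Ih / Nh - (p["mu_v"] + u_fog) * Iv
    deriv = (dSh, dIh, dRh, dW, dSv, dIv)
    return tuple(x + dt * dx for x, dx in zip(state, deriv))

# Illustrative parameters and initial state (not calibrated values)
params = dict(mu_h=0.0004, beta_h=0.3, gamma=0.1, phi=0.1, mat=0.1,
              mu_w=0.1, mu_v=0.05, beta_v=0.3)
state = (990.0, 10.0, 0.0, 100.0, 500.0, 50.0)
print(dengue_step(state, 0.1, 0.2, 0.05, params))
```

In the optimal control problem, Pontryagin's minimum principle replaces the fixed u_fog and u_vac with time-varying controls chosen to minimize the integral of infections plus quadratic control costs.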
DEFF Research Database (Denmark)
Larsen, Erik Huusfeldt; Löschner, Katrin
2014-01-01
microscopy (TEM) proved to be necessary for troubleshooting of results obtained from AFFF-LS-ICP-MS. Aqueous and enzymatic extraction strategies were tested for thorough sample preparation, aiming at degrading the sample matrix and liberating the AgNPs from chicken meat into liquid suspension. The resulting...... AFFF-ICP-MS fractograms, which corresponded to the enzymatic digests, showed a major nano-peak (about 80% recovery of AgNPs spiked to the meat) plus new smaller peaks that eluted close to the void volume of the fractograms. Small, but significant shifts in retention time of AFFF peaks were observed...... for the meat sample extracts and the corresponding neat AgNP suspension, and rendered sizing by way of calibration with AgNPs as sizing standards inaccurate. In order to gain further insight into the sizes of the separated AgNPs, or their possible dissolved state, fractions of the AFFF eluate were collected...
Xu, Henglong; Yong, Jiang; Xu, Guangjian
2015-12-30
Sampling frequency is important for obtaining sufficient information in temporal research on microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (bimonthly per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-sampling regimes recovered 94%, 94%, and 78% of the total species, respectively. For revealing the seasonal distribution, the 8-sampling regime may capture >75% of the seasonal variance, while the traditional 4-sampling may only explain sampling frequency, the biotic data showed stronger correlations with seasonal variables (e.g., temperature, salinity) in combination with nutrients. It is suggested that 8 sampling events per year may be an optimal sampling strategy for seasonal research on ciliated protozoa in marine ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimizing the HLT Buffer Strategy with Monte Carlo Simulations
AUTHOR|(CDS)2266763
2017-01-01
This project aims to optimize the strategy for utilizing the disk buffer of the High Level Trigger (HLT) of the LHCb experiment with the help of Monte Carlo simulations. A method is developed that simulates the Event Filter Farm (EFF), the computing cluster for the High Level Trigger, as a compound of nodes with different performance properties. In this way, the behavior of the computing farm can be analyzed at a deeper level than before. It is demonstrated that the current operating strategy could be improved as data taking approaches a mid-year scheduled stop or the year-end technical stop. The processing time of the buffered data can be lowered by distributing the detector data according to the processing power of the nodes instead of their relative disk sizes, as long as the occupancy level of the buffer is low enough. Moreover, this ensures that data taken and stored in the buffer at the same time is processed by different nodes nearly simultaneously, which reduces the load on the infrastructure.
OPTIMAL PRODUCTION–SALES STRATEGIES FOR A COMPANY AT CHANGING MARKET PRICE
Directory of Open Access Journals (Sweden)
ELLINA V. GRIGORIEVA
2015-01-01
Full Text Available In this paper we consider a monopoly producing a consumer good of high demand. Its market price depends on the volume of the produced goods, described by the Cobb-Douglas production function. The production-sales activity of the firm is modeled by a nonlinear differential equation with two bounded controls: the share of the profit obtained from sales that the company reinvests into expanding its own production, and the amount of short-term loans taken from a bank for the same purpose. The problem of maximizing discounted total profit on a given time interval is stated and solved. In order to find the optimal production and sales strategies for the company, the Pontryagin maximum principle is used. In order to investigate the arising two-point boundary value problem for the maximum principle, an analysis of the corresponding Hamiltonian system is applied. Based on a qualitative analysis of this system, we found that, depending on the initial conditions and parameters of the model, both singular and bang-bang controls can be optimal. An economic analysis of the optimal solutions is discussed.
Echolocating bats use a nearly time-optimal strategy to intercept prey.
Directory of Open Access Journals (Sweden)
Kaushik Ghose
2006-05-01
Full Text Available Acquisition of food in many animal species depends on the pursuit and capture of moving prey. Among modern humans, the pursuit and interception of moving targets plays a central role in a variety of sports, such as tennis, football, Frisbee, and baseball. Studies of target pursuit in animals, ranging from dragonflies to fish and dogs to humans, have suggested that they all use a constant bearing (CB) strategy to pursue prey or other moving targets. CB is best known as the interception strategy employed by baseball outfielders to catch ballistic fly balls. CB is a time-optimal solution to catch targets moving along a straight line, or in a predictable fashion--such as a ballistic baseball, or a piece of food sinking in water. Many animals, however, have to capture prey that may make evasive and unpredictable maneuvers. Is CB an optimum solution to pursuing erratically moving targets? Do animals faced with such erratic prey also use CB? In this paper, we address these questions by studying prey capture in an insectivorous echolocating bat. Echolocating bats rely on sonar to pursue and capture flying insects. The bat's prey may emerge from foliage for a brief time, fly in erratic three-dimensional paths before returning to cover. Bats typically take less than one second to detect, localize and capture such insects. We used high-speed stereo infrared videography to study the three-dimensional flight paths of the big brown bat, Eptesicus fuscus, as it chased erratically moving insects in a dark laboratory flight room. We quantified the bat's complex pursuit trajectories using a simple delay differential equation. Our analysis of the pursuit trajectories suggests that bats use a constant absolute target direction strategy during pursuit. We show mathematically that, unlike CB, this approach minimizes the time it takes for a pursuer to intercept an unpredictably moving target. Interestingly, the bat's behavior is similar to the interception strategy
Echolocating bats use a nearly time-optimal strategy to intercept prey.
Ghose, Kaushik; Horiuchi, Timothy K; Krishnaprasad, P S; Moss, Cynthia F
2006-05-01
Acquisition of food in many animal species depends on the pursuit and capture of moving prey. Among modern humans, the pursuit and interception of moving targets plays a central role in a variety of sports, such as tennis, football, Frisbee, and baseball. Studies of target pursuit in animals, ranging from dragonflies to fish and dogs to humans, have suggested that they all use a constant bearing (CB) strategy to pursue prey or other moving targets. CB is best known as the interception strategy employed by baseball outfielders to catch ballistic fly balls. CB is a time-optimal solution to catch targets moving along a straight line, or in a predictable fashion--such as a ballistic baseball, or a piece of food sinking in water. Many animals, however, have to capture prey that may make evasive and unpredictable maneuvers. Is CB an optimum solution to pursuing erratically moving targets? Do animals faced with such erratic prey also use CB? In this paper, we address these questions by studying prey capture in an insectivorous echolocating bat. Echolocating bats rely on sonar to pursue and capture flying insects. The bat's prey may emerge from foliage for a brief time, fly in erratic three-dimensional paths before returning to cover. Bats typically take less than one second to detect, localize and capture such insects. We used high-speed stereo infrared videography to study the three-dimensional flight paths of the big brown bat, Eptesicus fuscus, as it chased erratically moving insects in a dark laboratory flight room. We quantified the bat's complex pursuit trajectories using a simple delay differential equation. Our analysis of the pursuit trajectories suggests that bats use a constant absolute target direction strategy during pursuit. We show mathematically that, unlike CB, this approach minimizes the time it takes for a pursuer to intercept an unpredictably moving target. Interestingly, the bat's behavior is similar to the interception strategy implemented in some
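The constant absolute target direction rule can be sketched in two dimensions: the pursuer cancels the target's velocity component perpendicular to the line of sight, so the absolute bearing to the target stays fixed, and spends its remaining speed closing the range. This is a geometric illustration of the idea, not the authors' delay differential equation model:

```python
import math

def catd_velocity(pursuer, target, target_vel, speed):
    """Velocity command under a constant absolute target direction rule."""
    rx, ry = target[0] - pursuer[0], target[1] - pursuer[1]
    dist = math.hypot(rx, ry)
    ux, uy = rx / dist, ry / dist                 # unit line-of-sight (LOS) vector
    px, py = -uy, ux                              # unit vector perpendicular to LOS
    v_perp = target_vel[0] * px + target_vel[1] * py  # target's cross-LOS speed
    v_perp = max(-speed, min(speed, v_perp))      # match it, capped at own speed
    v_along = math.sqrt(max(0.0, speed ** 2 - v_perp ** 2))  # rest closes range
    return (v_along * ux + v_perp * px, v_along * uy + v_perp * py)

# Pursuer at the origin, target 10 units away and moving across the LOS
vx, vy = catd_velocity((0.0, 0.0), (10.0, 0.0), (0.0, 1.0), speed=2.0)
print(vx, vy)
```

Because the LOS direction never rotates, the range shrinks monotonically whenever the pursuer is faster than the target, which is the sense in which the strategy is nearly time-optimal against unpredictable motion.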
Sampling strategy for estimating human exposure pathways to consumer chemicals
Papadopoulou, Eleni; Padilla-Sanchez, Juan A.; Collins, Chris D.; Cousins, Ian T.; Covaci, Adrian; de Wit, Cynthia A.; Leonards, Pim E.G.; Voorspoels, Stefan; Thomsen, Cathrine; Harrad, Stuart; Haug, Line S.
2016-01-01
Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway.
Directory of Open Access Journals (Sweden)
Zhenzhen Lei
2017-01-01
Full Text Available The driving pattern has an important influence on the parameter optimization of the energy management strategy (EMS) for hybrid electric vehicles (HEVs). A new algorithm using simulated annealing particle swarm optimization (SA-PSO) is proposed for parameter optimization of both the power system and the control strategy of HEVs based on multiple driving cycles, in order to realize minimum fuel consumption without impairing dynamic performance. Furthermore, taking the unknown actual driving cycle into consideration, an optimization method for the dynamic EMS based on driving pattern recognition is proposed in this paper. Simulation verifications for the optimized EMS based on multiple driving cycles and driving pattern recognition are carried out on the Matlab/Simulink platform. The results show that, compared with the original EMS, the former strategy reduces fuel consumption by 4.36% and the latter by 11.68%. A road test on the prototype vehicle is conducted and the effectiveness of the proposed EMS is validated by the test data.
Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo
2008-02-01
Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem mass spectrometry have been developed for monitoring tacrolimus levels. Our center favors the use of a limited sampling strategy and an abbreviated formula to estimate the area under the concentration-time curve (AUC(0-12)). However, this is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of the DBS method to tacrolimus level monitoring using the limited sampling strategy and the abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of the blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty-six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12) estimated from venous blood samples and from fingerprick blood samples (r(2) = 0.96, P AUC(0-12) strategy as drug monitoring.
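Abbreviated AUC estimation of the kind used above typically takes the form of a multiple-regression formula over the few timed concentrations. A minimal sketch; the coefficients below are placeholders for illustration, not the center's validated formula, and any real formula must be fitted to full 12-hour profiles in the target population:

```python
def abbreviated_auc(c0, c2, c4, coef=(10.0, 1.5, 3.2, 4.1)):
    """Estimate tacrolimus AUC(0-12) from three timed levels.

    AUC = a + b0*C0 + b2*C2 + b4*C4. The default coefficients are
    placeholders; real formulas are derived by regressing full 12-h
    profiles on the timed levels and must be revalidated before use.
    """
    a, b0, b2, b4 = coef
    return a + b0 * c0 + b2 * c2 + b4 * c4

# Paired estimates from venous and fingerprick (DBS) levels can then be
# compared by regression, as in the study
venous = abbreviated_auc(5.0, 12.0, 9.0)
dbs = abbreviated_auc(4.8, 12.5, 8.7)
print(venous, dbs)
```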
Energy Technology Data Exchange (ETDEWEB)
Tavakkoli-Moghaddam, R. [Department of Industrial Engineering, Faculty of Engineering, University of Tehran, P.O. Box 11365/4563, Tehran (Iran, Islamic Republic of); Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: tavakoli@ut.ac.ir; Safari, J. [Department of Industrial Engineering, Science and Research Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of)], E-mail: jalalsafari@pideco.com; Sassani, F. [Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: sassani@mech.ubc.ca
2008-04-15
This paper proposes a genetic algorithm (GA) for a redundancy allocation problem for series-parallel systems in which the redundancy strategy can be chosen for individual subsystems. The majority of solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. In general, active redundancy has received more attention in the past. However, in practice both active and cold-standby redundancies may be used within a particular system design, and the choice of redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize system reliability under system-level constraints. This belongs to the NP-hard class of problems, and its complexity makes it difficult to solve optimally with traditional optimization tools. It is demonstrated in this paper that the GA is an efficient method for solving this type of problem. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.
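The record above gives no code; as a minimal sketch of the idea it describes, the toy genetic algorithm below chooses both a redundancy level and a strategy (active vs. cold standby) for each subsystem of a series-parallel system under a cost budget. All numbers (reliabilities, costs, budget) and the GA settings are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(1)

# Hypothetical example data (illustrative only, not taken from the paper):
REL = [0.90, 0.85, 0.92]    # single-component reliability of each subsystem
COST = [3, 2, 4]            # unit component cost per subsystem
BUDGET = 30                 # system-level cost constraint
MAX_LEVEL = 4               # maximum redundancy level per subsystem

def subsystem_rel(r, n, strategy):
    """Reliability of one subsystem with n identical components."""
    if strategy == "active":
        return 1.0 - (1.0 - r) ** n           # hot parallel redundancy
    # Cold standby with perfect switching and exponential lifetimes:
    lam = -math.log(r)                         # failure rate x mission time
    return math.exp(-lam) * sum(lam ** k / math.factorial(k) for k in range(n))

def fitness(ind):
    """System reliability (series of subsystems), penalized if over budget."""
    cost = sum(COST[i] * n for i, (n, _) in enumerate(ind))
    rel = 1.0
    for i, (n, s) in enumerate(ind):
        rel *= subsystem_rel(REL[i], n, s)
    return rel if cost <= BUDGET else rel - 1.0

def random_ind():
    return [(random.randint(1, MAX_LEVEL), random.choice(["active", "standby"]))
            for _ in REL]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind):
    ind = list(ind)
    i = random.randrange(len(ind))
    ind[i] = (random.randint(1, MAX_LEVEL), random.choice(["active", "standby"]))
    return ind

pop = [random_ind() for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                           # elitism keeps the best designs
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]

best = max(pop, key=fitness)
```

The cold-standby formula assumes identical exponential components with perfect switching; the paper's actual reliability model and GA operators may differ.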
International Nuclear Information System (INIS)
Tavakkoli-Moghaddam, R.; Safari, J.; Sassani, F.
2008-01-01
This paper proposes a genetic algorithm (GA) for a redundancy allocation problem for series-parallel systems in which the redundancy strategy can be chosen for individual subsystems. The majority of solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. In general, active redundancy has received more attention in the past. However, in practice both active and cold-standby redundancies may be used within a particular system design, and the choice of redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize system reliability under system-level constraints. This belongs to the NP-hard class of problems, and its complexity makes it difficult to solve optimally with traditional optimization tools. It is demonstrated in this paper that the GA is an efficient method for solving this type of problem. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.
International Nuclear Information System (INIS)
Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik
2016-01-01
The aim of this paper is to contribute to more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances in which optimization of the measurement method is favorable, such as situations requiring rapid results for urgent decisions or, conversely, maximizing the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed that calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work focuses on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time is reduced compared to using the same measurement time for all samples. It was found that through optimization the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
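A rough sketch of the time-optimization idea in this record: for a fixed target MDA, the minimum counting time can be found by bisection on a Currie-type MDA expression, while the 90Y ingrowth factor gives the fraction of equilibrium activity available at measurement time. Only the 0.8 cpm background comes from the abstract; the 40% detection efficiency and the target MDA are assumptions for illustration, and the paper's actual model is more detailed.

```python
import math

T_HALF_Y90 = 64.0 * 3600.0                   # ~64 h half-life of 90Y, in seconds
LAMBDA_Y = math.log(2) / T_HALF_Y90

def ingrowth_fraction(t):
    """Fraction of secular-equilibrium 90Y activity grown in after time t (s)."""
    return 1.0 - math.exp(-LAMBDA_Y * t)

def mda(t, bkg_cps, eff):
    """Currie-type minimum detectable activity (Bq) for counting time t (s)."""
    return (2.71 + 4.65 * math.sqrt(bkg_cps * t)) / (eff * t)

def required_time(target_mda, bkg_cps, eff, lo=1.0, hi=1e7):
    """Smallest counting time whose MDA meets the target.

    mda() is monotonically decreasing in t, so bisection applies."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mda(mid, bkg_cps, eff) <= target_mda:
            hi = mid
        else:
            lo = mid
    return hi

# Example: 0.8 cpm background (from the abstract), assumed 40% efficiency,
# assumed target MDA of 0.1 Bq:
t_needed = required_time(0.1, 0.8 / 60.0, 0.40)
```

In the paper's scheme, later samples in the series enjoy longer ingrowth and therefore need shorter counting times; iterating `required_time` per sample reproduces that qualitative behavior.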
Directory of Open Access Journals (Sweden)
Hongwen He
2013-01-01
Full Text Available Energy management strategy greatly influences the power performance and fuel economy of plug-in hybrid electric vehicles. To explore the fuel-saving potential of a plug-in hybrid electric bus (PHEB), this paper searches for the globally optimal energy management strategy using a dynamic programming (DP) algorithm. Firstly, a simplified backward model of the PHEB, necessary for the DP algorithm, was built. Then the torque and speed of the engine and the torque of the motor were selected as the control variables, and the battery state of charge (SOC) was selected as the state variable. The DP solution procedure is listed, and the method for finding all feasible control variables at every state of each stage is presented in detail. Finally, an appropriate SOC increment is determined after quantizing the state variable, and the optimal control over a long driving distance is replaced with the optimal control of one driving cycle, which reduces the computational time significantly while maintaining precision. The simulation results show that the fuel economy of the PHEB with the optimal energy management strategy is improved by 53.7% compared with that of a conventional bus, which can serve as a benchmark for the assessment of other control strategies.
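A minimal backward-DP sketch in the spirit of this record: one control (battery power) instead of the paper's three, a toy fuel model, and a discretized SOC grid as the state. Every number below is an illustrative assumption, not a value from the paper.

```python
import math

# Toy backward dynamic program over a discretized SOC grid (illustrative values).
DEMAND = [20, 35, 15, 40, 25]                     # power demand per stage, kW
SOC_GRID = [round(0.30 + 0.01 * i, 2) for i in range(41)]   # 0.30 .. 0.70
BATT_KWH = 10.0                                   # battery capacity, kWh
DT = 1.0 / 60.0                                   # stage length, hours

def fuel_rate(p_eng):
    """Toy fuel consumption model, kg/h, with an idle offset when the engine runs."""
    return 0.0 if p_eng <= 0 else 0.3 + 0.08 * p_eng

cost_to_go = {soc: 0.0 for soc in SOC_GRID}       # free terminal state
policy = []
for p_dem in reversed(DEMAND):
    new_cost, stage_policy = {}, {}
    for soc in SOC_GRID:
        best, best_u = math.inf, None
        for p_batt in range(-10, 41, 5):          # candidate battery powers, kW
            p_eng = p_dem - p_batt                # engine covers the rest
            if p_eng < 0:
                continue
            raw = soc - p_batt * DT / BATT_KWH    # SOC after this stage
            if raw < SOC_GRID[0] - 1e-9 or raw > SOC_GRID[-1] + 1e-9:
                continue                          # SOC bounds violated
            nxt = min(SOC_GRID, key=lambda s: abs(s - raw))  # snap to grid
            c = fuel_rate(p_eng) * DT + cost_to_go[nxt]
            if c < best:
                best, best_u = c, p_batt
        new_cost[soc], stage_policy[soc] = best, best_u
    cost_to_go = new_cost
    policy.append(stage_policy)
policy.reverse()                                  # policy[k][soc] -> battery power, kW

engine_only = sum(fuel_rate(p) * DT for p in DEMAND)  # baseline: no battery use
```

The p_batt = 0 choice is always feasible, so every state has a finite cost; richer controls (engine speed, motor torque) would simply enlarge the per-stage search set.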
Technical Note: Comparison of storage strategies of sea surface microlayer samples
Directory of Open Access Journals (Sweden)
K. Schneider-Zapp
2013-07-01
Full Text Available The sea surface microlayer (SML) is an important biogeochemical system whose physico-chemical analysis often necessitates some degree of sample storage. However, many SML components degrade with time, so the development of optimal storage protocols is paramount. We here briefly review some commonly used treatment and storage protocols. Using freshwater and saline SML samples from a river estuary, we investigated temporal changes in surfactant activity (SA) and the absorbance and fluorescence of chromophoric dissolved organic matter (CDOM) over four weeks, following selected sample treatment and storage protocols. Some variability in the effectiveness of individual protocols most likely reflects sample provenance. None of the various protocols examined performed any better than dark storage at 4 °C without pre-treatment. We therefore recommend storing samples refrigerated in the dark.
Statistical sampling strategies
International Nuclear Information System (INIS)
Andres, T.H.
1987-01-01
Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized.
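Of the three Monte Carlo selection methods this record mentions, Latin hypercube sampling is easy to sketch: each parameter's range is cut into n equal-probability strata and each stratum is sampled exactly once, so every one-dimensional projection covers the range evenly.

```python
import random

def latin_hypercube(n, d, rng=None):
    """n points in [0,1)^d: each axis is cut into n equal strata and every
    stratum is hit exactly once per axis."""
    rng = rng or random.Random(0)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                  # random stratum order per dimension
        cols.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]

pts = latin_hypercube(10, 3)
```

Mapping the unit cube to actual parameter ranges (or applying the truncation, importance, and discretization transformations the record lists) is a separate post-processing step.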
Directory of Open Access Journals (Sweden)
Kai Yang
2016-01-01
Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general class of single-objective nonlinear constrained expected-value programming problems without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain large sampling sizes. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and shows potential for further applications.
Directory of Open Access Journals (Sweden)
Zhu Xinglin
2016-06-01
Full Text Available During the radial–axial ring rolling process, the cooperative strategy of the radial–axial feed is critical for the dimensional accuracy and thermo-mechanical parameter distribution of the formed ring. In order to improve the comprehensive quality of ring parts, the response surface method (RSM) is employed for the first time to optimize the cooperative feed strategy for the radial–axial ring rolling process, by combining it with an improved and verified 3D coupled thermo-mechanical finite element model. A feed trajectory is put forward to describe the cooperative relationship of the radial–axial feed, and three variables are designed based on the feed trajectory. In order to achieve multi-objective optimization, four responses including thermo-mechanical parameter distribution and rolling force are proposed. Based on the FEM results, RSM is used to establish a response model depicting the functional relationship between the objective responses and the design variables. Through this approximate model, the effects of different variables on the ring rolling process are analyzed jointly, and the optimal feed strategy is obtained from the optimization chart under a given constraint condition.
Multi-objective optimization of cellular scanning strategy in selective laser melting
DEFF Research Database (Denmark)
Ahrari, Ali; Deb, Kalyanmoy; Mohanty, Sankhya
2017-01-01
The scanning strategy for selective laser melting - an additive manufacturing process - determines the temperature fields during the manufacturing process, which in turn affect residual stresses and distortions, two of the main sources of process-induced defects. The goal of this study is to dev… …the problem is a combination of combinatorial and choice optimization, which makes it difficult to solve. On a process simulation domain consisting of 32 cells, our multi-objective evolutionary method is able to find a set of trade-off solutions for the defined conflicting objectives, which cannot…
Emergence of an optimal search strategy from a simple random walk.
Sakiyama, Tomoko; Gunji, Yukio-Pegio
2013-09-06
In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
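For context, here is a minimal contrast between the fixed-step random walker this record uses as its baseline and a Lévy-like walker with power-law step lengths drawn by inverse-transform sampling. The paper's own adaptive rule-changing algorithm is not reproduced; the exponent mu = 2 and the minimum step length are assumptions.

```python
import math
import random

rng = random.Random(42)

def levy_step(mu=2.0, s_min=1.0):
    """Power-law step length p(s) ~ s^-mu for s >= s_min (inverse transform).

    1 - u lies in (0, 1], so the result is always >= s_min and finite."""
    return s_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))

def walk(n_steps, step_length):
    """Isotropic 2D walk: a fresh random heading at every step."""
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        s = step_length()
        x += s * math.cos(theta)
        y += s * math.sin(theta)
        path.append((x, y))
    return path

uniform_path = walk(2000, lambda: 1.0)     # fixed step length, as in the baseline
levy_path = walk(2000, levy_step)          # Levy-like walker for comparison
```

The paper's point is that super-diffusion can emerge even with the uniform step lengths of `uniform_path`, once the agent adapts its movement rules from experience.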
Li MingChu; Yang Zekun; Lu Kun; Guo Cheng
2017-01-01
Coordinated terrorist attacks are an increasing threat to Western countries. By monitoring potential terrorists, security agencies are able to detect and destroy terrorist plots at their planning stage; an optimal monitoring strategy for the domestic security agency therefore becomes necessary. However, previous studies of monitoring strategy generation fail to consider information leakage due to hackers and insider threats. Such leakage events may lead to failure of watch...
Optimization of multi-channel neutron focusing guides for extreme sample environments
International Nuclear Information System (INIS)
Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P
2014-01-01
In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
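As an illustration of one of the population-based optimizers this record compares, here is a compact DE/rand/1/bin loop. A sphere function stands in for the Monte-Carlo ray-tracing figure of merit that the real design loop would evaluate; the population size, F, and CR are generic defaults, not the paper's settings.

```python
import random

rng = random.Random(0)

def de_minimize(f, bounds, pop_size=20, F=0.5, CR=0.9, generations=150):
    """DE/rand/1/bin sketch with clipping to the box constraints."""
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)            # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            f_trial = f(trial)
            if f_trial <= fit[i]:                  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    i_best = min(range(pop_size), key=fit.__getitem__)
    return pop[i_best], fit[i_best]

# Stand-in objective; in the real design loop f(x) would run the ray-tracing
# simulation of a candidate guide geometry and return its (negated) gain:
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = de_minimize(sphere, [(-5.0, 5.0)] * 3)
```

Swapping in particle-swarm optimization or artificial bee colony, as the record does, only changes this outer loop; the expensive simulation call stays the same.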
International Nuclear Information System (INIS)
Gu, Wei; Lu, Shuai; Wu, Zhi; Zhang, Xuesong; Zhou, Jinhui; Zhao, Bo; Wang, Jun
2017-01-01
Highlights: •A bilateral transaction mode for the residential CCHP microgrid is proposed. •An energy pricing strategy for the residential CCHP system is proposed. •A novel integrated demand response for the residential loads is proposed. •Two-stage operation optimization model for the CCHP microgrid is proposed. •Operations of typical days and annual scale of the CCHP microgrid are studied. -- Abstract: As the global energy crisis, environmental pollution, and global warming grow in intensity, increasing attention is being paid to combined cooling, heating, and power (CCHP) systems that realize high-efficiency cascade utilization of energy. This paper proposes a bilateral transaction mechanism between a residential CCHP system and a load aggregator (LA). The variable energy cost of the CCHP system is analyzed, based on which an energy pricing strategy for the CCHP system is proposed. Under this pricing strategy, the electricity price is constant, while the heat/cool price is ladder-shaped and dependent on the relationship between the electrical, heat, and cool loads. For the LA, an integrated demand response program is proposed that combines electricity-load shifting and a flexible heating/cooling supply, in which a thermodynamic model of buildings is used to determine the appropriate range of heating/cooling supply. Subsequently, a two-stage optimal dispatch model is proposed for the energy system that comprises the CCHP system and the LA. Case studies consisting of three scenarios (winter, summer, and transition seasons) are presented to demonstrate the effectiveness of the proposed approach, and the performance of the proposed pricing strategy is also evaluated by annual operation simulations.
Tang, Jiafu; Liu, Yang; Fung, Richard; Luo, Xinggang
2008-12-01
Manufacturers have a legal accountability to deal with industrial waste generated from their production processes in order to avoid pollution. Along with advances in waste recovery techniques, manufacturers may adopt various recycling strategies in dealing with industrial waste. With reuse strategies and technologies, byproducts or wastes will be returned to production processes in the iron and steel industry, and some waste can be recycled back to base material for reuse in other industries. This article focuses on a recovery strategies optimization problem for a typical class of industrial waste recycling process in order to maximize profit. There are multiple strategies for waste recycling available to generate multiple byproducts; these byproducts are then further transformed into several types of chemical products via different production patterns. A mixed integer programming model is developed to determine which recycling strategy and which production pattern should be selected with what quantity of chemical products corresponding to this strategy and pattern in order to yield maximum marginal profits. The sales profits of chemical products and the set-up costs of these strategies, patterns and operation costs of production are considered. A simulated annealing (SA) based heuristic algorithm is developed to solve the problem. Finally, an experiment is designed to verify the effectiveness and feasibility of the proposed method. By comparing a single strategy to multiple strategies in an example, it is shown that the total sales profit of chemical products can be increased by around 25% through the simultaneous use of multiple strategies. This illustrates the superiority of combinatorial multiple strategies. Furthermore, the effects of the model parameters on profit are discussed to help manufacturers organize their waste recycling network.
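A toy version of the SA heuristic this record applies, on a strategy-selection problem of the same flavor: binary choices of recycling strategies with setup costs and a capacity limit, with random restarts to guard against local optima. All profits, costs, and schedule parameters are invented for illustration and are not the paper's model.

```python
import math
import random

rng = random.Random(7)

# Hypothetical numbers (illustrative only, not from the paper):
PROFIT = [12.0, 9.0, 15.0, 7.0]   # marginal profit of each recycling strategy
SETUP = [5.0, 3.0, 8.0, 2.0]      # set-up cost if the strategy is used
CAPACITY = 3                      # at most 3 strategies can run simultaneously

def objective(x):
    """Net profit of a 0/1 strategy selection; infeasible picks are penalized."""
    if sum(x) == 0 or sum(x) > CAPACITY:
        return -1e9
    return (sum(p * xi for p, xi in zip(PROFIT, x))
            - sum(s * xi for s, xi in zip(SETUP, x)))

def anneal(n=len(PROFIT), T0=10.0, cooling=0.95, iters=500):
    x = tuple(rng.randint(0, 1) for _ in range(n))
    best, best_val, T = x, objective(x), T0
    for _ in range(iters):
        i = rng.randrange(n)
        y = tuple(b ^ 1 if j == i else b for j, b in enumerate(x))  # flip one bit
        d = objective(y) - objective(x)
        if d >= 0 or rng.random() < math.exp(d / T):  # Metropolis acceptance
            x = y
        if objective(x) > best_val:
            best, best_val = x, objective(x)
        T *= cooling
    return best, best_val

# A few independent restarts; keep the best selection found:
best_x, best_v = max((anneal() for _ in range(10)), key=lambda t: t[1])
```

With only four binary variables this is solvable by enumeration, of course; the SA machinery pays off at the paper's scale, where strategies, production patterns, and quantities multiply the state space.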
Optimal pacing strategy: from theoretical modelling to reality in 1500-m speed skating.
Hettinga, F J; De Koning, J J; Schmidt, L J I; Wind, N A C; Macintosh, B R; Foster, C
2011-01-01
Athletes are trained to choose the pace which is perceived to be correct during a specific effort, such as the 1500-m speed skating competition. The purpose of the present study was to "override" self-paced (SP) performance by instructing athletes to execute a theoretically optimal pacing profile. Seven national-level speed-skaters performed a SP 1500-m which was analysed by obtaining velocity (every 100 m) and body position (every 200 m) with video to calculate total mechanical power output. Together with gross efficiency and aerobic kinetics, obtained in separate trials, data were used to calculate aerobic and anaerobic power output profiles. An energy flow model was applied to SP, simulating a range of pacing strategies, and a theoretically optimal pacing profile was imposed in a second race (IM). Final time for IM was ∼2 s slower than SP. Total power distribution per lap differed, with a higher power over the first 300 m for IM (637.0 (49.4) vs 612.5 (50.0) W). Anaerobic parameters did not differ. The faster first lap resulted in a higher aerodynamic drag coefficient and perhaps a less effective push-off. Experienced athletes have a well-developed performance template, and changing pacing strategy towards a theoretically optimal fast start protocol had negative consequences on speed-skating technique and did not result in better performance.
DEFF Research Database (Denmark)
Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees
1993-01-01
In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples created with this method will optimally reflect the diversity of the languages of the world. On the basis of the internal structure of each genetic language tree, a measure is computed that reflects the linguistic diversity in the language families represented by these trees. This measure is used to determine how many languages from each phylum should be selected, given any required sample size.
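The final allocation step this record describes can be sketched as largest-remainder apportionment: integer sample sizes proportional to each phylum's diversity value, summing exactly to the required sample size. The phylum names and diversity values below are placeholders; the authors' actual measure is computed from the internal structure of each genetic tree, which is not reproduced here.

```python
def allocate(diversity, total):
    """Largest-remainder apportionment of `total` sample slots, proportional
    to each phylum's diversity value, always summing exactly to `total`."""
    s = float(sum(diversity.values()))
    quotas = {k: total * v / s for k, v in diversity.items()}
    alloc = {k: int(q) for k, q in quotas.items()}        # floor of each quota
    leftover = total - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda k: quotas[k] - alloc[k], reverse=True)
    for k in by_remainder[:leftover]:                     # biggest remainders win
        alloc[k] += 1
    return alloc

# Hypothetical diversity values for three phyla (illustrative only):
sizes = allocate({"Afro-Asiatic": 5.0, "Indo-European": 3.0, "Austronesian": 2.0}, 40)
```

Any non-negative diversity measure plugs in unchanged, so the same helper serves whatever tree-based measure is computed upstream.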
International Nuclear Information System (INIS)
Dumitrescu, Cosmin; Puzinauskas, Paulius V.; Agrawal, Ajay K.; Liu, Hao; Daly, Daniel T.
2009-01-01
Accurate chemical reaction mechanisms are critically needed to fully optimize combustion strategies for modern internal-combustion engines. These mechanisms are needed to predict emission formation and the chemical heat release characteristics of traditional direct-injection diesel combustion as well as recently developed and proposed variant combustion strategies. Experimental data acquired under conditions representative of such combustion strategies are required to validate these reaction mechanisms. This paper explores the feasibility of developing a fast sampling valve that extracts reactants at known locations in the spray reaction structure to provide these data. CHEMKIN software is used to establish the reaction timescales that dictate the required fast-sampling capabilities. The sampling process is analyzed using separate FLUENT and CHEMKIN calculations. The non-reacting FLUENT CFD calculations give a quantitative estimate of the sample quantity as well as the fluid mixing and thermal history. A CHEMKIN reactor network has been created that reflects these mixing and thermal time scales and allows a theoretical evaluation of the quenching process.
Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling
DEFF Research Database (Denmark)
Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper
2014-01-01
The multicanonical, or flat-histogram, method is a common technique for improving the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...
Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica
2016-09-01
Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure for choosing the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on the MCDM method. Two methods of multiple attribute decision making, i.e. SAW (simple additive weighting) and TOPSIS (technique for order preference by similarity to ideal solution), were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies, and the contribution of each of the six waste treatment options was valorized. The SAW analysis was used to obtain the sum characteristics for all the waste management treatment strategies, which were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. The proposed strategies were then ranked in tables and diagrams obtained from both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
Directory of Open Access Journals (Sweden)
Xueliang Huang
2013-01-01
Full Text Available As an important component of the smart grid, electric vehicles (EVs) could be a good measure against energy shortages and environmental pollution. A main way of supplying energy to EVs is battery swapping at a swap station. Based on the characteristics of the EV battery swap station, a coordinated charging optimal control strategy is investigated to smooth load fluctuation. The shuffled frog leaping algorithm (SFLA) is an optimization method inspired by the memetic evolution of a group of frogs seeking food. An improved shuffled frog leaping algorithm (ISFLA), with a reflecting method to deal with the boundary constraint, is proposed to obtain the solution of the optimal control strategy for coordinated charging. Based on the daily load of a certain area, numerical simulations, including a comparison of PSO and ISFLA, are carried out; the results show that the presented ISFLA can effectively lower the peak-valley difference and smooth the load profile, with a faster convergence rate and higher convergence precision.
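The "reflecting method" for boundary constraints mentioned in this record can be sketched as a mirror fold back into the feasible interval, rather than clipping a leap at the bound (the exact reflection scheme used in the paper may differ):

```python
def reflect(x, lo, hi):
    """Fold an out-of-bounds coordinate back into [lo, hi] by mirror
    reflection - a triangular-wave mapping - instead of clipping."""
    width = hi - lo
    y = (x - lo) % (2.0 * width)          # position within one full period
    return lo + (width - abs(y - width))  # mirror the second half-period

# A candidate frog position after a leap, folded back into the unit box:
frog = [1.2, -0.3, 0.5]
frog = [reflect(v, 0.0, 1.0) for v in frog]
```

Compared with clipping, reflection avoids piling candidate solutions up on the boundary, which helps population-based methods like SFLA keep exploring the interior.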
Sadler, Georgia Robins; Lee, Hau-Chen; Seung-Hwan Lim, Rod; Fullerton, Judith
2011-01-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089
Directory of Open Access Journals (Sweden)
Shuai Su
2016-02-01
Full Text Available Increasing attention is being paid to energy efficiency in metro systems to reduce operational cost and to advocate the sustainability of railway systems. Classical research has studied the energy-efficient operational strategy and the energy-efficient system design separately to reduce traction energy consumption. This paper aims to combine operational strategies and system design by analyzing how the infrastructure and vehicle parameters of metro systems influence operational traction energy consumption. Firstly, a solution approach to the optimal train control model is introduced, which is used to design the Optimal Train Control Simulator (OTCS). Then, based on the OTCS, the performance of some important energy-efficient system design strategies is investigated to reduce the trains' traction energy consumption, including reduction of the train mass, improvement of the kinematic resistance, the design of the energy-saving gradient, increasing the maximum traction and braking forces, introducing regenerative braking, and timetable optimization. The performances of these energy-efficient strategies are finally evaluated using the OTCS with practical operational data from the Beijing Yizhuang metro line. The proposed approach gives an example of quantitatively analyzing the energy reduction of different strategies in the system design procedure, which may help decision makers gain an overview of energy-efficient performance and then make decisions by balancing costs and benefits.
Paterakis, N.G.; Erdinç, O.; Bakirtzis, A.G.; Catalao, J.P.S.
2015-01-01
In this paper, a detailed home energy management system structure is developed to determine the optimal day-ahead appliance scheduling of a smart household under hourly pricing and peak power-limiting (hard and soft power limitation)-based demand response strategies. All types of controllable assets
Directory of Open Access Journals (Sweden)
Xing Liu
2014-12-01
Full Text Available Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Unlike traditional multithreaded OSs, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism which can decrease both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is the memory cost optimized in LiveOS, but so is the energy cost, achieved by using the multi-core "context aware" and multi-core "power-off/wakeup" energy conservation approaches. With these approaches, the energy cost of LiveOS can be reduced by more than 30% compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes.
Optimal waste-to-energy strategy assisted by GIS For sustainable solid waste management
International Nuclear Information System (INIS)
Tan, S T; Hashim, H; Lee, C T; Lim, J S; Kanniah, K D
2014-01-01
Municipal solid waste (MSW) management has become more complex and costly with rapid socio-economic development and an increased volume of waste. Planning a sustainable regional waste management strategy is a critical step for the decision maker. There is great potential for MSW to be used for the generation of renewable energy through waste incineration or landfilling with a gas capture system. However, due to the high processing cost and the cost of resource transportation and distribution throughout the waste collection stations and power plants, MSW is mostly disposed of in landfill. This paper presents an optimization model incorporating GIS data inputs for MSW management. The model can design a multi-period waste-to-energy (WTE) strategy to illustrate the economic potential and tradeoffs for MSW management under different scenarios. The model is capable of predicting the optimal generation, capacity, type of WTE conversion technology, and location for the operation and construction of new WTE power plants to satisfy the increased energy demand by 2025 in the most profitable way. The Iskandar Malaysia region was chosen as the model city for this study.
Optimal waste-to-energy strategy assisted by GIS For sustainable solid waste management
Tan, S. T.; Hashim, H.
2014-02-01
Municipal solid waste (MSW) management has become more complex and costly with rapid socio-economic development and an increased volume of waste. Planning a sustainable regional waste management strategy is a critical step for the decision maker. There is great potential for MSW to be used for the generation of renewable energy through waste incineration or landfilling with a gas capture system. However, due to the high processing cost and the cost of resource transportation and distribution throughout the waste collection stations and power plants, MSW is mostly disposed of in landfill. This paper presents an optimization model incorporating GIS data inputs for MSW management. The model can design a multi-period waste-to-energy (WTE) strategy to illustrate the economic potential and tradeoffs for MSW management under different scenarios. The model is capable of predicting the optimal generation, capacity, type of WTE conversion technology, and location for the operation and construction of new WTE power plants to satisfy the increased energy demand by 2025 in the most profitable way. The Iskandar Malaysia region was chosen as the model city for this study.
Optimal sampling in damage detection of flexural beams by continuous wavelet transform
International Nuclear Information System (INIS)
Basu, B; Broderick, B M; Montanari, L; Spagnoli, A
2015-01-01
Modern measurement techniques are increasingly capable of capturing the spatial displacement fields of deformed structures with high precision and in a quasi-continuous manner. This in turn has made vibration-based damage identification methods more effective and reliable for real applications. However, practical measurement and data processing issues still present barriers to the application of these methods in identifying several types of structural damage. This paper deals with spatial Continuous Wavelet Transform (CWT) damage identification methods in beam structures, with the aim of addressing the following key questions: (i) can the cost of damage detection be reduced by down-sampling? (ii) what is the minimum number of sampling intervals required for optimal damage detection? The first three free vibration modes of a cantilever and a simply supported beam with an edge open crack are numerically simulated. A thorough parametric study is carried out, taking into account the key parameters governing the problem, including the level of noise, crack depth and location, and the mechanical and geometrical parameters of the beam. The results are employed to assess the optimal number of sampling intervals for effective damage detection. (paper)
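The core of spatial CWT damage detection, locating a small slope discontinuity in a sampled deflection profile from the peak wavelet coefficient, can be sketched as follows; the beam profile, crack size, and wavelet choice are assumptions for illustration, and edge-distorted coefficients are discarded as is standard:

```python
import numpy as np

n = 512                           # sampling intervals along the beam span
x = np.linspace(0.0, 1.0, n)
crack = 0.4                       # assumed crack location (fraction of span)

# Toy deflection profile: a linear trend plus a small slope discontinuity
# (kink) at the crack, mimicking the local effect of an edge open crack.
signal = x + 0.1 * np.maximum(x - crack, 0.0)

scale = 8
t = np.arange(-4 * scale, 4 * scale + 1) / scale
psi = (1.0 - t**2) * np.exp(-t**2 / 2.0)    # Mexican-hat wavelet
psi -= psi.mean()                           # enforce zero mean so the trend cancels

coeffs = np.convolve(signal, psi, mode="same")
pad = 4 * scale                             # discard edge-distorted coefficients
est = x[pad + np.argmax(np.abs(coeffs[pad:n - pad]))]   # estimated crack location
```

Re-running with smaller `n` mimics the down-sampling question the paper studies: below some number of sampling intervals the kink no longer dominates the coefficients.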
Directory of Open Access Journals (Sweden)
Yuying Wang
2017-11-01
Full Text Available This paper presents an energy management strategy for plug-in hybrid electric vehicles (PHEVs) that not only minimizes energy consumption but also considers battery health. First, a battery model that can be applied to energy management optimization is given. In this model, battery health damage can be estimated at different states of charge (SOC) and temperatures of the battery pack. Then, because limiting battery health degradation inevitably increases energy consumption, a Pareto energy management optimization problem is formulated. This multi-objective optimal control problem is solved numerically using stochastic dynamic programming (SDP) and particle swarm optimization (PSO), satisfying the vehicle power demand while trading off energy consumption against battery health. The optimization solution is obtained offline using real historical traffic data and formed as mappings on the system operating states, so that it can be implemented online under actual driving conditions. Finally, simulation results from the GT-SUITE-based PHEV test platform demonstrate the effectiveness of the proposed multi-objective optimal control strategy.
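A deterministic toy version of the underlying trade-off shows how pricing battery wear changes the cost-to-go computed by dynamic programming; all powertrain numbers are hypothetical, and the paper itself uses SDP and PSO rather than this coarse grid:

```python
demand = [8.0, 12.0, 6.0, 10.0]        # kW requested at each time step (hypothetical)

def plan(health_weight):
    """Backward dynamic programming over a coarse SOC grid. `health_weight`
    prices battery-health damage against fuel cost (illustrative model)."""
    n_soc = 8                           # SOC grid: index 0 (low) .. 7 (full)
    value = [0.0] * n_soc               # terminal cost-to-go
    for k in reversed(range(len(demand))):
        new = [float("inf")] * n_soc
        for s in range(n_soc):
            for p in (0.0, 5.0, 10.0):  # battery power choices, kW
                drop = int(p // 5)      # SOC grid cells this choice consumes
                if p > demand[k] or s - drop < 0:
                    continue
                fuel = 0.3 * (demand[k] - p)            # engine covers the rest
                wear = health_weight * (p / 10.0) ** 2  # crude damage proxy
                new[s] = min(new[s], fuel + wear + value[s - drop])
        value = new
    return value

v_fuel_only = plan(0.0)   # ignore battery health entirely
v_balanced = plan(5.0)    # penalise battery-health damage as well
```

Higher SOC never costs more, and adding the wear term can only raise the optimal cost, which is the Pareto tension the abstract describes.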
Energy Technology Data Exchange (ETDEWEB)
Zarepisheh, M; Li, R; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ye, Y [Stanford University, Management Science and Engineering, Stanford, CA (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)
2014-06-01
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized together. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the apertures and updating the beam angles along the gradient. The algorithm continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan, while maintaining the same PTV coverage. Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the plan quality.
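Of the three techniques combined here, pattern search is the simplest to illustrate: it polls fixed offsets along each coordinate and shrinks the step when no poll improves the incumbent, which lets it make progress where gradients vanish or do not exist. A minimal sketch, not the authors' implementation:

```python
def pattern_search(f, x, step=1.0, tol=1e-6):
    """Derivative-free coordinate pattern search: poll +/-step along each
    axis, accept the first improving point, halve the step when none improves."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
        if not improved:
            step *= 0.5
    return x, fx

# A non-smooth objective (kink at p[0] = 1) where plain gradient steps can stall.
best, val = pattern_search(lambda p: abs(p[0] - 1.0) + (p[1] + 2.0) ** 2, [0.0, 0.0])
```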
International Nuclear Information System (INIS)
Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S
2014-01-01
Sampling and analyte enrichment strategies for ambient mass spectrometry.
Li, Xianjiang; Ma, Wen; Li, Hongmei; Ai, Wanpeng; Bai, Yu; Liu, Huwei
2018-01-01
Ambient mass spectrometry provides great convenience for fast screening and has shown promising potential in analytical chemistry. However, its relatively low sensitivity seriously restricts its practical utility in trace compound analysis. In this review, we summarize the sampling and analyte enrichment strategies coupled with nine representative modes of ambient mass spectrometry (desorption electrospray ionization, paper spray ionization, wooden-tip spray ionization, probe electrospray ionization, coated blade spray ionization, direct analysis in real time, desorption corona beam ionization, dielectric barrier discharge ionization, and atmospheric-pressure solids analysis probe) that have dramatically increased the detection sensitivity. We believe that these advances will promote routine use of ambient mass spectrometry. Graphical abstract: Scheme of sampling strategies for ambient mass spectrometry.
An optimal control strategy for collision avoidance of mobile robots in non-stationary environments
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An optimal control formulation of the problem of collision avoidance of mobile robots in environments containing moving obstacles is presented. Collision avoidance is guaranteed if the minimum distance between the robot and the objects is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Furthermore, time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. Simulation results verify the value of the proposed strategy.
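The velocity-scaling idea can be illustrated with a point robot on a straight nominal path and one crossing obstacle (geometry and speeds hypothetical): the robot keeps the same geometric path, but slowing it down lets the obstacle pass first, turning a collision into a safe pass.

```python
def min_distance(speed_scale, steps=200, dt=0.05):
    """Minimum robot-obstacle distance when the robot traverses its nominal
    straight-line path at a scaled speed (all geometry hypothetical)."""
    d_min = float("inf")
    for k in range(steps):
        t = k * dt
        robot = (speed_scale * t, 0.0)       # nominal path: along the x-axis
        obstacle = (5.0, 5.0 - t)            # obstacle crossing the path at x = 5
        d = ((robot[0] - obstacle[0]) ** 2 + (robot[1] - obstacle[1]) ** 2) ** 0.5
        d_min = min(d_min, d)
    return d_min

# At nominal speed the robot reaches x = 5 exactly when the obstacle does.
unsafe = min_distance(1.0)
# Slowing down along the SAME path lets the obstacle clear first.
safe = min_distance(0.6)
```

The optimal control problem in the paper additionally penalises deviation from the nominal timing, so the speed profile stays as close to the plan as the safety constraint allows.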
Different Optimal Control Strategies for Exploitation of Demand Response in the Smart Grid
DEFF Research Database (Denmark)
Zong, Yi; Bindner, Henrik W.; Gehrke, Oliver
2012-01-01
Achieving a Danish energy supply based on 100% renewable energy from combinations of wind, biomass, wave and solar power in 2050, and covering 50% of the Danish electricity consumption by wind power in 2025, requires coordinated management of large numbers of distributed and demand-response resources and intermittent renewable energy resources in the Smart Grid. This paper presents different optimal control algorithms (Genetic Algorithm-based and Model Predictive Control-based) that schedule controlled loads in the industrial and residential sectors based on dynamic prices and weather forecasts, respecting users' comfort settings, to meet an optimization objective such as maximum profit or minimum energy consumption. It is demonstrated in this work that the GA-based and MPC-based optimal control strategies are able to achieve load shifting for grid reliability and energy savings, including demand response.
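As a minimal stand-in for the GA/MPC schedulers (prices, comfort window, and load duration below are illustrative), even a greedy rule shows the load-shifting effect: run the flexible load in the cheapest hours of the comfort window.

```python
# Hourly day-ahead prices (illustrative) and a shiftable load needing 3 h
# of operation inside the user's comfort window (hours 8-20 inclusive).
prices = [0.9, 0.8, 0.7, 0.6, 0.5, 0.6, 0.9, 1.2, 1.4, 1.3, 1.1, 1.0,
          0.9, 0.8, 0.9, 1.1, 1.4, 1.6, 1.5, 1.2, 1.0, 0.9, 0.8, 0.7]

def schedule(hours_needed, window):
    """Greedy stand-in for the GA/MPC schedulers: pick the cheapest hours."""
    cheapest = sorted(window, key=lambda h: prices[h])[:hours_needed]
    return sorted(cheapest)

on_hours = schedule(3, range(8, 21))
cost = sum(prices[h] for h in on_hours)
baseline = sum(prices[h] for h in (17, 18, 19))   # naive evening operation
```

The GA and MPC formulations in the paper generalise this by handling forecasts, comfort constraints, and many loads jointly.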
Burger, J.M.S.; Hemerik, L.; Lenteren, van J.C.; Vet, L.E.M.
2004-01-01
We developed a dynamic state variable model for studying optimal host-handling strategies in the whitefly parasitoid Encarsia formosa Gahan (Hymenoptera: Aphelinidae). We assumed that (a) the function of host feeding is to gain nutrients that can be matured into eggs, (b) oogenesis is continuous and
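A dynamic state variable model of this kind can be sketched as a small backward recursion over egg load (the state) and remaining host encounters; all parameters below are illustrative, not taken from the paper:

```python
from functools import lru_cache

E_MAX, GAIN = 4, 2   # egg-storage limit and eggs matured per host feeding (toy values)

@lru_cache(maxsize=None)
def fitness(eggs, hosts_left):
    """Optimal expected fitness and best action for the current state.
    Each encountered host is either parasitised (immediate fitness 1, one
    egg spent) or fed upon (no immediate fitness, nutrients matured into eggs)."""
    if hosts_left == 0:
        return 0.0, None
    options = {}
    if eggs > 0:
        options["oviposit"] = 1.0 + fitness(eggs - 1, hosts_left - 1)[0]
    options["feed"] = fitness(min(eggs + GAIN, E_MAX), hosts_left - 1)[0]
    best = max(options, key=options.get)
    return options[best], best
```

With no eggs the model is forced to host-feed first; with a full egg load and few hosts remaining, oviposition dominates, matching the intuition behind state-dependent host handling.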
Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.
2017-01-01
Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. Spatial autocorrelation, the number of desired classes, and the form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between the improved speed of the sampling approaches and the loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
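The sampling-based classification idea can be sketched with quantile breaks (one common choropleth classifier): compute the breaks on a small random sample and measure how often the sampled scheme assigns each observation the same class as the full-data scheme. Data and sample sizes below are illustrative.

```python
import random

random.seed(42)
values = [random.gauss(100.0, 20.0) for _ in range(100000)]   # large attribute vector

def quantile_breaks(data, k):
    """Class breaks at the quantiles of (possibly sampled) data."""
    s = sorted(data)
    return [s[len(s) * i // k] for i in range(1, k)]

def classify(v, breaks):
    return sum(v > b for b in breaks)     # class index 0 .. k-1

k = 5
full_breaks = quantile_breaks(values, k)                        # exact, sorts all data
samp_breaks = quantile_breaks(random.sample(values, 2000), k)   # fast approximation

# Accuracy: share of observations assigned the same class by both schemes.
agree = sum(classify(v, full_breaks) == classify(v, samp_breaks)
            for v in values) / len(values)
```

The agreement rate here plays the role of the accuracy measure in the Monte Carlo simulations, and shrinking the sample exposes the speed/accuracy tradeoff the abstract describes.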
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica
2013-01-15
Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 deg C for 20 min. The samples were then injected into a system containing a C{sub 18} column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) at a flow rate of 1.0 mL min{sup -1} with detection at 330 nm. Under these conditions, RA concentrations were 50% higher compared to extractions without antioxidants (98.94 {+-} 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)
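The central composite design used in the optimization step has a standard structure that is easy to generate: 2^k factorial corners, 2k axial (star) points, and replicated centre runs. A sketch in coded units (the paper's actual factors and levels are not reproduced here):

```python
from itertools import product

def central_composite(k, alpha=None, center_runs=4):
    """Coded design points of a k-factor central composite design (CCD):
    2^k factorial corners, 2k axial (star) points at +/-alpha, centre runs."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable design
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            point = [0.0] * k
            point[i] = a
            axial.append(point)
    centres = [[0.0] * k for _ in range(center_runs)]
    return corners + axial + centres

design = central_composite(2)   # e.g. two factors such as temperature and time
```

Each coded point is then mapped to physical factor levels, the response (here, RA yield) is measured at every run, and a quadratic surface is fitted to locate the optimum.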
HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation