WorldWideScience

Sample records for determine optimal models

  1. Determining of the Optimal Device Lifetime using Mathematical Renewal Models

    Directory of Open Access Journals (Sweden)

    Knežo Dušan

    2016-05-01

Full Text Available The paper deals with the operation and equipment of machines in the process of organizing production. During operation, machines require maintenance and repair, and in the case of failure or wear they must be replaced with new ones. The process of replacing old machines with new ones is termed renewal. The qualitative aspects of the renewal process are studied by renewal theory, which is based mainly on probability theory and mathematical statistics. Device lifetimes are closely related to device renewal. The presented article focuses on the mathematical derivation of renewal models and on determining the optimal lifetime of devices from the viewpoint of expenditures on the renewal process.
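The renewal-cost trade-off described in this abstract can be illustrated with the classic age-replacement policy: renew a device at age T or on failure, whichever comes first, and choose the T that minimizes the long-run cost rate. The Weibull lifetime distribution and the cost figures below are illustrative assumptions, not the paper's data.

```python
import math

# Assumed Weibull lifetime (shape K > 1 means wear-out) -- illustrative only.
K, LAM = 2.5, 10.0          # shape, scale (years)
C_PLAN, C_FAIL = 1.0, 5.0   # cost of a planned renewal vs. a failure renewal

def survival(t):
    return math.exp(-((t / LAM) ** K))

def cost_rate(T, n=2000):
    """Long-run cost per unit time when renewing at age T (age-replacement policy)."""
    dt = T / n
    # expected cycle length = integral of the survival function over [0, T]
    cycle = sum(survival(i * dt) * dt for i in range(n))
    cycle_cost = C_PLAN * survival(T) + C_FAIL * (1 - survival(T))
    return cycle_cost / cycle

# grid search for the renewal age with the lowest cost rate
best_T = min((0.1 * i for i in range(1, 300)), key=cost_rate)
print(round(best_T, 1))
```

With wear-out lifetimes and failures five times as expensive as planned renewals, the optimal renewal age falls well below the mean lifetime, which is the qualitative conclusion such models deliver.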

  2. An Indirect Simulation-Optimization Model for Determining Optimal TMDL Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Feng Zhou

    2015-11-01

Full Text Available An indirect simulation-optimization model framework with enhanced computational efficiency and risk-based decision-making capability was developed to determine optimal total maximum daily load (TMDL) allocation under uncertainty. To convert the traditional direct simulation-optimization model into our indirect equivalent model framework, we proposed a two-step strategy: (1) application of interval regression equations derived by a Bayesian recursive regression tree (BRRT v2) algorithm, which approximates the original hydrodynamic and water-quality simulation models and accurately quantifies the inherent nonlinear relationship between nutrient load reductions and the credible interval of algal biomass at a given confidence level; and (2) incorporation of the calibrated interval regression equations into an uncertain optimization framework, which is further converted to our indirect equivalent framework by the enhanced-interval linear programming (EILP) method and provides approximate-optimal solutions at various risk levels. The proposed strategy was applied to the Swift Creek Reservoir's nutrient TMDL allocation (Chesterfield County, VA) to identify the minimum nutrient load allocations required from eight sub-watersheds to ensure compliance with user-specified chlorophyll criteria. Our results indicated that the BRRT-EILP model could identify critical sub-watersheds faster than the traditional approach and requires lower reduction of nutrient loadings compared to traditional stochastic simulation and trial-and-error (TAE) approaches. This suggests that our proposed framework outperforms traditional simulation-optimization models in optimal TMDL development and provides extreme and non-extreme tradeoff analysis under uncertainty for risk-based decision making.

  3. Model for determining and optimizing delivery performance in industrial systems

    Directory of Open Access Journals (Sweden)

    Fechete Flavia

    2017-01-01

Full Text Available Performance means achieving organizational objectives, regardless of their nature and variety, and even exceeding them. Improving performance is one of the major goals of any company. Achieving global performance means not only attaining economic performance; other functions must also be taken into account, such as quality, delivery, costs and even employee satisfaction. This paper aims to improve the delivery performance of an industrial system whose measured results were very low. The delivery performance analysis took into account all categories of performance indicators, such as on-time delivery, backlog efficiency and transport efficiency. The research focused on optimizing the delivery performance of the industrial system using linear programming. Modeling the delivery function as a linear program yielded the precise quantities to be produced and delivered each month by the industrial system in order to minimize transport cost, satisfy customer orders and control stock. The optimization led to a substantial improvement in all four performance indicators concerning deliveries.
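The monthly production-and-delivery plan described above is a transportation-type linear program: minimize total transport cost subject to plant capacity and customer orders. A minimal sketch with invented data (two plants, two customers; the paper's model is larger), using brute-force enumeration in place of an LP solver:

```python
# Toy instance with assumed data: 2 plants ship to 2 customers each month.
supply = [30, 40]            # production capacity of each plant
demand = [20, 50]            # customer orders, which must be met exactly
cost = [[4, 6],              # cost[i][j]: unit transport cost, plant i -> customer j
        [5, 3]]

best = None
# Enumerate feasible shipment plans (a stand-in for an LP solver on this tiny case).
for x00 in range(min(supply[0], demand[0]) + 1):
    for x01 in range(min(supply[0] - x00, demand[1]) + 1):
        x10, x11 = demand[0] - x00, demand[1] - x01   # remainder comes from plant 1
        if x10 + x11 > supply[1]:
            continue
        plan = [[x00, x01], [x10, x11]]
        c = sum(cost[i][j] * plan[i][j] for i in range(2) for j in range(2))
        if best is None or c < best[0]:
            best = (c, plan)

print(best)  # minimum cost and the shipment plan achieving it
```

At realistic scale the same formulation would be handed to an LP solver; the point here is only the structure of the constraints (capacity, demand) and the cost objective.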

  4. Optimal moment determination in POME-copula based hydrometeorological dependence modelling

    Science.gov (United States)

    Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi

    2017-07-01

Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework in hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copula. However, in previous POME-based studies, determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME in developing a coupled optimal moment-POME-copula framework to model hydrometeorological multivariate events. In this framework, margins (marginal distributions) are derived using POME, subject to optimal moment constraints. Then, various candidate copulas are constructed according to the derived margins, and finally the most probable one is determined based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and that the corresponding copulas show good statistical performance in correlation simulation. Moreover, the derived copulas, capturing patterns that traditional correlation coefficients cannot reflect, provide an efficient approach for other applied scenarios in hydrometeorological multivariate modelling.
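The copula-construction step can be illustrated with a minimal example: once the margins are fixed, the dependence parameter of a candidate copula can be estimated from rank correlation. For a Gaussian copula the standard relation is rho = sin(pi * tau / 2), where tau is Kendall's tau. The synthetic series below stand in for paired hydrometeorological records; they are not the Yangtze data.

```python
import math, random

random.seed(1)

# Assumed toy data: two positively dependent series standing in for
# streamflow and water level at one site (not the paper's dataset).
n = 400
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + 0.3 * random.gauss(0, 1) for zi in z]
y = [zi + 0.3 * random.gauss(0, 1) for zi in z]

def kendall_tau(a, b):
    """Rank correlation: (concordant - discordant) pairs over all pairs."""
    conc = disc = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    total = len(a) * (len(a) - 1) / 2
    return (conc - disc) / total

tau = kendall_tau(x, y)
# Gaussian-copula inversion of Kendall's tau (a moment-style estimator):
rho = math.sin(math.pi * tau / 2)
print(round(tau, 2), round(rho, 2))
```

Because tau depends only on ranks, this estimate is unaffected by whichever margins (POME-derived or otherwise) are later attached to the copula.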

  5. Method to determine the optimal constitutive model from spherical indentation tests

    Directory of Open Access Journals (Sweden)

    Tairui Zhang

    2018-03-01

Full Text Available The limitation of current indentation theories was investigated and a method to determine the optimal constitutive model through spherical indentation tests was proposed. Two constitutive models, the Power-law and the Linear-law, were used in Finite Element (FE) calculations, and then a set of indentation governing equations was established for each model. The load-depth data from the normal indentation depth was used to fit the best parameters in each constitutive model, while the data from the further loading part was compared with those from FE calculations, and the model that better predicted the further deformation was considered the optimal one. Moreover, a Young's modulus calculation model which took the previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration was also proposed to revise the original Sneddon-Pharr-Oliver model. The indentation results on six materials, 304, 321, SA508, SA533, 15CrMoR, and Fv520B, were compared with tensile ones, which validated the reliability of the revised E calculation model and the optimal constitutive model determination method in this study. Keywords: Optimal constitutive model, Spherical indentation test, Finite Element calculations, Young's modulus

  6. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    Science.gov (United States)

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation at relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. A limitation of the study is that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
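The allocation logic in this abstract (fund the most cost-effective program first, with diminishing returns capping each program's scale-up) can be sketched as a greedy marginal-benefit allocation. The square-root benefit curves and the coefficients below are invented placeholders, not the paper's calibrated values, though they reproduce the qualitative CBE-then-ART-then-PrEP ordering.

```python
import math

budget = 100.0
step = 1.0
# benefit(spend) = a * sqrt(spend): concave, i.e. diseconomies of scale.
# Effectiveness coefficients are assumptions for illustration only.
programs = {"CBE": 4.0, "ART": 3.0, "PrEP": 1.0}
spend = {p: 0.0 for p in programs}

def marginal(p):
    """Benefit gained by spending one more unit on program p."""
    a, s = programs[p], spend[p]
    return a * (math.sqrt(s + step) - math.sqrt(s))

remaining = budget
while remaining >= step:
    best = max(programs, key=marginal)   # next dollar goes where it helps most
    spend[best] += step
    remaining -= step

print({p: round(v) for p, v in spend.items()})
```

With concave benefits the greedy rule equalizes marginal benefit per dollar, so final spending is roughly proportional to the squared effectiveness coefficients.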

  7. Use of multilevel modeling for determining optimal parameters of heat supply systems

    Science.gov (United States)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding optimal parameters of a heat-supply system (HSS) is in ensuring the required throughput capacity of a heat network by determining pipeline diameters and characteristics and location of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. 
The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in

  8. Method to determine the optimal constitutive model from spherical indentation tests

    Science.gov (United States)

    Zhang, Tairui; Wang, Shang; Wang, Weiqiang

    2018-03-01

The limitation of current indentation theories was investigated and a method to determine the optimal constitutive model through spherical indentation tests was proposed. Two constitutive models, the Power-law and the Linear-law, were used in Finite Element (FE) calculations, and then a set of indentation governing equations was established for each model. The load-depth data from the normal indentation depth was used to fit the best parameters in each constitutive model while the data from the further loading part was compared with those from FE calculations, and the model that better predicted the further deformation was considered the optimal one. Moreover, a Young's modulus calculation model which took the previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration was also proposed to revise the original Sneddon-Pharr-Oliver model. The indentation results on six materials, 304, 321, SA508, SA533, 15CrMoR, and Fv520B, were compared with tensile ones, which validated the reliability of the revised E calculation model and the optimal constitutive model determination method in this study.

  9. Determining Optimal Decision Version

    Directory of Open Access Journals (Sweden)

    Olga Ioana Amariei

    2014-06-01

Full Text Available In this paper we start from the calculation of the product cost, applying the hour-machine cost method (THM) to each of the three cutting machines, namely: the plasma cutting machine, the combined cutting machine (plasma and water jet), and the water-jet cutting machine. Following the cost calculation, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decision version for manufacturing the product must be determined. To determine the optimal decision version, we first calculate the optimal version for each individual criterion, and then overall, using multi-attribute decision methods.
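The final step above, ranking alternatives that are each best on a different criterion, is a classic multi-attribute decision problem. A minimal simple additive weighting (SAW) sketch with invented scores and weights (the paper's actual THM costs and precision data are not reproduced here):

```python
# Illustrative simple additive weighting over three machines;
# all scores and weights are assumed, not the paper's data.
machines = ["plasma", "plasma+waterjet", "waterjet"]
# criteria: hourly cost (lower is better), precision and surface quality (higher is better)
scores = {
    "plasma":          {"cost": 40.0, "precision": 0.6, "quality": 0.5},
    "plasma+waterjet": {"cost": 55.0, "precision": 0.8, "quality": 0.8},
    "waterjet":        {"cost": 70.0, "precision": 0.9, "quality": 0.9},
}
weights = {"cost": 0.5, "precision": 0.3, "quality": 0.2}

def normalize(crit, value):
    """Min-max normalize to [0, 1]; invert cost, a 'smaller is better' criterion."""
    vals = [scores[m][crit] for m in machines]
    lo, hi = min(vals), max(vals)
    x = (value - lo) / (hi - lo)
    return 1 - x if crit == "cost" else x

totals = {m: sum(weights[c] * normalize(c, scores[m][c]) for c in weights)
          for m in machines}
best = max(totals, key=totals.get)
print(best, {m: round(v, 2) for m, v in totals.items()})
```

With these (assumed) numbers, the cheapest machine and the highest-quality machine each win one criterion, yet the combined machine wins overall, which is exactly the kind of outcome multi-attribute methods are meant to expose.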

  10. Determination of the optimal area of waste incineration in a rotary kiln using a simulation model.

    Science.gov (United States)

    Bujak, J

    2015-08-01

    The article presents a mathematical model to determine the flux of incinerated waste in terms of its calorific values. The model is applicable in waste incineration systems equipped with rotary kilns. It is based on the known and proven energy flux balances and equations that describe the specific losses of energy flux while considering the specificity of waste incineration systems. The model is universal as it can be used both for the analysis and testing of systems burning different types of waste (municipal, medical, animal, etc.) and for allowing the use of any kind of additional fuel. Types of waste incinerated and additional fuel are identified by a determination of their elemental composition. The computational model has been verified in three existing industrial-scale plants. Each system incinerated a different type of waste. Each waste type was selected in terms of a different calorific value. This allowed the full verification of the model. Therefore the model can be used to optimize the operation of waste incineration system both at the design stage and during its lifetime. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Determination of optimal reformer temperature in a reformed methanol fuel cell system using ANFIS models and numerical optimization methods

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl

    2015-01-01

In this work a method for choosing the optimal reformer temperature for a reformed methanol fuel cell system is presented, based on a case study of an H3 350 module produced by Serenergy A/S. The method is based on ANFIS models of the dependence of the reformer output gas composition on the reformer temperature and fuel flow, and of the dependence of the fuel cell voltage on the fuel cell temperature, current and anode supply gas CO content. These models are combined to give a matrix of system efficiencies at different fuel cell currents and reformer temperatures. This matrix is then used to find the reformer temperature which gives the highest efficiency for each fuel cell current. The average of this optimal efficiency curve is 32.11%, while the average efficiency achieved using the standard constant temperature is 30.64%, an increase of 1.47 percentage points. The gain in efficiency is 4 percentage...
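The matrix step described above reduces to a per-current argmax over candidate reformer temperatures. A sketch with invented efficiency values (in the study, the real matrix comes from the ANFIS models):

```python
# Toy (current, temperature) -> efficiency matrix; all values are placeholders.
temps = [60, 80, 100, 120]      # candidate reformer temperatures
currents = [10, 20, 30]         # fuel cell currents
eff = {
    (10, 60): 0.28, (10, 80): 0.31, (10, 100): 0.30, (10, 120): 0.27,
    (20, 60): 0.27, (20, 80): 0.32, (20, 100): 0.33, (20, 120): 0.29,
    (30, 60): 0.25, (30, 80): 0.30, (30, 100): 0.32, (30, 120): 0.31,
}

# For each current, pick the temperature with the highest system efficiency.
optimal = {i: max(temps, key=lambda t: eff[(i, t)]) for i in currents}
print(optimal)
```

The resulting current-to-temperature map is the "optimal efficiency curve" whose average the abstract compares against constant-temperature operation.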

  12. A Novel Scheme for Optimal Control of a Nonlinear Delay Differential Equations Model to Determine Effective and Optimal Administrating Chemotherapy Agents in Breast Cancer.

    Science.gov (United States)

    Ramezanpour, H R; Setayeshi, S; Akbari, M E

    2011-01-01

Determining an optimal and effective scheme for administering chemotherapy agents in breast cancer is the main goal of this research. The most important issue is the amount of drug or radiation administered in chemotherapy and radiotherapy to increase the patient's survival, because the therapy kills not only tumor cells but also some healthy tissue, causing serious damage. We investigate the effect of optimal drug scheduling for a breast cancer model that consists of nonlinear time-delay ordinary differential equations. A mathematical model of breast cancer tumors is discussed, and optimal control theory is then applied to find the optimal drug adjustment as an input control of the system. Finally, we use the Sensitivity Approach (SA) to solve the optimal control problem. The goal is to determine an optimal and effective scheme for administering the chemotherapy agent so that the tumor is eradicated while the immune system remains above a suitable level. Simulation results confirm the effectiveness of the proposed procedure. In this paper a new scheme is proposed to design a therapy protocol for chemotherapy in breast cancer: in contrast to traditional pulse drug delivery, a continuous process is offered and optimized according to optimal control theory for time-delay systems.

  13. A model based on stochastic dynamic programming for determining China's optimal strategic petroleum reserve policy

    International Nuclear Information System (INIS)

    Zhang Xiaobing; Fan Ying; Wei Yiming

    2009-01-01

    China's Strategic Petroleum Reserve (SPR) is currently being prepared. But how large the optimal stockpile size for China should be, what the best acquisition strategies are, how to release the reserve if a disruption occurs, and other related issues still need to be studied in detail. In this paper, we develop a stochastic dynamic programming model based on a total potential cost function of establishing SPRs to evaluate the optimal SPR policy for China. Using this model, empirical results are presented for the optimal size of China's SPR and the best acquisition and drawdown strategies for a few specific cases. The results show that with comprehensive consideration, the optimal SPR size for China is around 320 million barrels. This size is equivalent to about 90 days of net oil import amount in 2006 and should be reached in the year 2017, three years earlier than the national goal, which implies that the need for China to fill the SPR is probably more pressing; the best stockpile release action in a disruption is related to the disruption levels and expected continuation probabilities. The information provided by the results will be useful for decision makers.
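A stripped-down version of the stochastic dynamic programming formulation described above can be sketched with a tiny state space: the state is the stockpile level, the decision is next year's target level, and expected disruption losses penalize low reserves. All numbers are invented placeholders, not the paper's calibrated cost function.

```python
# Minimal backward-induction SDP sketch (toy numbers, not the paper's model).
P_DISRUPT = 0.1          # assumed yearly disruption probability
SHORTFALL_COST = 100.0   # penalty scale when a disruption hits a low reserve
HOLD_COST = 1.0          # per-unit yearly holding cost
BUY_COST = 2.0           # per-unit acquisition cost
LEVELS = range(0, 6)     # possible stockpile levels (units)
T = 10                   # planning horizon (years)

V = {s: 0.0 for s in LEVELS}   # terminal value function
policy = {}
for t in reversed(range(T)):
    newV = {}
    for s in LEVELS:
        best = None
        for s_next in LEVELS:                  # decision: next stockpile level
            buy = max(0, s_next - s)           # drawdown is free, acquisition is not
            cost = BUY_COST * buy + HOLD_COST * s_next
            # expected shortfall loss: disruptions are costly when the reserve is low
            cost += P_DISRUPT * SHORTFALL_COST * max(0, 3 - s_next) / 3
            cost += V[s_next]
            if best is None or cost < best[0]:
                best = (cost, s_next)
        newV[s] = best[0]
        policy[(t, s)] = best[1]
    V = newV

print(policy[(0, 0)])  # optimal first-year target level starting from an empty reserve
```

Even this toy version shows the structure of the paper's result: the optimal policy fills the reserve to a target level that balances acquisition and holding costs against expected disruption losses.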

  14. Han's model parameters for microalgae grown under intermittent illumination: Determined using particle swarm optimization.

    Science.gov (United States)

    Pozzobon, Victor; Perre, Patrick

    2018-01-21

This work provides a model and the associated set of parameters allowing microalgae population growth to be computed under intermittent lighting. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in the literature using the Particle Swarm Optimization method. In that work, the authors grew microalgae in excess of nutrients under flashing conditions. The light/dark cycles used in those experiments are quite close to those found in photobioreactors, i.e. ranging from several seconds to one minute. In this work, in addition to producing the set of parameters, the robustness of Particle Swarm Optimization was assessed. To do so, two different swarm initialization techniques were used, i.e. uniform and random distribution throughout the search space. Both yielded the same results. In addition, analysis of the swarm distribution reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be trusted to link light intensity to population growth rate. Furthermore, the set is capable of describing the effects of photodamage on population growth, hence accounting for the effect of light overexposure on algal growth. Copyright © 2017 Elsevier Ltd. All rights reserved.
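The fitting procedure described above can be sketched in miniature: a particle swarm searches parameter space to minimize the squared error between a model curve and data. The Monod-style light-response function and the synthetic dataset below are stand-ins for Han's model and the literature dataset; the PSO coefficients (inertia 0.7, cognitive and social weights 1.5) are common defaults, not the study's settings.

```python
import random

random.seed(0)

# Toy light-response curve standing in for Han's model (an assumption here).
def growth(intensity, mu_max, K):
    return mu_max * intensity / (K + intensity)

TRUE = (1.2, 150.0)   # hidden "true" parameters used to generate synthetic data
data = [(I, growth(I, *TRUE)) for I in (20, 50, 100, 200, 400, 800)]

def sse(params):
    mu, K = params
    return sum((growth(I, mu, K) - y) ** 2 for I, y in data)

# Swarm initialized uniformly over the search space, as in the study.
n, dims = 30, 2
lo, hi = [0.1, 1.0], [5.0, 1000.0]
pos = [[random.uniform(lo[d], hi[d]) for d in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=sse)[:]

for _ in range(200):
    for i in range(n):
        for d in range(dims):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
            pos[i][d] = min(max(pos[i][d], lo[d]), hi[d])   # keep inside bounds
        if sse(pos[i]) < sse(pbest[i]):
            pbest[i] = pos[i][:]
            if sse(pos[i]) < sse(gbest):
                gbest = pos[i][:]

print([round(v, 1) for v in gbest])
```

Running the swarm from different random initializations and checking that it lands in the same basin is exactly the robustness test the abstract describes.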

  15. Parameters Determination of Yoshida Uemori Model Through Optimization Process of Cyclic Tension-Compression Test and V-Bending Springback

    Directory of Open Access Journals (Sweden)

    Serkan Toros

Full Text Available In recent years, studies on enhancing the prediction capability of sheet metal forming simulations have increased remarkably. Among the models used in finite element simulations, the yield criteria and hardening models are of great importance for predicting formability and springback. The required model parameters are determined using several test results, i.e. tensile, compression and biaxial stretching tests (bulge test) and cyclic tests (tension-compression). In this study, the Yoshida-Uemori (combined isotropic and kinematic hardening) model is used to assess the performance of springback prediction. The model parameters are determined by optimization of cyclic-test finite element simulations. In addition to the cyclic tests alone, the model parameters are also evaluated by optimizing both the cyclic and V-die bending simulations together. The springback angle predictions with model parameters obtained by optimizing both the cyclic and V-die bending simulations are found to match the experimental results better than those obtained from the cyclic tests alone, although the cyclic-only results are still close to the experimental ones.

  16. A network society communicative model for optimizing the Refugee Status Determination (RSD procedures

    Directory of Open Access Journals (Sweden)

    Andrea Pacheco Pacífico

    2013-01-01

Full Text Available This article recommends a new way to improve Refugee Status Determination (RSD) procedures by proposing a network-society communicative model based on active involvement and dialogue among all implementing partners. This model, built on proposals from Castells, Habermas, Apel, Chimni, and Betts, would be mediated by the United Nations High Commissioner for Refugees (UNHCR), whose role would be modeled after the practice of the International Committee of the Red Cross (ICRC).

  17. Using an optimal CC-PLSR-RBFNN model and NIR spectroscopy for the starch content determination in corn

    Science.gov (United States)

    Jiang, Hao; Lu, Jiangang

    2018-05-01

    Corn starch is an important material which has been traditionally used in the fields of food and chemical industry. In order to enhance the rapidness and reliability of the determination for starch content in corn, a methodology is proposed in this work, using an optimal CC-PLSR-RBFNN calibration model and near-infrared (NIR) spectroscopy. The proposed model was developed based on the optimal selection of crucial parameters and the combination of correlation coefficient method (CC), partial least squares regression (PLSR) and radial basis function neural network (RBFNN). To test the performance of the model, a standard NIR spectroscopy data set was introduced, containing spectral information and chemical reference measurements of 80 corn samples. For comparison, several other models based on the identical data set were also briefly discussed. In this process, the root mean square error of prediction (RMSEP) and coefficient of determination (Rp2) in the prediction set were used to make evaluations. As a result, the proposed model presented the best predictive performance with the smallest RMSEP (0.0497%) and the highest Rp2 (0.9968). Therefore, the proposed method combining NIR spectroscopy with the optimal CC-PLSR-RBFNN model can be helpful to determine starch content in corn.
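The two evaluation metrics used above are easy to state concretely: RMSEP is the root mean square error over the prediction set, and Rp2 is the corresponding coefficient of determination. A minimal computation on invented reference/predicted starch values (not the 80-sample corn dataset):

```python
import math

# Assumed toy prediction-set values (% starch); illustrative only.
y_true = [62.1, 63.5, 64.2, 65.0, 66.3]
y_pred = [62.0, 63.6, 64.1, 65.2, 66.2]

n = len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

# RMSEP: root mean square error of prediction
rmsep = math.sqrt(ss_res / n)

# Rp2: 1 - residual sum of squares / total sum of squares
mean = sum(y_true) / n
ss_tot = sum((t - mean) ** 2 for t in y_true)
r2 = 1 - ss_res / ss_tot

print(round(rmsep, 3), round(r2, 4))
```

Lower RMSEP and higher Rp2 on the same prediction set are the criteria by which the CC-PLSR-RBFNN model is ranked against the alternatives.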

  18. DETERMINATION ALGORITHM OF OPTIMAL GEOMETRICAL PARAMETERS FOR COMPONENTS OF FREIGHT CARS ON THE BASIS OF GENERALIZED MATHEMATICAL MODELS

    Directory of Open Access Journals (Sweden)

    O. V. Fomin

    2013-10-01

Full Text Available Purpose. To present the features and an application example of the proposed algorithm for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models, realized using a computer. Methodology. The developed approach to the search for optimal geometrical parameters can be described as determining the optimal decision from a selected set of possible variants. Findings. The presented application example of the proposed algorithm proved its operational capacity and efficiency. Originality. The determination procedure of optimal geometrical parameters for freight car components on the basis of generalized mathematical models was formalized in the paper. Practical value. Practical introduction of the research results for universal open cars makes it possible to reduce the tare weight of their design and accordingly to increase the carrying capacity by almost 100 kg, with improved strength characteristics. Taking into account the size of the car fleet, this will provide a considerable economic effect in production and operation. The proposed approach is oriented toward widely distributed software packages (for example, Microsoft Excel), which are used by the technical services of most enterprises, and does not require additional capital investment (acquisition of specialized programs and the corresponding technical staff training). This proves the correctness of the research direction. The proposed algorithm can be used for the solution of other optimization tasks on the basis of generalized mathematical models.

  19. Optimization modeling with spreadsheets

    CERN Document Server

    Baker, Kenneth R

    2015-01-01

    An accessible introduction to optimization analysis using spreadsheets Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that il

  20. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    Science.gov (United States)

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
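The non-uniqueness reported above can be illustrated directly: when two muscles have nearly proportional moment-arm curves, very different force splits produce nearly identical joint strength curves, so noisy strength data cannot separate them. The knee moment-arm functions below are hypothetical, chosen only to make this point; they are not the study's model.

```python
import math

def joint_torque(angle, f1, f2):
    """Isometric knee-extension torque from two extensors (hypothetical moment arms, m)."""
    r1 = 0.040 * (1 + 0.25 * math.cos(angle))
    r2 = 0.038 * (1 + 0.23 * math.cos(angle))
    return f1 * r1 + f2 * r2

angles = [math.radians(a) for a in range(30, 121, 10)]
# Two very different maximal-force parameter sets (N)...
curve_a = [joint_torque(a, 3000.0, 1000.0) for a in angles]
curve_b = [joint_torque(a, 1000.0, 3105.0) for a in angles]

# ...produce strength curves that differ by less than 1% at every angle,
# well inside typical strength-measurement noise.
max_rel_diff = max(abs(ta - tb) / ta for ta, tb in zip(curve_a, curve_b))
print(round(100 * max_rel_diff, 2))
```

Any optimizer fitting only the total strength curve, genetic or otherwise, would treat these two parameter sets as practically equivalent, which is why the individual muscle parameters recovered in the study were far less accurate than the overall curve fit.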

  1. The Potential for Zinc Stable Isotope Techniques and Modelling to Determine Optimal Zinc Supplementation

    Directory of Open Access Journals (Sweden)

    Cuong D. Tran

    2015-05-01

    Full Text Available It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve optimal dose, frequency, length of administration, timing of delivery to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease.

  2. The Potential for Zinc Stable Isotope Techniques and Modelling to Determine Optimal Zinc Supplementation

    Science.gov (United States)

    Tran, Cuong D.; Gopalsamy, Geetha L.; Mortimer, Elissa K.; Young, Graeme P.

    2015-01-01

    It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve optimal dose, frequency, length of administration, timing of delivery to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease. PMID:26035248

  3. Determining the optimal screening interval for type 2 diabetes mellitus using a risk prediction model.

    Directory of Open Access Journals (Sweden)

    Andrei Brateanu

    Full Text Available Progression to diabetes mellitus (DM) is variable and the screening time interval is not well defined. The American Diabetes Association and US Preventive Services Task Force suggest screening every 3 years, but evidence is limited. The objective of the study was to develop a model to predict the probability of developing DM and suggest a risk-based screening interval. We included non-diabetic adult patients screened for DM in the Cleveland Clinic Health System if they had at least two measurements of glycated hemoglobin (HbA1c): an initial one less than 6.5% (48 mmol/mol) in 2008, and another between January 2009 and December 2013. Cox proportional hazards models were created. The primary outcome was DM, defined as HbA1c greater than 6.4% (46 mmol/mol). The optimal rescreening interval was chosen based on the predicted probability of developing DM. Of 5084 participants, 100 (4.4%) of the 2281 patients with normal HbA1c and 772 (27.5%) of the 2803 patients with prediabetes developed DM within 5 years. Factors associated with developing DM included HbA1c (HR per 0.1 unit increase 1.20; 95% CI, 1.13-1.27), family history (HR 1.31; 95% CI, 1.13-1.51), smoking (HR 1.18; 95% CI, 1.03-1.35), triglycerides (HR 1.01; 95% CI, 1.00-1.03), alanine aminotransferase (HR 1.07; 95% CI, 1.03-1.11), body mass index (HR 1.06; 95% CI, 1.01-1.11), age (HR 0.95; 95% CI, 0.91-0.99) and high-density lipoproteins (HR 0.93; 95% CI, 0.90-0.95). Five percent of patients in the highest risk tertile developed DM within 8 months, while it took 35 months for 5% of the middle tertile to develop DM. Only 2.4% of the patients in the lowest tertile developed DM within 5 years. A risk prediction model employing commonly available data can be used to guide screening intervals. Based on equal intervals for equal risk, patients in the highest risk category could be rescreened after 8 months, while those in the intermediate and lowest risk categories could be rescreened after 3 and 5 years, respectively.

  4. A scientific model to determine the optimal radiographer staffing component in a nuclear medicine department

    International Nuclear Information System (INIS)

    Shipanga, A.N.; Ellmann, A.

    2004-01-01

    Full text: Introduction: Nuclear medicine in South Africa is developing fast. Much has changed since the constitution of a scientific model for determining an optimum number of radiographer posts in a Nuclear Medicine department in the late 1980s. Aim: The aim of this study was to ascertain whether the number of radiographers required by a Nuclear Medicine department can still be determined according to the norms established in 1988. Methods: A quantitative study using a non-experimental evaluation design was conducted to determine the ratios between current radiographer workload and staffing norms. The workload ratios were analysed using the procedure statistics of the Nuclear Medicine department at Tygerberg Hospital. Radiographers provided data about their activities related to patient procedures, including information about the condition of the patients, activities in the radiopharmaceutical laboratory, and patient-related administrative tasks. These were factored into an equation relating this data to working hours, including vacation and sick leave. The calculation of Activity Standards and an annual Standard Workload was used to finally calculate the staffing requirements for a Nuclear Medicine department. Results: Preliminary data confirmed that the old staffing norms cannot be used in a modern Nuclear Medicine department. Protocols for several types of study have changed, including the additional acquisition of tomographic studies. Interest in the use of time-consuming non-imaging studies has been revived and should be factored into the equation. Conclusions: All Nuclear Medicine departments in South Africa, where the types of studies performed have changed over the past years, should look carefully at their radiographer staffing ratio to ascertain whether the number of radiographers needed is adequate for the current workload. (author)

  5. Development of a System Model for Determining Optimal Personnel Interaction Strategy in a Production Industry

    Directory of Open Access Journals (Sweden)

    B. Kareem

    2017-03-01

    Full Text Available Manufacturing organizations have become more complex in recent time as a result of technological advances. Communication among production workers operating in an environment marked by increased organizational complexity may require planning for the economically appropriate selection of network channels/media with enhanced productivity. This paper examines traditional and modern communication channels (media and their comparative advantages over one another in their adoption in manufacturing organizations. In this framework, six media (human messengers, mobile-phones, intranet, fixed-internet, mobile-internet, and private branch exchange [PBX] phone systems were subjected to analyses using five identified network patterns (all-channel, chain, Y, wheel, and circle of interactions in manufacturing organizations. Costs, benefits, and the utility of the channels were integrated into the model and utilized to determine the most sustainable media that could enhance productivity in industry. The developed model was implemented using expert data/information collected from the plastic production industry. The results of an availability assessment showed that the enhancement of productivity could be fully achieved by utilizing mobile phones and internet networks, but when considering overall utility, only mobile phones could bring about the desired productivity with 0.59 probability. The findings suggest that the system developed is robust in revealing how productivity might be affected by means of communication among industrial workers.

  6. Determination of the optimal dose reduction level via iterative reconstruction using 640-slice volume chest CT in a pig model.

    Directory of Open Access Journals (Sweden)

    Xingli Liu

    Full Text Available To determine the optimal dose reduction level of an iterative reconstruction technique for paediatric chest CT in a pig model. 27 infant pigs underwent 640-slice volume chest CT with 80 kVp and different mAs. The automatic exposure control technique was used, and the noise index was set to SD10 (Group A, routine dose) and SD12.5, SD15, SD17.5 and SD20 (Groups B to E) to progressively reduce dose. Group A was reconstructed with filtered back projection (FBP), and Groups B to E were reconstructed using iterative reconstruction (IR). Objective and subjective image quality (IQ) among groups were compared to determine an optimal radiation reduction level. The noise and signal-to-noise ratio (SNR) in Group D had no statistically significant difference from those in Group A (P = 1.0). The scores of subjective IQ in Group A were not significantly different from those in Group D (P > 0.05). There were no obvious statistical differences in the objective and subjective index values among the subgroups (small, medium and large) of Group D. The effective dose (ED) of Group D was 58.9% lower than that of Group A (0.20±0.05 mSv vs 0.48±0.10 mSv, P < 0.001). In infant pig chest CT, iterative reconstruction can provide diagnostic image quality while reducing the dose by 58.9%.

  7. DATA MINING METHODOLOGY FOR DETERMINING THE OPTIMAL MODEL OF COST PREDICTION IN SHIP INTERIM PRODUCT ASSEMBLY

    Directory of Open Access Journals (Sweden)

    Damir Kolich

    2016-03-01

    Full Text Available In order to accurately predict costs of the thousands of interim products that are assembled in shipyards, it is necessary to use skilled engineers to develop detailed Gantt charts for each interim product separately which takes many hours. It is helpful to develop a prediction tool to estimate the cost of interim products accurately and quickly without the need for skilled engineers. This will drive down shipyard costs and improve competitiveness. Data mining is used extensively for developing prediction models in other industries. Since ships consist of thousands of interim products, it is logical to develop a data mining methodology for a shipyard or any other manufacturing industry where interim products are produced. The methodology involves analysis of existing interim products and data collection. Pre-processing and principal component analysis is done to make the data “user-friendly” for later prediction processing and the development of both accurate and robust models. The support vector machine is demonstrated as the better model when there are a lower number of tuples. However as the number of tuples is increased to over 10000, then the artificial neural network model is recommended.

  8. Modeling and analysis for determining optimal suppliers under stochastic lead times

    DEFF Research Database (Denmark)

    Abginehchi, Soheil; Farahani, Reza Zanjirani

    2010-01-01

    systems. The item acquisition lead times of suppliers are random variables. Backorder is allowed and shortage cost is charged based on not only per unit in shortage but also per time unit. Continuous review (s,Q) policy has been assumed. When the inventory level depletes to a reorder level, the total...... order is split among n suppliers. Since the suppliers have different characteristics, the quantity ordered to different suppliers may be different. The problem is to determine the reorder level and quantity ordered to each supplier so that the expected total cost per time unit, including ordering cost...
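The benefit of order splitting described in this record can be illustrated with a small Monte Carlo sketch (not the authors' model). Assuming i.i.d. exponential lead times with mean 1.0 for every supplier, it shows why splitting an order among several suppliers shortens the wait for the first partial delivery:

```python
import random

random.seed(42)

def mean_first_arrival(n_suppliers, trials=10_000):
    """Average time until the FIRST partial delivery arrives when an
    order is split among n suppliers with i.i.d. exponential lead
    times of mean 1.0 (a hypothetical distribution, for illustration)."""
    total = 0.0
    for _ in range(trials):
        total += min(random.expovariate(1.0) for _ in range(n_suppliers))
    return total / trials

single = mean_first_arrival(1)   # one supplier receives the whole order
split3 = mean_first_arrival(3)   # order split among three suppliers

print(single, split3)
```

With exponential lead times the minimum of n arrivals has mean 1/n, so the three-way split cuts the expected time to the first delivery to roughly a third; the full model in the record additionally weighs ordering and shortage costs against this benefit when choosing the reorder level and split quantities.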

  9. Optimization Modeling with Spreadsheets

    CERN Document Server

    Baker, Kenneth R

    2011-01-01

    This introductory book on optimization (mathematical programming) includes coverage on linear programming, nonlinear programming, integer programming and heuristic programming; as well as an emphasis on model building using Excel and Solver.  The emphasis on model building (rather than algorithms) is one of the features that makes this book distinctive. Most books devote more space to algorithmic details than to formulation principles. These days, however, it is not necessary to know a great deal about algorithms in order to apply optimization tools, especially when relying on the sp

  10. Subthreshold SPICE Model Optimization

    Science.gov (United States)

    Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy

    2011-04-01

    The first step in integrated circuit design is the simulation of said design in software to verify proper functionality and design requirements. Properties of the process are provided by fabrication foundries in the form of SPICE models. These SPICE models contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately model the saturation region. This is fine for most users, but when operating devices in the subthreshold region the models are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths are fabricated and the electrical test data are collected. From the data collected, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models, the circuit designer is able to simulate circuit designs for the subthreshold region accurately.

  11. Comparison of adsorption equilibrium and kinetic models for a case study of pharmaceutical active ingredient adsorption from fermentation broths: parameter determination, simulation, sensitivity analysis and optimization

    Directory of Open Access Journals (Sweden)

    B. Likozar

    2012-09-01

    Full Text Available Mathematical models for a batch process were developed to predict concentration distributions for an active ingredient (vancomycin adsorption on a representative hydrophobic-molecule adsorbent, using differently diluted crude fermentation broth with cells as the feedstock. The kinetic parameters were estimated using the maximization of the coefficient of determination by a heuristic algorithm. The parameters were estimated for each fermentation broth concentration using four concentration distributions at initial vancomycin concentrations of 4.96, 1.17, 2.78, and 5.54 g l−¹. In sequence, the models and their parameters were validated for fermentation broth concentrations of 0, 20, 50, and 100% (v/v by calculating the coefficient of determination for each concentration distribution at the corresponding initial concentration. The applicability of the validated models for process optimization was investigated by using the models as process simulators to optimize the two process efficiencies.
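The estimation step described in this record (heuristic maximization of the coefficient of determination) can be sketched on a much simpler, invented first-order decay model; the rate constant, sampling times and search range below are illustrative assumptions, not the paper's vancomycin adsorption model:

```python
import math
import random

random.seed(0)

# Synthetic "measured" data from a first-order decay c(t) = c0 * exp(-k t);
# k_true and c0 are invented, and the search should recover k_true.
k_true, c0 = 0.35, 5.54
times = [0, 1, 2, 4, 8, 16]
data = [c0 * math.exp(-k_true * t) for t in times]

def r_squared(k):
    """Coefficient of determination of the model with rate constant k."""
    pred = [c0 * math.exp(-k * t) for t in times]
    ss_res = sum((d - p) ** 2 for d, p in zip(data, pred))
    mean = sum(data) / len(data)
    ss_tot = sum((d - mean) ** 2 for d in data)
    return 1.0 - ss_res / ss_tot

# Heuristic (random-search) maximization of R^2 over a plausible k range
best_k, best_r2 = None, -float("inf")
for _ in range(5000):
    k = random.uniform(0.01, 1.0)
    r2 = r_squared(k)
    if r2 > best_r2:
        best_k, best_r2 = k, r2

print(best_k, best_r2)
```

The paper's models are multi-parameter batch adsorption kinetics fitted against four concentration distributions, but the principle is the same: propose parameters, score them by R², and keep the best.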

  12. Optimal Path Determination for Flying Vehicle to Search an Object

    Science.gov (United States)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle to search for an object is proposed. The background of the paper is the control of an air vehicle searching for an object; optimal path determination is one of the most popular problems in optimization. The paper describes a control design model for a flying vehicle searching for an object and focuses on the optimal path used to reach it. An optimal control model is used so that the vehicle moves along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design, and here it is chosen so that the air vehicle reaches the object as soon as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; this convexity guarantees the existence of an optimal control. Some simulations of an optimal path for a flying vehicle searching for an object are also presented. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.
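For context, the Pontryagin Minimum Principle invoked in this record has the standard textbook form below (a generic statement, not the paper's specific vehicle dynamics or cost functional):

```latex
% State dynamics and cost functional to be minimized
\dot{x} = f(x, u, t), \qquad J(u) = \int_{0}^{T} L(x, u, t)\, dt
% Hamiltonian
H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\top} f(x, u, t)
% The optimal control minimizes H pointwise along the optimal trajectory
H(x^{*}, u^{*}, \lambda^{*}, t) \le H(x^{*}, u, \lambda^{*}, t)
  \quad \text{for all admissible } u
% Costate (adjoint) equation
\dot{\lambda} = -\frac{\partial H}{\partial x}
```

Convexity of the cost functional, which the record emphasizes, is what guarantees that the pointwise minimizer of the Hamiltonian exists.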

  13. Mathematical programming (MP) model to determine optimal transportation infrastructure for geologic CO2 storage in the Illinois basin

    Science.gov (United States)

    Rehmer, Donald E.

    Results from a mathematical programming model were examined to 1) determine the least-cost options for infrastructure development of geologic storage of CO2 in the Illinois Basin, and 2) analyze a number of CO2 emission tax and oil price scenarios in order to implement development of the least-cost pipeline networks for distribution of CO2. The model, using mixed integer programming, tested the hypothesis of whether viable EOR sequestration sites can serve as nodal points or hubs to expand the CO2 delivery infrastructure to more distal locations from the emissions sources. This is in contrast to previous model results based on a point-to-point model having direct pipeline segments from each CO2 capture site to each storage sink. There is literature on the spoke and hub problem that relates to airline scheduling as well as maritime shipping. A large-scale ship assignment problem that utilized integer linear programming was run on Excel Solver and described by Mourao et al. (2001). Other literature indicates that aircraft assignment in spoke and hub routes can also be achieved using integer linear programming (Daskin and Panayotopoulos, 1989; Hane et al., 1995). The distribution concept is basically the reverse of the "tree and branch" type (Rothfarb et al., 1970) gathering systems for oil and natural gas that industry has been developing for decades. Model results indicate that the inclusion of hubs as variables in the model yields lower transportation costs for geologic carbon dioxide storage than previous models of point-to-point infrastructure geometries. Tabular results and GIS maps of the selected scenarios illustrate that EOR sites can serve as nodal points or hubs for distribution of CO2 to distal oil field locations as well as deeper saline reservoirs. Revenue amounts and capture percentages both show an improvement over solutions when the hubs are not allowed to come into the solution. Other results indicate that geologic

  14. Determination of the Prosumer's Optimal Bids

    Science.gov (United States)

    Ferruzzi, Gabriella; Rossi, Federico; Russo, Angela

    2015-12-01

    This paper considers a microgrid connected to a medium-voltage (MV) distribution network. It is assumed that the microgrid, which is managed by a prosumer, operates in a competitive environment and participates in the day-ahead market. Then, as the first step of the short-term management problem, the prosumer must determine the bids to be submitted to the market. The offer strategy is based on the application of an optimization model, which is solved for different hourly price profiles of energy exchanged with the main grid. The proposed procedure is applied to a microgrid, and four of its configurations were analyzed. The configurations consider the presence of thermoelectric units that only produce electricity, a boiler and/or cogeneration power plants for the thermal loads, and an electric storage system. The numerical results confirmed the numerous theoretical considerations that have been made.

  15. Modeling investor optimism with fuzzy connectives

    NARCIS (Netherlands)

    Lovric, M.; Almeida, R.J.; Kaymak, U.; Spronk, J.; Carvalho, J.P.; Dubois, D.; Kaymak, U.; Sousa, J.M.C.

    2009-01-01

    Optimism or pessimism of investors is one of the important characteristics that determine the investment behavior in financial markets. In this paper, we propose a model of investor optimism based on a fuzzy connective. The advantage of the proposed approach is that the influence of different levels

  16. Determining an optimal supply chain strategy

    Directory of Open Access Journals (Sweden)

    Intaher M. Ambe

    2012-11-01

    Full Text Available In today’s business environment, many companies want to become efficient and flexible, but have struggled, in part, because they have not been able to formulate optimal supply chain strategies. Often this is as a result of insufficient knowledge about the costs involved in maintaining supply chains and the impact of the supply chain on their operations. Hence, these companies find it difficult to manufacture at a competitive cost and respond quickly and reliably to market demand. Mismatched strategies are the root cause of the problems that plague supply chains, and supply-chain strategies based on a one-size-fits-all strategy often fail. The purpose of this article is to suggest instruments to determine an optimal supply chain strategy. This article, which is conceptual in nature, provides a review of current supply chain strategies and suggests a framework for determining an optimal strategy.

  17. A Model for Determining Optimal Governance Structure in DoD Acquisition Projects in a Performance-Based Environment

    Science.gov (United States)

    2010-04-30

    combating market dynamism (Aldrich, 1979; Child, 1972), which is a result of evolving technology, shifting prices, or variance in product availability... primary determinant of behavior (Baron & Hannan, 1994). Concepts such as trust play a prominent role in network explanations (Achrol & Kotler, 1999). ...and that governance relies on combinations of market, social, and/or authority-based mechanisms more than any one of these exclusively. • In a

  18. Risk modelling in portfolio optimization

    Science.gov (United States)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize investment risk: its objective is to minimize the portfolio risk while achieving a target rate of return, with variance as the risk measure. The purpose of this study is to compare the composition and performance of the optimal portfolio under the mean-variance model with those of an equally weighted portfolio, in which the proportions invested in each asset are equal. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio differ, and that the mean-variance optimal portfolio performs better, giving a higher performance ratio than the equally weighted portfolio.
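As a toy illustration of the comparison in this record, the minimum-variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1) and can be checked against equal weights; the covariance numbers below are invented for the example:

```python
import numpy as np

# Illustrative covariance matrix for three assets (made-up numbers)
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.08, 0.03],
                [0.01, 0.03, 0.12]])

ones = np.ones(3)
inv = np.linalg.inv(cov)
w_mv = inv @ ones / (ones @ inv @ ones)  # minimum-variance weights
w_eq = ones / 3                          # equally weighted portfolio

var_mv = w_mv @ cov @ w_mv               # portfolio variance, optimal
var_eq = w_eq @ cov @ w_eq               # portfolio variance, equal weights

print(w_mv, var_mv, var_eq)
```

The strict variance reduction holds whenever the equal-weight vector is not itself the minimum-variance solution; adding a target-return constraint turns this into the full mean-variance problem the record studies.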

  19. Model Risk in Portfolio Optimization

    Directory of Open Access Journals (Sweden)

    David Stefanovits

    2014-08-01

    Full Text Available We consider a one-period portfolio optimization problem under model uncertainty. For this purpose, we introduce a measure of model risk. We derive analytical results for this measure of model risk in the mean-variance problem assuming we have observations drawn from a normal variance mixture model. This model allows for heavy tails, tail dependence and leptokurtosis of marginals. The results show that mean-variance optimization is seriously compromised by model uncertainty, in particular, for non-Gaussian data and small sample sizes. To mitigate these shortcomings, we propose a method to adjust the sample covariance matrix in order to reduce model risk.

  20. An optimization model for metabolic pathways.

    Science.gov (United States)

    Planes, F J; Beasley, J E

    2009-10-15

    Different mathematical methods have emerged in the post-genomic era to determine metabolic pathways. These methods can be divided into stoichiometric methods and path finding methods. In this paper we detail a novel optimization model, based upon integer linear programming, to determine metabolic pathways. Our model links reaction stoichiometry with path finding in a single approach. We test the ability of our model to determine 40 annotated Escherichia coli metabolic pathways. We show that our model is able to determine 36 of these 40 pathways in a computationally effective manner.
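The path-finding component of such models can be illustrated with a toy sketch. The reaction network below is hypothetical, and breadth-first search stands in for the paper's integer linear programming formulation; it only shows what "fewest reaction steps between two metabolites" means:

```python
from collections import deque

# Tiny hypothetical reaction network: metabolite -> reachable metabolites
network = {
    "glc": ["g6p"],
    "g6p": ["f6p", "6pg"],
    "f6p": ["fbp"],
    "fbp": ["pyr"],
    "6pg": ["pyr"],
    "pyr": [],
}

def shortest_pathway(start, target):
    """Breadth-first search: a pathway with the fewest reaction steps."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in network.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_pathway("glc", "pyr"))
```

The paper's contribution is to couple this kind of path search with reaction stoichiometry in a single integer-linear-programming model, so that the returned pathways are also stoichiometrically balanced.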

  1. A methodology for determining optimal durations for the use of contaminated crops as fodder following a nuclear accident using a dynamic food-chain model

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Han, Moon Hee; Cho, Gyuseong

    2000-01-01

    A methodology for determining optimal durations for the use of contaminated crops as fodder was designed based on cost-benefit analysis. Illustrative results of the application of this methodology to pigs are presented for the hypothetical deposition of radionuclides on August 15 when a number of crops are fully developed in Korean agricultural conditions. For investigating the appropriateness of the use of contaminated crops as fodder, the net benefit from this action was compared with the imposition of a ban on human consumption of contaminated crops without alternative use. The time-dependent radionuclide concentrations in crops and pork after the deposition event were predicted from a dynamic food-chain model DYNACON. The net benefit from the actions was quantitatively evaluated in terms of cost equivalent of the doses incurred or averted and the monetary costs needed to implement the action. The optimal duration for the use of contaminated crops as fodder depended on a number of factors such as radionuclide, variety of crops fed as fodder and duration of the action. Such action was more cost effective for ¹³⁷Cs deposition than for ⁹⁰Sr or ¹³¹I deposition. The use of contaminated crops as fodder can be an effective response to a public reluctance to consume contaminated crops

  2. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set

    Directory of Open Access Journals (Sweden)

    Jinshui Zhang

    2017-04-01

    Full Text Available This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels, which were located neighboring the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.

  3. The determination of optimal climate policy

    International Nuclear Information System (INIS)

    Aaheim, Asbjoern

    2010-01-01

    Analyses of the costs and benefits of climate policy, such as the Stern Review, evaluate alternative strategies to reduce greenhouse gas emissions by requiring that the cost of emission cuts in each and every year has to be covered by the associated value of avoided damage, discounted by an exogenously chosen rate. An alternative is to optimize abatement programmes towards a stationary state, where the concentrations of greenhouse gases are stabilized and shadow prices, including the rate of discount, are determined endogenously. This paper examines the properties of optimized stabilization. It turns out that the implications for the evaluation of climate policy are substantial if compared with evaluations of the present value of costs and benefits based on exogenously chosen shadow prices. Comparisons of discounted costs and benefits tend to exaggerate the importance of the choice of discount rate, while ignoring the importance of future abatement costs, which turns out to be essential for the optimal abatement path. Numerical examples suggest that early action may be more beneficial than indicated by comparisons of costs and benefits discounted by a rate chosen on the basis of current observations. (author)

  4. Enhanced index tracking modelling in portfolio optimization

    Science.gov (United States)

    Lam, W. S.; Hj. Jaaman, Saiful Hafizah; Ismail, Hamizun bin

    2013-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. It is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the risk. Enhanced index tracking aims to generate excess return over the return achieved by the index, without purchasing all of the stocks that make up the index, by establishing an optimal portfolio. The objective of this study is to determine the optimal portfolio composition and performance by using a weighted model in enhanced index tracking. The weighted model focuses on the trade-off between the excess return and the risk. The results of this study show that the optimal portfolio under the weighted model is able to outperform the Malaysian market index, the Kuala Lumpur Composite Index, because of its higher mean return and lower risk without purchasing all the stocks in the market index.
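A minimal sketch of the kind of weighted (scalarized) objective this record describes, trading off mean excess return against tracking risk over a coarse weight grid; the return data, trade-off weight and three-stock universe are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical monthly returns: a market index and three stocks that
# roughly track it (toy data, not from the study)
index_r = rng.normal(0.005, 0.02, 60)
stocks = index_r[:, None] + rng.normal(0.001, 0.01, (60, 3))

lam = 0.5  # trade-off weight between excess return and tracking risk

best, best_w = -np.inf, None
grid = np.arange(0.0, 1.0 + 1e-9, 0.05)
for w1 in grid:            # coarse search over the weight simplex
    for w2 in grid:
        w3 = 1.0 - w1 - w2
        if w3 < -1e-9:
            continue
        w = np.array([w1, w2, max(w3, 0.0)])
        port = stocks @ w
        excess = (port - index_r).mean()   # mean excess return
        risk = (port - index_r).std()      # tracking error
        score = lam * excess - (1 - lam) * risk
        if score > best:
            best, best_w = score, w

print(best_w, best)
```

Varying `lam` traces out the trade-off between excess return and tracking risk; a real implementation would replace the grid search with a proper optimizer.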

  5. Pyomo optimization modeling in Python

    CERN Document Server

    Hart, William E; Watson, Jean-Paul; Woodruff, David L; Hackebeil, Gabriel A; Nicholson, Bethany L; Siirola, John D

    2017-01-01

    This book provides a complete and comprehensive guide to Pyomo (Python Optimization Modeling Objects) for beginning and advanced modelers, including students at the undergraduate and graduate levels, academic researchers, and practitioners. Using many examples to illustrate the different techniques useful for formulating models, this text beautifully elucidates the breadth of modeling capabilities that are supported by Pyomo and its handling of complex real-world applications. This second edition provides an expanded presentation of Pyomo’s modeling capabilities, providing a broader description of the software that will enable the user to develop and optimize models. Introductory chapters have been revised to extend tutorials; chapters that discuss advanced features now include the new functionalities added to Pyomo since the first edition including generalized disjunctive programming, mathematical programming with equilibrium constraints, and bilevel programming. Pyomo is an open source software package fo...

  6. Determination of absorption changes from moments of distributions of times of flight of photons: optimization of measurement conditions for a two-layered tissue model.

    Science.gov (United States)

    Liebert, Adam; Wabnitz, Heidrun; Elster, Clemens

    2012-05-01

    Time-resolved near-infrared spectroscopy allows for depth-selective determination of absorption changes in the adult human head that facilitates separation between cerebral and extra-cerebral responses to brain activation. The aim of the present work is to analyze which combinations of moments of measured distributions of times of flight (DTOF) of photons and source-detector separations are optimal for the reconstruction of absorption changes in a two-layered tissue model corresponding to extra- and intra-cerebral compartments. To this end we calculated the standard deviations of the derived absorption changes in both layers by considering photon noise and a linear relation between the absorption changes and the DTOF moments. The results show that the standard deviation of the absorption change in the deeper (superficial) layer increases (decreases) with the thickness of the superficial layer. It is confirmed that for the deeper layer the use of higher moments, in particular the variance of the DTOF, leads to an improvement. For example, when measurements at four different source-detector separations between 8 and 35 mm are available and a realistic thickness of the upper layer of 12 mm is assumed, the inclusion of the change in mean time of flight, in addition to the change in attenuation, leads to a reduction of the standard deviation of the absorption change in the deeper tissue layer by a factor of 2.5. A reduction by another 4% can be achieved by additionally including the change in variance.
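The linear-model step described in this record can be sketched as an ordinary least-squares problem: stack the changes in attenuation, mean time of flight and variance into a vector and invert a sensitivity matrix mapping layer absorption changes to moment changes. All numbers below are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical sensitivity matrix: rows are changes in the measured
# DTOF moments (attenuation A, mean time of flight <t>, variance V) at
# one source-detector separation; columns are absorption changes in
# the superficial and deep layer. Values are invented.
S = np.array([[0.80, 0.30],   # dA   per (d mu_sup, d mu_deep)
              [0.20, 0.90],   # d<t> per (d mu_sup, d mu_deep)
              [0.05, 0.60]])  # dV   per (d mu_sup, d mu_deep)

true_dmu = np.array([0.01, 0.02])  # superficial, deep absorption changes
dm = S @ true_dmu                  # noise-free "measured" moment changes

# Least-squares recovery of the two-layer absorption changes
est, *_ = np.linalg.lstsq(S, dm, rcond=None)
print(est)
```

The paper's analysis additionally propagates photon noise through this inversion to obtain the standard deviations of the recovered absorption changes, which is what drives the choice of moments and separations.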

  7. Venturi scrubber modelling and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Viswanathan, S [National Univ., La Jolla, CA (United States). School of Engineering and Technology; Ananthanarayanan, N.V. [National Univ. of Singapore (Singapore). Dept. of Chemical and Environmental Engineering; Azzopardi, B.J. [Nottingham Univ., Nottingham (United Kingdom). Dept. of Chemical Engineering

    2005-04-01

This study presented a method to maintain the efficiency of venturi scrubbers in removing fine particulates during gas cleaning operations while minimizing pressure drop. Venturi scrubbers are capable of meeting stringent emission standards. In order to choose the optimal method for predicting pressure drop, 4 established models were compared for their accuracy of prediction and simplicity in application. The enhanced algorithm optimizes Pease-Anthony type venturi scrubber performance by predicting the minimum pressure drop required to achieve the desired collection efficiency. This was accomplished by optimizing the key operating and design parameters such as liquid-to-gas ratio, throat gas velocity, number of nozzles, nozzle diameter and throat aspect ratio. Two of the 4 established models were extended with an empirical algorithm to better predict pressure drop in the venturi throat. Model results were validated against experimental data. The optimization algorithm considers the non-uniformity in liquid distribution. It can be applied to cylindrical and rectangular Pease-Anthony type scrubbers. It offers an effective, systematic and accurate method to optimize the performance of new and existing scrubbers. 54 refs., 5 figs.
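
The constrained minimization at the heart of such an optimization can be illustrated with a coarse grid search: minimize pressure drop over liquid-to-gas ratio and throat velocity subject to an efficiency target. The efficiency and pressure-drop relations below are monotone toy stand-ins, not the paper's validated models.

```python
import math

def optimize_scrubber(eff_target):
    """Grid search over liquid-to-gas ratio L and throat gas velocity V for the
    minimum pressure drop meeting a collection-efficiency target."""
    best = None
    for L in [0.5 + 0.1 * i for i in range(16)]:        # l/m3, 0.5 .. 2.0
        for V in [40.0 + 5.0 * j for j in range(17)]:   # m/s, 40 .. 120
            eff = 1.0 - math.exp(-0.002 * L * V)               # toy efficiency
            dp = 0.5 * 1.2 * V * V * (1.0 + 0.8 * L) / 1000.0  # toy drop, kPa
            if eff >= eff_target and (best is None or dp < best[0]):
                best = (dp, L, V)
    return best

dp_min, L_opt, V_opt = optimize_scrubber(0.30)
```

A production version would substitute the paper's pressure-drop and collection-efficiency models and a proper optimizer for the grid.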

  8. Energy group structure determination using particle swarm optimization

    International Nuclear Information System (INIS)

    Yi, Ce; Sjoden, Glenn

    2013-01-01

Highlights: ► Particle swarm optimization is applied to determine broad group structure. ► A graph representation of the broad group structure problem is introduced. ► The approach is tested on a fuel-pin model. - Abstract: Multi-group theory is widely applied for the energy domain discretization when solving the Linear Boltzmann Equation. To reduce the computational cost, fine group cross section libraries are often down-sampled into broad group cross section libraries. Cross section data collapsing generally involves two steps: first, the broad group structure has to be determined; second, a weighting scheme is used to evaluate the broad group cross section library based on the fine group cross section data and the broad group structure. A common scheme is to average the fine group cross sections weighted by the fine group flux. Cross section collapsing techniques have been intensively researched. However, most studies use a pre-determined group structure, often based on experience, to divide the neutron energy spectrum into thermal, epi-thermal, fast, etc. energy ranges. In this paper, a swarm intelligence algorithm, particle swarm optimization (PSO), is applied to optimize the broad group structure. A graph representation of the broad group structure determination problem is introduced, and the swarm intelligence algorithm is used to solve the graph model. The effectiveness of the approach is demonstrated using a fuel-pin model.
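
A minimal particle swarm optimizer for a boundary-placement problem of this kind might look as follows; the ten-value "fine-group" array and the within-group variance objective are illustrative stand-ins for a real cross section collapsing criterion.

```python
import random

FLUX = [5, 4, 4, 3, 1, 1, 1, 6, 7, 7]  # toy fine-group data

def collapse_cost(x):
    """Within-group sum of squared deviations for boundaries encoded in [0, 1]."""
    cuts = sorted(int(round(v * (len(FLUX) - 1))) for v in x)
    cost, start = 0.0, 0
    for c in cuts + [len(FLUX)]:
        if c > start:
            grp = FLUX[start:c]
            m = sum(grp) / len(grp)
            cost += sum((f - m) ** 2 for f in grp)
            start = c
    return cost

def pso(obj, dim, n=20, iters=100, lo=0.0, hi=1.0, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pcost = [obj(x) for x in xs]
    gi = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[gi][:], pcost[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            c = obj(xs[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, xs[i][:]
                if c < gcost:
                    gcost, gbest = c, xs[i][:]
    return gbest, gcost

best_cuts, best_cost = pso(collapse_cost, dim=2)
```

Here two boundaries split ten fine groups into three broad groups; the swarm settles on cuts that separate the low-flux band from the two high-flux bands.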

  9. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
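
A real-coded genetic algorithm with elitism, of the general kind used here, can be sketched as follows. The two-parameter "model" standing in for the surface flux transport simulation, and the synthetic observations, are hypothetical.

```python
import math
import random

def model(x, p):
    # Hypothetical two-parameter stand-in for the flux transport simulation:
    # p[0] ~ flow amplitude, p[1] ~ diffusive decay rate.
    return p[0] * math.sin(x) * math.exp(-p[1] * x)

XS = [0.1 * i for i in range(60)]
OBS = [model(x, (0.7, 0.3)) for x in XS]  # synthetic "observations"

def misfit(p):
    return sum((model(x, p) - o) ** 2 for x, o in zip(XS, OBS))

def ga_fit(objective, bounds, pop=30, gens=60, seed=2):
    rng = random.Random(seed)
    popl = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    history = []
    for _ in range(gens):
        popl.sort(key=objective)
        history.append(objective(popl[0]))
        nxt = [popl[0][:], popl[1][:]]          # elitism: keep the two best
        while len(nxt) < pop:
            p1, p2 = rng.sample(popl[:10], 2)   # select among the fittest
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # blend crossover
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < 0.2:          # gaussian mutation
                    child[d] = min(hi, max(lo,
                                   child[d] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        popl = nxt
    return min(popl, key=objective), history

best_params, history = ga_fit(misfit, [(0.0, 2.0), (0.0, 1.0)])
```

In the paper the objective is a latitude-weighted misfit between observed and simulated butterfly diagrams rather than this toy sum of squares.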

  10. Optimal Strategy and Business Models

    DEFF Research Database (Denmark)

    Johnson, Peter; Foss, Nicolai Juul

    2016-01-01

    This study picks up on earlier suggestions that control theory may further the study of strategy. Strategy can be formally interpreted as an idealized path optimizing heterogeneous resource deployment to produce maximum financial gain. Using standard matrix methods to describe the firm Hamiltonia...... variable of firm path, suggesting in turn that the firm's business model is the codification of the application of investment resources used to control the strategic path of value realization....

  11. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
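
The averaging rule itself is compact: normalize the model evidences and take the evidence-weighted sum of the models' predictions. A numerically stable sketch (the evidences and predictions are made up):

```python
import math

def model_average(log_evidences, predictions):
    """Weight each model's prediction by its normalized evidence."""
    m = max(log_evidences)
    raw = [math.exp(le - m) for le in log_evidences]  # stable softmax
    z = sum(raw)
    weights = [r / z for r in raw]
    averaged = sum(w * p for w, p in zip(weights, predictions))
    return weights, averaged

# Two hypothetical models predicting the probability of reward at a location:
weights, p_avg = model_average([-3.0, -5.0], [0.9, 0.2])
```

The better-evidenced model dominates, but the weaker one still pulls the averaged prediction toward its own, which is the behavioural signature the paper discusses.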

  12. Modeling of Salivary Production Recovery After Radiotherapy Using Mixed Models: Determination of Optimal Dose Constraint for IMRT Planning and Construction of Convenient Tools to Predict Salivary Function

    International Nuclear Information System (INIS)

    Ortholan, Cecile; Chamorey, Emmanuel Phar; Benezery, Karen; Thariat, Juliette; Dassonville, Olivier; Poissonnet, Gilles; Bozec, Alexandre; Follana, Philippe; Peyrade, Frederique; Sudaka, Anne; Gerard, Jean Pierre; Bensadoun, Rene Jean

    2009-01-01

Purpose: The mathematical relationship between the dose to the parotid glands and salivary gland production needs to be elucidated. This study, which used data from patients enrolled in a French prospective study assessing the benefit of intensity-modulated radiotherapy (RT), sought to elaborate a convenient and original model of salivary recovery. Methods and Materials: Between January 2001 and December 2004, 44 patients were included (35 with oropharyngeal and 9 with nasopharyngeal cancer). Of the 44 patients, 24 were treated with intensity-modulated RT, 17 with three-dimensional conformal RT, and 2 with two-dimensional RT. Stimulated salivary production was collected for ≤24 months after RT. The salivary production data, time of follow-up, and dose to the parotid gland were modeled using a mixed model. Several models were developed to assess the best-fitting variable for the dose level to the parotid gland. Results: Models developed with the dose to the contralateral parotid fit the data slightly better than those with the dose to both parotids, suggesting that contralateral and ipsilateral parotid glands are not functionally equivalent even with the same dose level to the glands. The best predictive dose-value variable for salivary flow recovery was the volume of the contralateral parotid gland receiving >40 Gy. Conclusion: The results of this study show that the recommendation of a dose constraint for intensity-modulated RT planning should be established at the volume of the contralateral parotid gland receiving >40 Gy rather than the mean dose. For complete salivary production recovery after 24 months, the volume of the contralateral parotid gland receiving >40 Gy should be <33%. Our results permitted us to establish two convenient tools to predict the saliva production recovery function according to the dose received by the contralateral parotid gland.

  13. Simplified ejector model for control and optimization

    International Nuclear Information System (INIS)

    Zhu Yinhai; Cai Wenjian; Wen Changyun; Li Yanzhong

    2008-01-01

In this paper, a simple yet effective ejector model for real-time control and optimization of an ejector system is proposed. Firstly, a fundamental model for calculation of the ejector entrainment ratio at critical working conditions is derived by one-dimensional analysis and the shock circle model. Then, based on thermodynamic principles and the lumped parameter method, the fundamental ejector model is simplified to yield a hybrid ejector model. The model is very simple, requiring only two or three parameters and measurement of two variables to determine the ejector performance. Furthermore, the procedures for on-line identification of the model parameters using linear and non-linear least squares methods are also presented. Compared with existing ejector models, the solution of the proposed model is much easier, without coupled equations and iterative computations. Finally, the effectiveness of the proposed model is validated against published experimental data. Results show that the model is accurate and robust and gives a better match to the real performance of ejectors over the entire operating range than existing models. This model is expected to have wide applications in real-time control and optimization of ejector systems.
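
The linear branch of the parameter identification can be sketched with ordinary least squares via the normal equations; the two-parameter entrainment-ratio law and the calibration data below are hypothetical, not the paper's hybrid model.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical calibration data: entrainment ratio vs. a measured pressure
# ratio, generated from a known linear law for demonstration.
pressure_ratio = [1.2, 1.5, 1.8, 2.1, 2.4]
entrainment = [0.65 - 0.15 * x for x in pressure_ratio]
a, b = fit_linear(pressure_ratio, entrainment)
```

With noise-free data the fit recovers the generating coefficients exactly; with real measurements the same normal equations return the least-squares estimates.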

  14. Optimizing UV Index determination from broadband irradiances

    Science.gov (United States)

    Tereszchuk, Keith A.; Rochon, Yves J.; McLinden, Chris A.; Vaillancourt, Paul A.

    2018-03-01

    A study was undertaken to improve upon the prognosticative capability of Environment and Climate Change Canada's (ECCC) UV Index forecast model. An aspect of that work, and the topic of this communication, was to investigate the use of the four UV broadband surface irradiance fields generated by ECCC's Global Environmental Multiscale (GEM) numerical prediction model to determine the UV Index. The basis of the investigation involves the creation of a suite of routines which employ high-spectral-resolution radiative transfer code developed to calculate UV Index fields from GEM forecasts. These routines employ a modified version of the Cloud-J v7.4 radiative transfer model, which integrates GEM output to produce high-spectral-resolution surface irradiance fields. The output generated using the high-resolution radiative transfer code served to verify and calibrate GEM broadband surface irradiances under clear-sky conditions and their use in providing the UV Index. A subsequent comparison of irradiances and UV Index under cloudy conditions was also performed. Linear correlation agreement of surface irradiances from the two models for each of the two higher UV bands covering 310.70-330.0 and 330.03-400.00 nm is typically greater than 95 % for clear-sky conditions with associated root-mean-square relative errors of 6.4 and 4.0 %. However, underestimations of clear-sky GEM irradiances were found on the order of ˜ 30-50 % for the 294.12-310.70 nm band and by a factor of ˜ 30 for the 280.11-294.12 nm band. This underestimation can be significant for UV Index determination but would not impact weather forecasting. Corresponding empirical adjustments were applied to the broadband irradiances now giving a correlation coefficient of unity. From these, a least-squares fitting was derived for the calculation of the UV Index. The resultant differences in UV indices from the high-spectral-resolution irradiances and the resultant GEM broadband irradiances are typically within 0
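
A multiplicative empirical adjustment of the kind described, chosen by least squares so the corrected broadband values best match the high-resolution reference, reduces to a one-line estimator. The irradiance numbers are invented for illustration:

```python
def band_correction(model_vals, ref_vals):
    """Least-squares multiplicative correction k minimizing sum (k*g - h)^2."""
    return (sum(g * h for g, h in zip(model_vals, ref_vals))
            / sum(g * g for g in model_vals))

# Hypothetical clear-sky irradiances where the broadband model underestimates
# the high-spectral-resolution reference by a constant factor of about 0.6:
reference = [12.0, 15.5, 18.2, 9.7]
broadband = [0.6 * v for v in reference]
k = band_correction(broadband, reference)
```

For a purely multiplicative bias the estimator recovers the reciprocal factor, after which corrected and reference values agree and their correlation is unity, as reported above.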

  15. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated and for the present design variables related to the Boiler Volume and the Boiler load Gradient (i.e. firing rate...... on the boiler) have been defined. Furthermore a number of constraints related to: minimum and maximum boiler load gradient, minimum boiler size, Shrinking and Swelling and Steam Space Load have been defined. For defining the constraints related to the required boiler volume a dynamic model for simulating the boiler...... performance has been developed. Outputs from the simulations are shrinking and swelling of water level in the drum during for example a start-up of the boiler, these figures combined with the requirements with respect to allowable water level fluctuations in the drum defines the requirements with respect to drum

  16. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    , and the total stress level (i.e. stresses introduced due to internal pressure plus stresses introduced due to temperature gradients) must always be kept below the allowable stress level. In this way, the increased water-/steam space that should allow for better dynamic performance, in the end causes limited...... freedom with respect to dynamic operation of the plant. By means of an objective function including as well the price of the plant as a quantification of the value of dynamic operation of the plant an optimization is carried out. The dynamic model of the boiler plant is applied to define parts...

  17. Modeling and optimization of potable water network

    Energy Technology Data Exchange (ETDEWEB)

    Djebedjian, B.; Rayan, M.A. [Mansoura Univ., El-Mansoura (Egypt); Herrick, A. [Suez Canal Authority, Ismailia (Egypt)

    2000-07-01

Software was developed to optimize the design of water distribution systems and pipe networks. It was based on a mathematical model treating looped networks while satisfying all imposed constraints such as pipe diameter and nodal pressure. The optimum network configuration and cost are determined considering parameters such as pipe diameter, flow rate, corresponding pressure and hydraulic losses. It must be understood that the minimum cost is relative to the objective function selected; the determination of the proper objective function often depends on the operating policies of a particular company. The solution was obtained using a non-linear optimization technique. To solve the optimal network design problem, the model was derived using the sequential unconstrained minimization technique (SUMT) of Fiacco and McCormick, which decreased the number of iterations required. The pipe diameters initially assumed were successively adjusted to correspond to existing commercial pipe diameters. The technique was then applied to a two-loop network without pumps or valves. Fed by gravity, it comprised eight pipes, each 1000 m long. The first evaluation of the method proved satisfactory. As with other methods, it failed to find the global optimum. In the future, research efforts will be directed to the optimization of networks with pumps and reservoirs. 24 refs., 3 tabs., 1 fig.
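
SUMT replaces the constrained problem with a sequence of unconstrained ones whose penalty weight shrinks each round. A sketch on a toy one-variable pipe-sizing problem, where a coarse grid search stands in for a proper inner solver and all numbers are illustrative:

```python
def sumt_minimize(f, g, lo, hi, r0=1.0, shrink=0.25, outer=8, steps=2000):
    """Interior-penalty (SUMT-style) minimization of f subject to g(x) > 0."""
    x_best, r = None, r0
    for _ in range(outer):
        def penalized(x):
            gx = g(x)
            return float('inf') if gx <= 0 else f(x) + r / gx
        grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
        x_best = min(grid, key=penalized)   # inner unconstrained solve
        r *= shrink                         # tighten toward the boundary
    return x_best

def cost(d):          # pipe cost grows with diameter d
    return 100.0 * d

def headroom(d):      # head-loss margin K/d^5 <= h_max, must stay positive
    return 10.0 - 1.0 / d ** 5

d_opt = sumt_minimize(cost, headroom, 0.2, 2.0)
```

As the penalty weight shrinks, the iterates approach the active head-loss constraint from the feasible side; the analytic optimum here is d = 0.1^0.2 ≈ 0.631.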

  18. Computer modeling for optimal placement of gloveboxes

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Olivas, J.D. [Los Alamos National Lab., NM (United States); Finch, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
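
The evolutionary-heuristic stage can be sketched as a swap-mutation search over assignments of gloveboxes to positions, scoring each layout by flow-weighted distance (a proxy for material movement and exposure). The flow and distance matrices are invented for illustration:

```python
import random

# Hypothetical material-flow frequencies between four gloveboxes and
# walking distances between four candidate positions:
FLOW = [[0, 8, 1, 0],
        [8, 0, 6, 1],
        [1, 6, 0, 9],
        [0, 1, 9, 0]]
DIST = [[0, 1, 2, 3],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]

def layout_cost(perm, flow, dist):
    """Total movement burden: flow between boxes i, j weighted by the
    distance between their assigned positions perm[i], perm[j]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def evolve_layout(flow, dist, iters=500, seed=3):
    """(1+1)-style evolutionary search: mutate by swapping two assignments,
    keep the mutant if it is no worse."""
    rng = random.Random(seed)
    perm = list(range(len(flow)))
    start = best = layout_cost(perm, flow, dist)
    for _ in range(iters):
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
        c = layout_cost(perm, flow, dist)
        if c <= best:
            best = c
        else:
            perm[i], perm[j] = perm[j], perm[i]  # revert the swap
    return perm, best, start

layout, final_cost, initial_cost = evolve_layout(FLOW, DIST)
```

The surviving layouts would then feed the second-stage simulation model, which judges them on the operational criteria the abstract lists.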

  19. Computer modeling for optimal placement of gloveboxes

    International Nuclear Information System (INIS)

    Hench, K.W.; Olivas, J.D.; Finch, P.R.

    1997-08-01

Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units

  20. A Method for Determining Optimal Residential Energy Efficiency Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gestwick, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bianchi, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Anderson, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Horowitz, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2011-04-01

    This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location.
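
The package-selection idea can be illustrated by exhaustively scoring retrofit combinations on equivalent annual cost. The measure costs and savings below are made up, and savings are assumed additive, whereas the report derives energy use from building simulations:

```python
from itertools import combinations

measures = {  # hypothetical: (upfront cost, annual energy savings)
    "insulation": (1000, 300),
    "windows":    (3000, 200),
    "furnace":    (2000, 500),
}
BASE_ANNUAL_ENERGY_COST = 2000
LIFETIME_YEARS = 10

def annual_cost(package):
    """Equivalent annual cost: energy bill after savings plus amortized retrofit."""
    upfront = sum(measures[m][0] for m in package)
    savings = sum(measures[m][1] for m in package)  # additivity assumed
    return BASE_ANNUAL_ENERGY_COST - savings + upfront / LIFETIME_YEARS

names = list(measures)
packages = [c for r in range(len(names) + 1) for c in combinations(names, r)]
best = min(packages, key=annual_cost)
```

With these numbers the windows measure never pays for itself, so the optimum combines insulation and the furnace upgrade; an optimization scheme avoids enumerating all subsets when the measure list is long.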

  1. Clean coal technology optimization model

    International Nuclear Information System (INIS)

    Laseke, B.A.; Hance, S.B.

    1992-01-01

Title IV of the Clean Air Act Amendments (CAAA) of 1990 contains provisions for the mitigation of acid rain precipitation through reductions in the annual emission of the acid rain precursors of sulfur dioxide (SO 2 ) and nitrogen oxide (NO x ). These provisions will affect primarily existing coal-fired power-generating plants by requiring nominal reductions of 5 million and 10 million tons of SO 2 by the years 1995 and 2000, respectively, and 2 million tons of NO x by the year 2000 relative to the 1980 and 1985-87 reference period. The 1990 CAAA Title IV provisions are extremely complex in that they establish phased regulatory milestones, unit-level emission allowances and caps, a mechanism for inter-utility trading of emission allowances, and a system of emission allowance credits based on selection of control option and timing of its implementation. The net result of Title IV of the 1990 CAAA is that approximately 147 gigawatts (GW) of generating capacity is eligible to retrofit SO 2 controls by the year 2000. A number of options are available to bring affected boilers into compliance with Title IV. Market share will be influenced by technology performance and costs. These characteristics can be modeled through a bottom-up technology cost and performance optimization exercise to show their impact on the technology's potential market share. Such a model exists in the form of an integrated data base-model software system. This microcomputer (PC)-based software system consists of a unit (boiler)-level data base (ACIDBASE), a cost and performance engineering model (IAPCS), and a market forecast model (ICEMAN).

  2. A framework for determining optimal petroleum leasing

    International Nuclear Information System (INIS)

    Robinson, D.R.

    1991-01-01

The techniques of auction theory and option theory are combined to allow valuation under both geologic and oil price uncertainty. The primary motivation for developing this framework is to understand the prevalence of leasing in transferring ownership of oil properties. Under a standard oil lease, the landowner sells an oil company the right to explore and develop a tract of land for a fixed period of time. If oil is found, a fraction of the revenues is reserved for the landowner. Compared to the outright sale of the minerals, leasing has the disadvantages of: (1) lowering total oil field value through alteration of investment incentives; (2) providing the seller with a riskier cash flow; and (3) increasing legal and administrative costs. It is demonstrated here that in lease sales as compared to full mineral interest sales, the relative disadvantages are offset by more effective value transfer to the seller. For the base-case parameters, the optimal lease in a bonus auction gives the seller 28% more value than the sale of the full mineral interest. There is a loss in the leasing process from distortion of development timing incentives.

  3. Surrogate Modeling for Geometry Optimization

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Abraham, Yonas; Holzwarth, Natalie

    2009-01-01

A new approach for optimizing the nuclear geometry of an atomic system is described. Instead of the original expensive objective function (energy functional), a small number of simpler surrogates is used.

  4. Following an Optimal Batch Bioreactor Operations Model

    DEFF Research Database (Denmark)

    Ibarra-Junquera, V.; Jørgensen, Sten Bay; Virgen-Ortíz, J.J.

    2012-01-01

    The problem of following an optimal batch operation model for a bioreactor in the presence of uncertainties is studied. The optimal batch bioreactor operation model (OBBOM) refers to the bioreactor trajectory for nominal cultivation to be optimal. A multiple-variable dynamic optimization of fed...... as the master system which includes the optimal cultivation trajectory for the feed flow rate and the substrate concentration. The “real” bioreactor, the one with unknown dynamics and perturbations, is considered as the slave system. Finally, the controller is designed such that the real bioreactor...

  5. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

Structural optimization has many characteristics of Soft Design, and so, it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. Then, a design process model of ISO is put forward in which each design sub-process model is discussed. Finally, the design methods of ISO are presented.

  6. Multipurpose optimization models for high level waste vitrification

    International Nuclear Information System (INIS)

    Hoza, M.

    1994-08-01

Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass-former composition such that the glass produced has the appropriate properties within the melter, and the resultant vitrified waste form meets the requirements for disposal. The OWL model can be used for a single waste stream or for blended streams. The models can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will be used to aid in the design of frits and to maximize the waste loading of the glass for High-Level Waste (HLW) vitrification.
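
When each property constraint only tightens as the waste fraction grows, the maximum loading can be found by bisection on feasibility. The linearized constraints below are hypothetical placeholders for real glass-property models:

```python
def max_waste_loading(property_limits, lo=0.0, hi=1.0, tol=1e-6):
    """Largest waste fraction w in [lo, hi] satisfying every constraint
    g(w) <= 0, assuming the constraints only tighten as w grows."""
    def feasible(w):
        return all(g(w) <= 0 for g in property_limits)
    if not feasible(lo):
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical linearized glass-property constraints, each written g(w) <= 0:
constraints = [
    lambda w: (2.0 + 10.0 * w) - 8.0,   # viscosity cap    -> w <= 0.60
    lambda w: (1.0 + 5.0 * w) - 3.5,    # leach-rate cap   -> w <= 0.50
    lambda w: (0.20 + w) - 0.75,        # conductivity cap -> w <= 0.55
]
w_max = max_waste_loading(constraints)
```

The binding constraint (here the leach-rate cap) is exactly the kind of "most restrictive constraint" the OWL models are used to identify; the full models optimize the glass-former composition simultaneously with nonlinear programming.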

  7. Optimization in engineering models and algorithms

    CERN Document Server

    Sioshansi, Ramteen

    2017-01-01

    This textbook covers the fundamentals of optimization, including linear, mixed-integer linear, nonlinear, and dynamic optimization techniques, with a clear engineering focus. It carefully describes classical optimization models and algorithms using an engineering problem-solving perspective, and emphasizes modeling issues using many real-world examples related to a variety of application areas. Providing an appropriate blend of practical applications and optimization theory makes the text useful to both practitioners and students, and gives the reader a good sense of the power of optimization and the potential difficulties in applying optimization to modeling real-world systems. The book is intended for undergraduate and graduate-level teaching in industrial engineering and other engineering specialties. It is also of use to industry practitioners, due to the inclusion of real-world applications, opening the door to advanced courses on both modeling and algorithm development within the industrial engineering ...

  8. The State Fiscal Policy: Determinants and Optimization of Financial Flows

    Directory of Open Access Journals (Sweden)

    Sitash Tetiana D.

    2017-03-01

The article outlines the determinants of the state fiscal policy at the present stage of global transformations. Using the principles of financial science, it is determined that regulation of financial flows within the fiscal sphere, namely centralization and redistribution of the GDP, which results in the regulation of the financial capacity of economic agents, is of importance. It is emphasized that an urgent measure for improving the tax model is re-considering the provision of fiscal incentives, which are used to stimulate the accumulation of capital, investment activity, innovation, the competitiveness of national products, the expansion of exports, and the level of employment. It is argued that the instruments of fiscal regulation of financial flows should be applied on the basis of institutional economics, which emphasizes the analysis of institutional changes, the evolution of institutions, and their impact on the behavior of participants in economic relations. At the same time, it is determined that the maximum effect of fiscal regulation of financial flows is ensured when the application of fiscal instruments is aimed not only at achieving the target values of parameters of financial flows but also at overcoming institutional deformations. It is determined that the optimal movement of financial flows enables creating favorable conditions for development and maintenance of financial balance in the society and achievement of the necessary level of competitiveness of the national economy.

  9. Optimal Hedging with the Vector Autoregressive Model

    NARCIS (Netherlands)

    L. Gatarek (Lukasz); S.G. Johansen (Soren)

    2014-01-01

Abstract: We derive the optimal hedging ratios for a portfolio of assets driven by a Cointegrated Vector Autoregressive model with general cointegration rank. Our hedge is optimal in the sense of minimum variance portfolio. We consider a model that allows for the hedges to be
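
The static special case of a minimum-variance hedge reduces to the ratio of covariance to variance; the paper generalizes this to hedges derived from a cointegrated VAR. A sketch with toy return series:

```python
def min_variance_hedge_ratio(spot, futures):
    """h* = Cov(spot, futures) / Var(futures): units of the hedging asset to
    short per unit of spot exposure, in the minimum-variance sense."""
    n = len(spot)
    ms, mf = sum(spot) / n, sum(futures) / n
    cov = sum((s - ms) * (f - mf) for s, f in zip(spot, futures)) / n
    var = sum((f - mf) ** 2 for f in futures) / n
    return cov / var

futures_returns = [0.010, -0.020, 0.030, 0.000, -0.005]
spot_returns = [2.0 * f for f in futures_returns]  # perfectly correlated toy data
h = min_variance_hedge_ratio(spot_returns, futures_returns)
```

With perfectly correlated toy data the ratio recovers the scaling factor exactly and the hedged position has zero variance; real series leave residual variance, which the VAR-based hedge minimizes dynamically.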

  10. Rethinking exchange market models as optimization algorithms

    Science.gov (United States)

    Luquini, Evandro; Omar, Nizam

    2018-02-01

The exchange market model has mainly been used to study the inequality problem. Although the human society inequality problem is very important, the dynamics of exchange market models up to their stationary state, and their capability of ranking individuals, are interesting in themselves. This study considers the hypothesis that the exchange market model can be understood as an optimization procedure. We present herein the implications for algorithmic optimization and also the possibility of a new family of exchange market models.
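
A minimal exchange market model of the "yard-sale" variety shows the dynamics in question: repeated conservative pairwise exchanges drive the wealth distribution toward a ranked, unequal stationary state. Parameters are illustrative:

```python
import random

def exchange_market(n=100, steps=20000, frac=0.1, seed=5):
    """Yard-sale exchange: two random agents wager a fraction of the poorer
    agent's wealth; a fair coin decides the winner. Total wealth is conserved."""
    rng = random.Random(seed)
    w = [1.0] * n
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        stake = frac * min(w[i], w[j])
        if rng.random() < 0.5:
            w[i] += stake
            w[j] -= stake
        else:
            w[i] -= stake
            w[j] += stake
    return sorted(w, reverse=True)  # the stationary-state ranking of agents

wealth = exchange_market()
```

Reading the exchange rule as an update operator, and the emergent ranking as a selection mechanism, is the optimization-procedure interpretation the paper explores.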

  11. Optimal Policy in OG Models

    DEFF Research Database (Denmark)

    Ghiglino, Christian; Tvede, Mich

for generations, through fiscal policy, i.e. monetary transfers and taxes. Both situations with and without time discounting are considered. It is shown that if the discount factor is sufficiently close to one then the optimal policy stabilizes the economy, i.e. the equilibrium path has the turnpike property...

  12. Optimal Policy in OG Models

    DEFF Research Database (Denmark)

    Ghiglino, Christian; Tvede, Mich

    2000-01-01

    for generations, through fiscal policy, i.e., monetary transfers and taxes. Situations both with and without time discounting are considered. It is shown that if the discount factor is sufficiently close to one then the optimal policy stabilizes the economy, i.e. the equilibrium path has the turnpike property...

  13. A Monte Carlo simulation technique to determine the optimal portfolio

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati

    2014-03-01

    Full Text Available During the past few years, there have been several studies on portfolio management. One of the primary concerns on any stock market is to detect the risk associated with various assets. One recognized method for measuring, forecasting, and managing the existing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk that uses standard statistical techniques, and it has increasingly been used in other fields. The present study measured the value at risk of 26 companies from the chemical industry in the Tehran Stock Exchange over the period 2009-2011 using the Monte Carlo simulation technique at the 95% confidence level. The variable used in the present study was the daily return resulting from daily stock price changes. Moreover, the optimal investment weights were determined for each of the selected stocks using a hybrid Markowitz and Winker model. The results showed that the maximum loss would not exceed 1,259,432 Rials at the 95% confidence level on the following day.
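    A minimal sketch of Monte Carlo VaR at the 95% confidence level, assuming (unlike the study's real stock data) normally distributed daily returns with purely illustrative parameters:

    ```python
    import numpy as np

    def monte_carlo_var(mu, sigma, n_sims=100_000, confidence=0.95, seed=1):
        # Simulate daily returns and report the loss not exceeded with the
        # given confidence (VaR is quoted as a positive number).
        rng = np.random.default_rng(seed)
        returns = rng.normal(mu, sigma, n_sims)
        return -np.percentile(returns, 100 * (1 - confidence))

    # illustrative daily mean return and volatility
    var_95 = monte_carlo_var(mu=0.0005, sigma=0.02)
    print(f"95% one-day VaR: {var_95:.4f}")
    ```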

  14. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    Science.gov (United States)

    Drusano, George L.

    1991-01-01

    The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.

  15. Handbook on modelling for discrete optimization

    CERN Document Server

    Pitsoulis, Leonidas; Williams, H

    2006-01-01

    The primary objective underlying the Handbook on Modelling for Discrete Optimization is to demonstrate and detail the pervasive nature of Discrete Optimization. While its applications cut across an incredibly wide range of activities, many of the applications are only known to specialists. It is the aim of this handbook to correct this. It has long been recognized that "modelling" is a critically important mathematical activity in designing algorithms for solving these discrete optimization problems. Nevertheless solving the resultant models is also often far from straightforward. In recent years it has become possible to solve many large-scale discrete optimization problems. However, some problems remain a challenge, even though advances in mathematical methods, hardware, and software technology have pushed the frontiers forward. This handbook couples the difficult, critical-thinking aspects of mathematical modeling with the hot area of discrete optimization. It will be done in an academic handbook treatment...

  16. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the composition of the optimal portfolio differs across the stocks. Moreover, investors can obtain the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
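    When only the budget constraint (weights summing to one) is imposed, the global minimum-variance portfolio of the mean-variance model has a closed form; a sketch with an illustrative covariance matrix, not the study's FBMKLCI data:

    ```python
    import numpy as np

    def min_variance_weights(cov):
        # Global minimum-variance portfolio: w = S^{-1} 1 / (1' S^{-1} 1),
        # the unique weights (summing to one) minimizing w' S w.
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    cov = np.array([[0.04, 0.01, 0.00],    # illustrative weekly-return
                    [0.01, 0.09, 0.02],    # covariance matrix for 3 stocks
                    [0.00, 0.02, 0.16]])
    w = min_variance_weights(cov)
    portfolio_variance = float(w @ cov @ w)
    print(np.round(w, 3), round(portfolio_variance, 4))
    ```

    The resulting variance is never larger than that of the least-risky single stock, since holding that stock alone is itself a feasible portfolio.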

  17. On the Determinants of Optimal Border Taxes for a Small Open Economy

    DEFF Research Database (Denmark)

    Munk, Knud Jørgen; Rasmussen, Bo Sandemann

    of the primary factor and domestic consumption of the export good cannot be taxed is nevertheless a constraint; this insight provides the key to understanding what determines the optimal tariff structure. The optimal border tax structure is derived for both exogenous and endogenous labour supply, and the results...... are interpreted in the spirit of the Corlett-Hague results for the optimal tax structure in a closed economy and compared with results from CGE models....

  18. Method for Determining Optimal Residential Energy Efficiency Retrofit Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B.; Gestwick, M.; Bianchi, M.; Anderson, R.; Horowitz, S.; Christensen, C.; Judkoff, R.

    2011-04-01

    Businesses, government agencies, consumers, policy makers, and utilities currently have limited access to occupant-, building-, and location-specific recommendations for optimal energy retrofit packages, as defined by estimated costs and energy savings. This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location. Energy savings and incremental costs are calculated relative to a minimum upgrade reference scenario, which accounts for efficiency upgrades that would occur in the absence of a retrofit because of equipment wear-out and replacement with current minimum standards.

  19. Modelling, simulating and optimizing boiler heating surfaces and evaporator circuits

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for optimizing the dynamic performance of a boiler has been developed. Design variables related to the size of the boiler and its dynamic performance have been defined. The objective function to be optimized takes the weight of the boiler and its dynamic capability into account. As constraints...... for the optimization a dynamic model for the boiler is applied. Furthermore a function for the value of the dynamic performance is included in the model. The dynamic models for simulating boiler performance consist of a model for the flue gas side, a model for the evaporator circuit and a model for the drum....... The dynamic model has been developed for the purpose of determining boiler material temperatures and heat transfer from the flue gas side to the water-/steam side in order to simulate the circulation in the evaporator circuit and hereby the water level fluctuations in the drum. The dynamic model has been...

  20. Modeling and optimization of laser cutting operations

    Directory of Open Access Journals (Sweden)

    Gadallah Mohamed Hassan

    2015-01-01

    Full Text Available Laser beam cutting is an important nontraditional machining process. This paper optimizes the laser beam cutting parameters for stainless steel (316L), considering the effect of input parameters such as power, oxygen pressure, frequency and cutting speed. A statistical design of experiments is carried out at three different levels, and process responses such as average kerf taper (Ta), surface roughness (Ra) and heat-affected zones are measured accordingly. A response surface model is developed as a function of the process parameters. Responses predicted by the models (as per Taguchi's L27 OA) are employed to search for an optimal combination to achieve the desired process yield. Response Surface Models (RSMs) are developed for mean responses, S/N ratio, and standard deviation of responses. Optimization models are formulated as single objective optimization problems subject to process constraints. Models are formulated based on Analysis of Variance (ANOVA) and optimized using a Matlab-developed environment. Optimum solutions are compared with Taguchi methodology results. As such, practicing engineers have means to model, analyze and optimize nontraditional machining processes. Validation experiments are carried out to verify the developed models with success.
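    A hedged sketch of the response-surface step: fitting a two-factor quadratic model by least squares to synthetic data (the factor names and coefficients are illustrative, not the paper's measured responses):

    ```python
    import numpy as np

    def fit_quadratic_rsm(x1, x2, y):
        # y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
        A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    # synthetic 27-run "kerf taper" response with known coefficients
    rng = np.random.default_rng(3)
    x1 = rng.uniform(-1, 1, 27)   # coded laser power (illustrative)
    x2 = rng.uniform(-1, 1, 27)   # coded cutting speed (illustrative)
    y = 1.0 - 0.2 * x1 + 0.5 * x1**2 + 0.8 * x2**2 + rng.normal(0, 0.01, 27)
    coef = fit_quadratic_rsm(x1, x2, y)
    print(np.round(coef, 2))
    ```

    The fitted surface can then be handed to any constrained optimizer to locate the setting that minimizes the predicted response.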

  1. Mathematical modeling and optimization of complex structures

    CERN Document Server

    Repin, Sergey; Tuovinen, Tero

    2016-01-01

    This volume contains selected papers in three closely related areas: mathematical modeling in mechanics, numerical analysis, and optimization methods. The papers are based upon talks presented  on the International Conference for Mathematical Modeling and Optimization in Mechanics, held in Jyväskylä, Finland, March 6-7, 2014 dedicated to Prof. N. Banichuk on the occasion of his 70th birthday. The articles are written by well-known scientists working in computational mechanics and in optimization of complicated technical models. Also, the volume contains papers discussing the historical development, the state of the art, new ideas, and open problems arising in  modern continuum mechanics and applied optimization problems. Several papers are concerned with mathematical problems in numerical analysis, which are also closely related to important mechanical models. The main topics treated include:  * Computer simulation methods in mechanics, physics, and biology;  * Variational problems and methods; minimiz...

  2. Code Differentiation for Hydrodynamic Model Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.

  3. Maintenance Optimization of High Voltage Substation Model

    Directory of Open Access Journals (Sweden)

    Radim Bris

    2008-01-01

    Full Text Available A real system from practice is selected for optimization purposes in this paper. We describe the real scheme of a high voltage (HV) substation in different work states. A model scheme of the 22 kV HV substation is demonstrated within the paper. The scheme serves as the input model scheme for the maintenance optimization. The input reliability and cost parameters of all components are given: the preventive and corrective maintenance costs, the actual maintenance period (being optimized), the failure rate and the mean time to repair (MTTR).

  4. A useful framework for optimal replacement models

    International Nuclear Information System (INIS)

    Aven, Terje; Dekker, Rommert

    1997-01-01

    In this note we present a general framework for optimization of replacement times. It covers a number of models, including various age and block replacement models, and allows a uniform analysis for all these models. A relation to the marginal cost concept is described
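    One concrete instance of such a replacement model is classic age replacement, where the renewal-reward theorem gives a long-run cost rate to minimize over the replacement age; a sketch with illustrative Weibull lifetimes and costs (not parameters from the paper):

    ```python
    import math

    def cost_rate(T, beta=2.5, eta=10.0, c_p=1.0, c_f=5.0, n=2000):
        # Renewal-reward cost rate for age replacement at age T:
        #   C(T) = (c_p * R(T) + c_f * F(T)) / integral_0^T R(s) ds,
        # with Weibull survival R(t) = exp(-(t/eta)**beta).
        R = lambda t: math.exp(-((t / eta) ** beta))
        h = T / n
        area = sum((R(i * h) + R((i + 1) * h)) * h / 2 for i in range(n))
        return (c_p * R(T) + c_f * (1.0 - R(T))) / area

    # crude grid search for the optimal replacement age
    grid = [0.5 + 0.1 * i for i in range(200)]
    T_star = min(grid, key=cost_rate)
    print(round(T_star, 1), round(cost_rate(T_star), 3))
    ```

    With an increasing hazard rate (beta > 1) and failures costlier than planned replacements, the optimal age beats the run-to-failure policy (very large T).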

  5. Multiobjective optimization of an extremal evolution model

    International Nuclear Information System (INIS)

    Elettreby, M.F.

    2004-09-01

    We propose a two-dimensional model for a co-evolving ecosystem that generalizes the extremal coupled map lattice model. The model takes into account the concept of multiobjective optimization. We find that the system self-organizes into a critical state. The distributions of the distances between subsequent mutations as well as the distribution of avalanche sizes follow a power law. (author)

  6. Modeling and optimization of HVAC energy consumption

    Energy Technology Data Exchange (ETDEWEB)

    Kusiak, Andrew; Li, Mingyang; Tang, Fan [Department of Mechanical and Industrial Engineering, University of Iowa, Iowa City, IA 52242 - 1527 (United States)

    2010-10-15

    A data-driven approach for minimization of the energy to air-condition a typical office-type facility is presented. Eight data-mining algorithms are applied to model the nonlinear relationship among energy consumption, control settings (supply air temperature and supply air static pressure), and a set of uncontrollable parameters. The multilayer perceptron (MLP) ensemble outperforms the other models tested in this research, and therefore it is selected to model a chiller, a pump, a fan, and a reheat device. These four models are integrated into an energy optimization model with two decision variables, the setpoint of the supply air temperature and the static pressure in the air handling unit. The model is solved with a particle swarm optimization algorithm. The optimization results demonstrate that the total energy consumed by the heating, ventilation, and air-conditioning system is reduced by over 7%. (author)

  7. Optimization Models for Petroleum Field Exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Jonsbraaten, Tore Wiig

    1998-12-31

    This thesis presents and discusses various models for optimal development of a petroleum field. The objective of these optimization models is to maximize, under many uncertain parameters, the project's expected net present value. First, an overview of petroleum field optimization is given from the point of view of operations research. Reservoir equations for a simple reservoir system are derived, discretized and included in optimization models. Linear programming models for optimizing production decisions are discussed and extended to mixed integer programming models where decisions concerning platform, wells and production strategy are optimized. Then, optimal development decisions under uncertain oil prices are discussed. The uncertain oil price is estimated by a finite set of price scenarios with associated probabilities. The problem is one of stochastic mixed integer programming, and the solution approach is to use a scenario and policy aggregation technique developed by Rockafellar and Wets, although this technique was developed for continuous variables. Stochastic optimization problems with focus on problems with decision-dependent information discoveries are also discussed. A class of "manageable" problems is identified and an implicit enumeration algorithm for finding the optimal decision policy is proposed. Problems involving uncertain reservoir properties but with a known initial probability distribution over possible reservoir realizations are discussed. Finally, a section on Nash equilibrium and bargaining in an oil reservoir management game discusses the pool problem arising when two lease owners have access to the same underlying oil reservoir. Because the oil tends to migrate, both lease owners have an incentive to drain oil from the competitor's part of the reservoir. The discussion is based on a numerical example. 107 refs., 31 figs., 14 tabs.

  8. Modeling and optimization of wet sizing process

    International Nuclear Information System (INIS)

    Thai Ba Cau; Vu Thanh Quang and Nguyen Ba Tien

    2004-01-01

    Mathematical simulation on the basis of Stokes' law has been done for the wet sizing process on cylindrical equipment of laboratory and semi-industrial scale. The model consists of mathematical equations describing relations between variables, such as: - the residence time distribution function of emulsion particles in the separating zone of the equipment, depending on flow rate, height, diameter and structure of the equipment; - the size-distribution function in the fine and coarse parts, depending on the residence time distribution function of emulsion particles, on characteristics of the material being processed, such as specific density and shape, and on characteristics of the classification environment, such as specific density and viscosity. - An experimental model was developed on data collected from an experimental cylindrical equipment with diameter x height of sedimentation chamber equal to 50 x 40 cm for an emulsion of zirconium silicate in water. - Using this experimental model allows determining the optimal flow rate in order to obtain a product with the desired grain size in terms of average size or size distribution function. (author)
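    The core of such a Stokes'-law model is the terminal settling velocity of a small sphere; a sketch with illustrative particle and fluid properties (not the paper's measured values):

    ```python
    def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
        # Terminal velocity of a small sphere (valid only for Re << 1):
        #   v = g * d**2 * (rho_p - rho_f) / (18 * mu)
        return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

    # hypothetical 10-micron zirconium silicate particle in water at 20 C
    v = stokes_settling_velocity(d=10e-6, rho_p=4560.0, rho_f=998.0, mu=1.0e-3)
    print(f"{v * 1000:.3f} mm/s")
    ```

    Comparing this settling velocity with the upward flow velocity in the separating zone determines whether a particle reports to the coarse or the fine fraction.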

  9. Determining the optimal spacing of deepening of vertical mine

    Energy Technology Data Exchange (ETDEWEB)

    Durov, Ye.M.

    1983-01-01

    A technique is presented for determining the optimal spacing between successive deepenings of vertical mine shafts for the examined parameters of operating and deepening work. The presented results of the studies may be used in designing new shafts, in preparing levels and in the reconstruction of existing shafts with inclined and steep stratum bedding.

  10. Human error considerations and annunciator effects in determining optimal test intervals for periodically inspected standby systems

    International Nuclear Information System (INIS)

    McWilliams, T.P.; Martz, H.F.

    1981-01-01

    This paper incorporates the effects of four types of human error in a model for determining the optimal time between periodic inspections which maximizes the steady-state availability of standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, study the benefits of annunciating the standby system, and determine optimal inspection intervals. Several examples which are representative of nuclear power plant applications are presented
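    A common simplified version of this trade-off (ignoring human error and annunciation) balances undetected failures against test outage time; a sketch with illustrative failure rate and test duration:

    ```python
    import math

    def unavailability(tau, lam=1e-4, t_test=2.0):
        # lam*tau/2: mean fraction of time failed but undetected;
        # t_test/tau: fraction of time unavailable due to testing.
        return lam * tau / 2.0 + t_test / tau

    def optimal_interval(lam=1e-4, t_test=2.0):
        # Setting the derivative to zero gives tau* = sqrt(2 * t_test / lam).
        return math.sqrt(2.0 * t_test / lam)

    tau_star = optimal_interval()   # hours, for the illustrative parameters
    print(round(tau_star, 1), round(unavailability(tau_star), 4))
    ```

    Testing too often wastes availability on test outages; testing too rarely leaves failures undetected, so the optimum sits where the two terms are equal.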

  11. Review: Optimization methods for groundwater modeling and management

    Science.gov (United States)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  12. Problems in determining the optimal use of road safety measures

    DEFF Research Database (Denmark)

    Elvik, Rune

    2014-01-01

    This paper discusses some problems in determining the optimal use of road safety measures. The first of these problems is how best to define the baseline option, i.e. what will happen if no new safety measures are introduced. The second problem concerns the choice of a method for selection of targets for intervention that ensures maximum safety benefits. The third problem is how to develop policy options to minimise the risk of indivisibilities and irreversible choices. The fourth problem is how to account for interaction effects between road safety measures when determining their optimal use. The fifth problem is how to obtain the best mix of short-term and long-term measures in a safety programme. The sixth problem is how fixed parameters for analysis, including the monetary valuation of road safety, influence the results of analyses. It is concluded that it is at present not possible to determine...

  13. Mathematical model of highways network optimization

    Science.gov (United States)

    Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.

    2017-12-01

    The article deals with the issue of highway network design. Studies show that the main requirement road transport places on the road network is to ensure the realization of all the transport links it serves, at the least possible cost. The goal of optimizing the network of highways is to increase the efficiency of transport. It is necessary to take into account a large number of factors, which makes it difficult to quantify and qualify their impact on the road network. In this paper, we propose building an optimal variant for locating the road network on the basis of a mathematical model. The article defines the criteria of optimality and objective functions that reflect the requirements for the road network. The condition most fully satisfying optimality is the minimization of road and transport costs. We adopted this indicator as the criterion of optimality in the economic-mathematical model of a network of highways. Studies have shown that each point served by the optimal road network is connected with all other corresponding points along the directions providing the least financial costs necessary to move passengers and cargo from this point to the other corresponding points. The article presents general principles for constructing an optimal network of roads.

  14. Parameter Optimization of MIMO Fuzzy Optimal Model Predictive Control By APSO

    Directory of Open Access Journals (Sweden)

    Adel Taieb

    2017-01-01

    Full Text Available This paper introduces a new development for designing a Multi-Input Multi-Output (MIMO) Fuzzy Optimal Model Predictive Control (FOMPC) using the Adaptive Particle Swarm Optimization (APSO) algorithm. The aim of this proposed control, called FOMPC-APSO, is to develop an efficient algorithm that is able to achieve good performance while guaranteeing minimal control effort. This is done by determining the optimal weights of the objective function. Our method is cast as an optimization problem based on the APSO algorithm. The MIMO system to be controlled is modeled by a Takagi-Sugeno (TS) fuzzy system whose parameters are identified using the weighted recursive least squares method. The utility of the proposed controller is demonstrated by applying it to two nonlinear processes, a Continuous Stirred Tank Reactor (CSTR) and a tank system, where the proposed approach provides better performance compared with other methods.

  15. Statistical models for optimizing mineral exploration

    International Nuclear Information System (INIS)

    Wignall, T.K.; DeGeoffroy, J.

    1987-01-01

    The primary purpose of mineral exploration is to discover ore deposits. The emphasis of this volume is on the mathematical and computational aspects of optimizing mineral exploration. The seven chapters that make up the main body of the book are devoted to the description and application of various types of computerized geomathematical models. These chapters include: (1) the optimal selection of ore deposit types and regions of search, as well as prospecting selected areas, (2) designing airborne and ground field programs for the optimal coverage of prospecting areas, and (3) delineating and evaluating exploration targets within prospecting areas by means of statistical modeling. Many of these statistical programs are innovative and are designed to be useful for mineral exploration modeling. Examples of geomathematical models are applied to exploring for six main types of base and precious metal deposits, as well as other mineral resources (such as bauxite and uranium)

  16. Dynamic optimization deterministic and stochastic models

    CERN Document Server

    Hinderer, Karl; Stieglitz, Michael

    2016-01-01

    This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.

  17. Modeling and optimization of LCD optical performance

    CERN Document Server

    Yakovlev, Dmitry A; Kwok, Hoi-Sing

    2015-01-01

    The aim of this book is to present the theoretical foundations of modeling the optical characteristics of liquid crystal displays, critically reviewing modern modeling methods and examining areas of applicability. The modern matrix formalisms of optics of anisotropic stratified media, most convenient for solving problems of numerical modeling and optimization of LCD, will be considered in detail. The benefits of combined use of the matrix methods will be shown, which generally provides the best compromise between physical adequacy and accuracy with computational efficiency and optimization fac

  18. Modelling and Optimizing Mathematics Learning in Children

    Science.gov (United States)

    Käser, Tanja; Busetto, Alberto Giovanni; Solenthaler, Barbara; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; von Aster, Michael; Gross, Markus

    2013-01-01

    This study introduces a student model and control algorithm, optimizing mathematics learning in children. The adaptive system is integrated into a computer-based training system for enhancing numerical cognition aimed at children with developmental dyscalculia or difficulties in learning mathematics. The student model consists of a dynamic…

  19. Models and Methods for Structural Topology Optimization with Discrete Design Variables

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    Structural topology optimization is a multi-disciplinary research field covering optimal design of load carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used in the conceptual design phase to find innovative designs. The strength of topology optimization is the capability of determining both the optimal shape and the topology of the structure. In some cases also the optimal material properties can be determined. Optimal structural design problems are modeled...

  20. Modeling, simulation and optimization of bipedal walking

    CERN Document Server

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...

  1. Particle swarm optimization for determining shortest distance to voltage collapse

    Energy Technology Data Exchange (ETDEWEB)

    Arya, L.D.; Choube, S.C. [Electrical Engineering Department, S.G.S.I.T.S. Indore, MP 452 003 (India); Shrivastava, M. [Electrical Engineering Department, Government Engineering College Ujjain, MP 456 010 (India); Kothari, D.P. [Centre for Energy Studies, Indian Institute of Technology, Delhi (India)

    2007-12-15

    This paper describes an algorithm for computing the shortest distance to voltage collapse, i.e., determining the closest saddle node bifurcation point (CSNBP), using the PSO technique. A direction along the CSNBP gives conservative results from the voltage security viewpoint. This information is useful to the operator to steer the system away from this point by taking corrective actions. The distance to the closest bifurcation is a minimum of the loadability given a slack bus or participation factors for increasing generation as the load increases. CSNBP determination has been formulated as an optimization problem to be used in the PSO technique. PSO is a population-based evolutionary algorithm (EA) inspired by the social behavior of animals such as fish schooling and bird flocking. It can handle optimization problems of any complexity since its mechanization is simple, with few parameters to be tuned. The developed algorithm has been implemented on two standard test systems. (author)
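    A minimal, generic PSO sketch (minimizing a stand-in sphere function rather than a power-system loadability objective; all parameters are illustrative):

    ```python
    import random

    def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=7):
        # Each particle is pulled toward its personal best and the swarm's
        # global best; standard inertia/acceleration constants are used.
        rnd = random.Random(seed)
        X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        V = [[0.0] * dim for _ in range(n_particles)]
        P = [x[:] for x in X]                       # personal best positions
        pbest = [f(x) for x in X]
        g = min(range(n_particles), key=lambda i: pbest[i])
        gbest, G = pbest[g], P[g][:]                # global best value/position
        w, c1, c2 = 0.72, 1.49, 1.49
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    V[i][d] = (w * V[i][d]
                               + c1 * rnd.random() * (P[i][d] - X[i][d])
                               + c2 * rnd.random() * (G[d] - X[i][d]))
                    X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                fx = f(X[i])
                if fx < pbest[i]:
                    pbest[i], P[i] = fx, X[i][:]
                    if fx < gbest:
                        gbest, G = fx, X[i][:]
        return G, gbest

    best, value = pso(lambda x: sum(v * v for v in x))  # sphere test function
    print("best value:", value)
    ```

    In the paper's setting, `f` would instead evaluate the distance to a loadability limit from a power-flow model; the swarm mechanics stay the same.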

  2. Optimal Patent Life in a Variety-Expansion Growth Model

    OpenAIRE

    Lin, Hwan C.

    2013-01-01

    This paper presents more channels through which the optimal patent life is determined in a R&D-based endogenous growth model that permits growth of new varieties of consumer goods over time. Its modeling features include an endogenous hazard rate facing incumbent monopolists, the prevalence of research congestion, and the aggregate welfare importance of product differentiation. As a result, a patent’s effective life is endogenized and less than its legal life. The model is calibrated to a glo...

  3. Optimizing Classroom Acoustics Using Computer Model Studies.

    Science.gov (United States)

    Reich, Rebecca; Bradley, John

    1998-01-01

    Investigates conditions relating to the maximum useful-to-detrimental sound ratios present in classrooms and determining the optimum conditions for speech intelligibility. Reveals that speech intelligibility is more strongly influenced by ambient noise levels and that the optimal location for sound absorbing material is on a classroom's upper…

  4. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2017-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  5. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  6. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    Directory of Open Access Journals (Sweden)

    Mihaela STET

    2016-12-01

    Full Text Available The paper deals with the problem of logistics costs, highlighting methods for estimating and determining the specific costs of different transport modes in freight distribution. Besides transport costs, the other costs in the supply chain are highlighted, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  7. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    OpenAIRE

    Mihaela STET

    2016-01-01

    The paper deals with the problem of logistics costs, highlighting methods for estimating and determining the specific costs of different transport modes in freight distribution. Besides transport costs, the other costs in the supply chain are highlighted, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  8. Determination of coefficient matrices for ARMA model

    International Nuclear Information System (INIS)

    Tran Dinh Tri.

    1990-10-01

    A new recursive algorithm for determining the coefficient matrices of an ARMA model from measured data is presented. The Yule-Walker equations for the ARMA case are derived from the ARMA innovation equation. The recursive algorithm is based on choosing an appropriate form of the operator functions and a suitable representation of the (n+1)-th order operator functions in terms of those of lower order. Two cases were considered: when the order of the AR part equals that of the MA part, and the optimal case. (author) 5 refs
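The record's recursive matrix-ARMA algorithm is not reproduced in the abstract, but the Yule-Walker equations it builds on are easy to illustrate for the simpler scalar pure-AR case. A sketch under that simplifying assumption (AR(p) only, not the paper's ARMA setting):

```python
import numpy as np

def yule_walker_ar(x, p):
    """Estimate AR(p) coefficients from data via the Yule-Walker equations:
    solve R a = r, where R is the Toeplitz matrix of sample autocovariances."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Recover the coefficients of a simulated AR(2) process
# x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros(20000)
for t in range(2, 20000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]
a = yule_walker_ar(x, 2)  # should be close to (0.6, -0.3)
```

For the full ARMA case the moving-average part makes the equations nonlinear in the coefficients, which is why recursive schemes like the one in the record are used.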

  9. A model of optimal voluntary muscular control.

    Science.gov (United States)

    FitzHugh, R

    1977-07-19

    In the absence of detailed knowledge of how the CNS controls a muscle through its motor fibers, a reasonable hypothesis is that of optimal control. This hypothesis is studied using a simplified mathematical model of a single muscle, based on A.V. Hill's equations, with series elastic element omitted, and with the motor signal represented by a single input variable. Two cost functions were used. The first was total energy expended by the muscle (work plus heat). If the load is a constant force, with no inertia, Hill's optimal velocity of shortening results. If the load includes a mass, analysis by optimal control theory shows that the motor signal to the muscle consists of three phases: (1) maximal stimulation to accelerate the mass to the optimal velocity as quickly as possible, (2) an intermediate level of stimulation to hold the velocity at its optimal value, once reached, and (3) zero stimulation, to permit the mass to slow down, as quickly as possible, to zero velocity at the specified distance shortened. If the latter distance is too small, or the mass too large, the optimal velocity is not reached, and phase (2) is absent. For lengthening, there is no optimal velocity; there are only two phases, zero stimulation followed by maximal stimulation. The second cost function was total time. The optimal control for shortening consists of only phases (1) and (3) above, and is identical to the minimal energy control whenever phase (2) is absent from the latter. Generalizations of this model to include viscous loads and a series elastic element are discussed.

  10. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies consider the issue of optimal sampling-time selection for parameter estimation. Time-course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach that guides the selection of time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the need to select good initial values or from getting stuck in local optima, as conventional numerical optimization techniques often do. The simulation results indicate the soundness of the new method.
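The core idea, choosing sampling times that minimize the variance of a maximum-likelihood estimate, can be illustrated with a one-parameter model, since the estimator variance is bounded below by the inverse Fisher information. The sketch below uses an assumed exponential-decay model rather than the paper's pathway models:

```python
import math

def fisher_info(times, k, sigma=0.1):
    """Fisher information for the decay rate k in y(t) = exp(-k t) + noise,
    given sampling times; larger information means smaller estimator variance.
    The sensitivity dy/dk at time t is -t * exp(-k t)."""
    return sum((t * math.exp(-k * t)) ** 2 for t in times) / sigma ** 2

# For a single observation the information t^2 * exp(-2 k t) peaks at t = 1/k,
# i.e. where the measurement is most sensitive to the parameter.
k = 0.5
candidates = [i / 10 for i in range(1, 61)]
t_best = max(candidates, key=lambda t: fisher_info([t], k))
```

Sampling very early (the signal has barely changed) or very late (the signal has decayed to noise) is uninformative; the scan above picks the sweet spot in between.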

  11. Optimal Tax Reduction by Depreciation : A Stochastic Model

    NARCIS (Netherlands)

    Berg, M.; De Waegenaere, A.M.B.; Wielhouwer, J.L.

    1996-01-01

    This paper focuses on the choice of a depreciation method, when trying to minimize the expected value of the present value of future tax payments. In a quite general model that allows for stochastic future cash flows and a tax structure with tax brackets, we determine the optimal choice between the

  12. GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2011-01-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.

  13. Optimal foraging in marine ecosystem models: selectivity, profitability and switching

    DEFF Research Database (Denmark)

    Visser, Andre W.; Fiksen, Ø.

    2013-01-01

    ecological mechanics and evolutionary logic as a solution to diet selection in ecosystem models. When a predator can consume a range of prey items it has to choose which foraging mode to use, which prey to ignore and which ones to pursue, and animals are known to be particularly skilled in adapting...... to the preference functions commonly used in models today. Indeed, depending on prey class resolution, optimal foraging can yield feeding rates that are considerably different from the ‘switching functions’ often applied in marine ecosystem models. Dietary inclusion is dictated by two optimality choices: 1...... by letting predators maximize energy intake or more properly, some measure of fitness where predation risk and cost are also included. An optimal foraging or fitness maximizing approach will give marine ecosystem models a sound principle to determine trophic interactions...

  14. Fuzzy Stochastic Optimization Theory, Models and Applications

    CERN Document Server

    Wang, Shuming

    2012-01-01

    Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies.   The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...

  15. Optimal inventory management and order book modeling

    KAUST Repository

    Baradel, Nicolas

    2018-02-16

    We model the behavior of three agent classes acting dynamically in a limit order book of a financial asset. Namely, we consider market makers (MM), high-frequency trading (HFT) firms, and institutional brokers (IB). Given a prior dynamic of the order book, similar to the one considered in the Queue-Reactive models [14, 20, 21], the MM and the HFT define their trading strategy by optimizing the expected utility of terminal wealth, while the IB has a prescheduled task to sell or buy many shares of the considered asset. We derive the variational partial differential equations that characterize the value functions of the MM and HFT and explain how almost optimal control can be deduced from them. We then provide a first illustration of the interactions that can take place between these different market participants by simulating the dynamic of an order book in which each of them plays his own (optimal) strategy.

  16. A PROCEDURE FOR DETERMINING OPTIMAL FACILITY LOCATION AND SUB-OPTIMAL POSITIONS

    Directory of Open Access Journals (Sweden)

    P.K. Dan

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This research presents a methodology for determining the optimal location of a new facility, having physical flow interaction of various degrees with other existing facilities in the presence of barriers impeding the shortest flow-path as well as the sub-optimal iso-cost positions. It also determines sub-optimal iso-cost positions with additional cost or penalty for not being able to site it at the computed optimal point. The proposed methodology considers all types of quadrilateral barrier or forbidden region configurations to generalize and by-pass such impenetrable obstacles, and adopts a scheme of searching through the vertices of the quadrilaterals to determine the alternative shortest flow-path. This procedure of obstacle avoidance is novel. Software has been developed to facilitate computations for the search algorithm to determine the optimal and iso-cost co-ordinates. The test results are presented.

    AFRIKAANSE OPSOMMING (translated to English): This research treats a procedure for determining the optimal siting position for a facility with flow from other existing facilities in the presence of a variety of constraints. The procedure yields sub-optimal iso-cost siting positions, together with the cost that arises from deviating from the unconstrained optimal solution cost. The procedure uses a resourceful search method applied to quadrilateral geometric representations to determine shortest routes that bypass barriers. The procedure is supported by software. Test results are presented.

  17. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

    Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous time optimization models, much more. Problems and references at chapter ends. ""Excellent introduction."" - Journal of the American Statistical Association. Bibliography. 1970 edition.

  18. Procedural Optimization Models for Multiobjective Flexible JSSP

    Directory of Open Access Journals (Sweden)

    Elena Simona NICOARA

    2013-01-01

    Full Text Available The most challenging issues related to manufacturing efficiency occur if the jobs to be scheduled are structurally different, if these jobs allow flexible routings on the equipment, and if multiple objectives are required. This framework, called Multi-objective Flexible Job Shop Scheduling Problems (MOFJSSP), applicable to many real processes, has been less reported in the literature than the JSSP framework, which has been extensively formalized, modeled and analyzed from many perspectives. The MOFJSSP lies, like many other NP-hard problems, in a tedious place where vast optimization theory meets the real-world context. The paper brings to discussion the optimization models best suited to MOFJSSP and analyzes in detail genetic algorithms and agent-based models as the most appropriate procedural models.
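As a concrete illustration of the genetic-algorithm approach the record discusses, here is a minimal permutation GA. It schedules jobs on a single machine to minimize total completion time, a deliberately simplified stand-in for the multi-objective flexible JSSP; the population size, elite fraction and mutation rate are arbitrary illustrative choices:

```python
import random

def genetic_schedule(proc_times, pop_size=40, gens=150, seed=3):
    """Tiny permutation GA minimizing total completion time of jobs on one
    machine -- a toy stand-in for the multi-objective flexible JSSP."""
    rng = random.Random(seed)
    n = len(proc_times)

    def cost(perm):
        t = total = 0.0
        for j in perm:
            t += proc_times[j]
            total += t  # sum of job completion times
        return total

    def crossover(a, b):  # order crossover (OX): keep a slice of a, fill from b
        i, j = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[i:j] = a[i:j]
        fill = [g for g in b if g not in child]
        for k in range(n):
            if child[k] is None:
                child[k] = fill.pop(0)
        return child

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 4]           # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            c = crossover(*rng.sample(elite, 2))
            if rng.random() < 0.2:             # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    best = min(pop, key=cost)
    return best, cost(best)

# Shortest-processing-time-first is provably optimal for this toy objective,
# so the GA should recover jobs ordered by increasing processing time.
best_perm, best_cost = genetic_schedule([5.0, 2.0, 8.0, 1.0, 3.0])
```

A real MOFJSSP solver would replace the single-machine cost with a schedule decoder over flexible machine routings and a multi-objective fitness, but the select-crossover-mutate loop stays the same.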

  19. Computer models for optimizing radiation therapy

    International Nuclear Information System (INIS)

    Duechting, W.

    1998-01-01

    The aim of this contribution is to outline how methods of systems analysis, control theory and modelling can be applied to simulate normal and malignant cell growth and to optimize cancer treatments such as radiation therapy. Based on biological observations and cell kinetic data, several types of models have been developed describing the growth of tumor spheroids and the cell renewal of normal tissue. The irradiation model is represented by the so-called linear-quadratic model, which describes the surviving fraction as a function of the dose. Based thereon, numerous simulation runs for different treatment schemes can be performed. Thus, it is possible to study the radiation effect on tumor and normal tissue separately. Finally, this method enables a computer-assisted recommendation for an optimal patient-specific treatment schedule prior to clinical therapy. (orig.) [de
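The linear-quadratic model mentioned above has a closed form, S(D) = exp(-(αD + βD²)), which makes the simulation step easy to sketch. The parameter values below (α = 0.3 Gy⁻¹, β = 0.03 Gy⁻², i.e. α/β = 10 Gy) are illustrative values of a typical order of magnitude, not taken from this paper:

```python
import math

def survival_fraction(dose, alpha, beta):
    """Linear-quadratic model: S(D) = exp(-(alpha*D + beta*D^2))."""
    return math.exp(-(alpha * dose + beta * dose * dose))

def fractionated_survival(dose_per_fraction, n_fractions, alpha, beta):
    """Surviving fraction after n equal fractions, assuming full repair
    between fractions, so the per-fraction survivals multiply."""
    return survival_fraction(dose_per_fraction, alpha, beta) ** n_fractions

# Illustrative parameters (alpha/beta = 10 Gy):
s_single = survival_fraction(2.0, alpha=0.3, beta=0.03)          # one 2 Gy fraction
s_course = fractionated_survival(2.0, 30, alpha=0.3, beta=0.03)  # 30 x 2 Gy course
```

Comparing `s_course` across candidate schemes (different doses per fraction and fraction counts, evaluated with tumor versus normal-tissue parameters) is exactly the kind of treatment-scheme simulation the record describes.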

  20. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors, thereby enhancing operator and system performance. However, the excessive introduction of automation has degraded operator performance due to the side effects of automation, referred to as Out-of-the-Loop (OOTL) problems, and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. To derive automation levels that enable the best human performance, the automation rate and ostracism rate, estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration derives the shortest working time by considering the concept of situation awareness recovery (SAR), under which the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  1. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t

  2. Optimizing refiner operation with statistical modelling

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, G [Noranda Research Centre, Pointe Claire, PQ (Canada)

    1997-02-01

    The impact of refining conditions on the energy efficiency of the process and on the handsheet quality of a chemi-mechanical pulp was studied as part of a series of pilot-scale refining trials. Statistical models of refiner performance were constructed from these results and non-linear optimization of process conditions was conducted. Optimization results indicated that increasing the ratio of specific energy applied in the first stage led to a reduction of some 15 per cent in the total energy requirement. The strategy can also be used to obtain significant increases in pulp quality for a given energy input. 20 refs., 6 tabs.

  3. Perturbing engine performance measurements to determine optimal engine control settings

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-12-30

    Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
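The patent abstract does not spell out the adjustment law, but the perturb-measure-adjust loop it describes resembles classical extremum seeking, which can be sketched with a finite-difference slope estimate. Everything below (the quadratic performance map, the step sizes) is a hypothetical stand-in, not the actual patented controller:

```python
def tune_parameter(measure, u0, step=0.1, delta=0.05, iters=100):
    """Perturb-and-adjust tuning sketch: estimate the local slope of the
    performance measure from a small artificial perturbation of the control
    parameter, then step the parameter uphill toward better performance."""
    u = u0
    for _ in range(iters):
        # Finite-difference slope from a deliberate perturbation of u
        slope = (measure(u + delta) - measure(u - delta)) / (2 * delta)
        u += step * slope  # gradient ascent on the performance variable
    return u

# Toy stand-in for an engine performance map: performance peaks at u = 2.0
performance = lambda u: -(u - 2.0) ** 2
u_opt = tune_parameter(performance, u0=0.0)
```

In a real engine the `measure` callback would return a filtered performance variable (e.g. fuel efficiency) sampled from the running engine, and the perturbation `delta` would be applied to a control parameter such as spark timing.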

  4. A study on an optimal movement model

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton BN1 9QH, UK (United Kingdom); Zhang, Kewei [SMS, Sussex University, Brighton BN1 9QH (United Kingdom); Luo Yousong [Department of Mathematics and Statistics, RMIT University, GOP Box 2476V, Melbourne, Vic 3001 (Australia)

    2003-07-11

    We present an analytical and rigorous study on a TOPS (task optimization in the presence of signal-dependent noise) model with a hold-on or an end-point control. Optimal control signals are rigorously obtained, which enables us to investigate various issues about the model including its trajectories, velocities, control signals, variances and the dependence of these quantities on various model parameters. With the hold-on control, we find that the optimal control can be implemented with an almost 'nil' hold-on period. The optimal control signal is a linear combination of two sub-control signals. One of the sub-control signals is positive and the other is negative. With the end-point control, the end-point variance is dramatically reduced, in comparison with the hold-on control. However, the velocity is not symmetric (bell shape). Finally, we point out that the velocity with a hold-on control takes the bell shape only within a limited parameter region.

  5. Aerodynamic modelling and optimization of axial fans

    Energy Technology Data Exchange (ETDEWEB)

    Noertoft Soerensen, Dan

    1998-01-01

    A numerically efficient mathematical model for the aerodynamics of low speed axial fans of the arbitrary vortex flow type has been developed. The model is based on a blade-element principle, whereby the rotor is divided into a number of annular stream tubes. For each of these stream tubes relations for velocity, pressure and radial position are derived from the conservation laws for mass, tangential momentum and energy. The equations are solved using the Newton-Raphson method, and solutions converged to machine accuracy are found at small computing costs. The model has been validated against published measurements on various fan configurations, comprising two rotor-only fan stages, a counter-rotating fan unit and a stator-rotor-stator stage. Comparisons of local and integrated properties show that the computed results agree well with the measurements. Optimizations have been performed to maximize the mean value of fan efficiency in a design interval of flow rates, thus designing a fan which operates well over a range of different flow conditions. The optimization scheme was used to investigate the dependence of maximum efficiency on 1: the number of blades, 2: the width of the design interval and 3: the hub radius. The degrees of freedom in the choice of design variables and constraints, combined with the design interval concept, provide a valuable design tool for axial fans. To further investigate the use of design optimization, a model for the vortex shedding noise from the trailing edge of the blades has been incorporated into the optimization scheme. The noise emission from the blades was minimized at a flow-rate design point. Optimizations were performed to investigate the dependence of the noise on 1: the number of blades, 2: a constraint imposed on efficiency and 3: the hub radius. The investigations showed that a significant reduction of noise could be achieved at the expense of a small reduction in fan efficiency. (EG) 66 refs.

  6. MARKOV CHAIN PORTFOLIO LIQUIDITY OPTIMIZATION MODEL

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2014-05-01

    Full Text Available The international financial crises of September 2008 and May 2010 showed the importance of liquidity as an attribute to be considered in portfolio decisions. This study proposes an optimization model based on available public data, using Markov chain and Genetic Algorithm concepts, as it considers the classic duality of risk versus return while incorporating liquidity costs. The work proposes a multi-criterion non-linear optimization model in which liquidity is represented by a Markov chain. The non-linear model was tested using Genetic Algorithms with twenty-five Brazilian stocks from 2007 to 2009. The results suggest that the methodology is innovative and useful for developing an efficient and realistic financial portfolio, as it considers many attributes such as risk, return and liquidity.

  7. Efficient Iris Localization via Optimization Model

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2017-01-01

    Full Text Available Iris localization is one of the most important processes in iris recognition. Because of the different kinds of noise in iris images, the localization result may be wrong. Moreover, the localization process is time-consuming. To solve these problems, this paper develops an efficient iris localization algorithm based on an optimization model. First, the localization problem is cast as an optimization model. Then the SIFT feature is selected to represent the characteristic information of the iris outer boundary and eyelids for localization, and the SDM (Supervised Descent Method) algorithm is employed to solve for the final points of the outer boundary and eyelids. Finally, IRLS (Iteratively Reweighted Least Squares) is used to obtain the parameters of the outer boundary and the upper and lower eyelids. Experimental results indicate that the proposed algorithm is efficient and effective.

  8. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Several basic issues affect the accuracy of a classifier in supervised learning, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. These problems all limit classifier accuracy and are the reason that there is no globally optimal method for classification, nor any generalized improvement method that can increase the accuracy of an arbitrary classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. Experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending upon the algorithm complexity. PMID:24883382

  9. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Several basic issues affect the accuracy of a classifier in supervised learning, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. These problems all limit classifier accuracy and are the reason that there is no globally optimal method for classification, nor any generalized improvement method that can increase the accuracy of an arbitrary classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. Experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending upon the algorithm complexity.

  10. Identifying optimal models to represent biochemical systems.

    Directory of Open Access Journals (Sweden)

    Mochamad Apri

    Full Text Available Biochemical systems involving a high number of components with intricate interactions often lead to complex models containing a large number of parameters. Although a large model could describe in detail the mechanisms that underlie the system, its very large size may hinder us in understanding the key elements of the system. Also in terms of parameter identification, large models are often problematic. Therefore, a reduced model may be preferred to represent the system. Yet, in order to efficaciously replace the large model, the reduced model should have the same ability as the large model to produce reliable predictions for a broad set of testable experimental conditions. We present a novel method to extract an "optimal" reduced model from a large model to represent biochemical systems by combining a reduction method and a model discrimination method. The former assures that the reduced model contains only those components that are important to produce the dynamics observed in given experiments, whereas the latter ensures that the reduced model gives a good prediction for any feasible experimental conditions that are relevant to answer questions at hand. These two techniques are applied iteratively. The method reveals the biological core of a model mathematically, indicating the processes that are likely to be responsible for certain behavior. We demonstrate the algorithm on two realistic model examples. We show that in both cases the core is substantially smaller than the full model.

  11. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Shengliang Zong

    2017-01-01

    Full Text Available We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N that minimizes the average cost rate. The results show that the GA is effective and efficient in finding the optimal solutions. The availability of equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.
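A genetic search over an integer replacement policy N can be sketched as below; the cost function is a hypothetical stand-in (replacement cost spread over N cycles plus repair cost growing with N), not the paper's availability-constrained model:

```python
import random

def avg_cost_rate(N):
    """Toy average cost rate: replacement cost amortized over N cycles
    plus repair cost growing with N (illustrative stand-in)."""
    replacement, repair = 100.0, 2.0
    return replacement / N + repair * N

def genetic_search(fitness, lo, hi, pop_size=20, generations=60):
    """Minimal elitist GA over integers in [lo, hi]."""
    pop = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a + b) // 2                  # crossover: average
            if random.random() < 0.3:             # mutation: +-1 step
                child = min(hi, max(lo, child + random.choice([-1, 1])))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

random.seed(1)
best_N = genetic_search(avg_cost_rate, 1, 50)
print(best_N, avg_cost_rate(best_N))
```

For this cost function the analytic optimum is N = sqrt(100/2), i.e. around 7, which the search recovers.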

  12. Spectroscopic determination of optimal hydration time of zircon surface

    Energy Technology Data Exchange (ETDEWEB)

    Ordonez R, E. [ININ, Departamento de Quimica, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Garcia R, G. [Instituto Tecnologico de Toluca, Division de Estudios del Posgrado, Av. Tecnologico s/n, Ex-Rancho La Virgen, 52140 Metepec, Estado de Mexico (Mexico); Garcia G, N., E-mail: eduardo.ordonez@inin.gob.m [Universidad Autonoma del Estado de Mexico, Facultad de Quimica, Av. Colon y Av. Tollocan, 50180 Toluca, Estado de Mexico (Mexico)

    2010-07-01

    When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous-solution contact time required for complete surface hydration is mandatory for further surface phenomena studies. This study deals with the optimal hydration time of the raw zircon (ZrSiO{sub 4}) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable, as it demands only one sample batch to determine the optimal time to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy{sup 3+}, Eu{sup 3+} and Er{sup 3+} in the bulk of the zircon. The Dy{sup 3+} is structured in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, the Dy{sup 3+} has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method requires only 5 minutes on a single batch. Both methods showed that the zircon surface has a 16 h optimal hydration time. (Author)

  13. Spectroscopic determination of optimal hydration time of zircon surface

    International Nuclear Information System (INIS)

    Ordonez R, E.; Garcia R, G.; Garcia G, N.

    2010-01-01

    When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous-solution contact time required for complete surface hydration is mandatory for further surface phenomena studies. This study deals with the optimal hydration time of the raw zircon (ZrSiO 4 ) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable, as it demands only one sample batch to determine the optimal time to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy 3+ , Eu 3+ and Er 3+ in the bulk of the zircon. The Dy 3+ is structured in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, the Dy 3+ has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method requires only 5 minutes on a single batch. Both methods showed that the zircon surface has a 16 h optimal hydration time. (Author)

  14. Determination of optimal conditions of oxytetracyclin production from streptomyces rimosus

    International Nuclear Information System (INIS)

    Zouaghi, Atef

    2007-01-01

    Streptomyces rimosus is an oxytetracycline (OTC)-producing bacterium that exhibits activity against gram-positive and gram-negative bacteria. OTC is used widely not only in medicine but also in industrial production. Antibiotic production by Streptomyces covers a very wide range of conditions; however, antibiotic producers are particularly fastidious and must be cultivated with proper selection of media, such as the carbon source. In the present study we optimized the conditions of OTC production (composition of the production medium, pH, shaking and temperature). The results showed that barley bran is the optimal medium for OTC production, at 28 C and pH 5.8, with shaking at 150 rpm for 5 days. For antibiotic determination, OTC was extracted with different organic solvents. A thin-layer chromatography system was used for the separation and identification of the OTC antibiotic. A high-performance liquid chromatography (HPLC) method with ultraviolet detection was applied to determine the purity of OTC. (Author). 24 refs

  15. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple-criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India to demonstrate the effectiveness of the proposed methodology.

  16. Optimal transportation networks models and theory

    CERN Document Server

    Bernot, Marc; Morel, Jean-Michel

    2009-01-01

    The transportation problem can be formalized as the problem of finding the optimal way to transport a given measure into another with the same mass. In contrast to the Monge-Kantorovitch problem, recent approaches model the branched structure of such supply networks as minima of an energy functional whose essential feature is to favour wide roads. Such a branched structure is observable in ground transportation networks, in draining and irrigation systems, in electrical power supply systems and in natural counterparts such as blood vessels or the branches of trees. These lectures provide mathematical proof of several existence, structure and regularity properties empirically observed in transportation networks. The link with previous discrete physical models of irrigation and erosion models in geomorphology and with discrete telecommunication and transportation models is discussed. It will be mathematically proven that the majority fit in the simple model sketched in this volume.

  17. Optimization of determination of 126Sn by ion exchange chromatography method (presentation)

    International Nuclear Information System (INIS)

    Pasteka, L.; Dulanska, S.

    2013-01-01

    The aim of this work is to optimize the uptake of tin on anion exchange resins and to apply this knowledge to the analysis of radioactive waste samples from the Jaslovske Bohunice and Mochovce facilities when determining 126 Sn. First, a method for the separation of tin on the ion-exchange sorbent Anion Exchange Resin (1-X8, Chloride Form) from Eichrom Technologies was optimized. The model sample was prepared in 7 mol dm -3 HCl, because in that environment the sorbent effectively captures tin, which is complexed with chloride anions as SnCl 6 2- . The radiochemical separation yield was monitored by gamma-spectrometric measurements on a high-purity germanium detector HPGe (E = 391 keV), with the isotope 113 Sn added to each model solution. The tin separation method was optimized on model samples.

  18. Heuristic Optimization Techniques for Determining Optimal Reserve Structure of Power Generating Systems

    DEFF Research Database (Denmark)

    Ding, Yi; Goel, Lalit; Wang, Peng

    2012-01-01

    Electric power generating systems are typical examples of multi-state systems (MSS). Sufficient reserve is critically important for maintaining generating system reliability. The reliability of a system can be increased by increasing the reserve capacity, though at the same time the reserve cost of the system will also increase. The reserve structure of a MSS should therefore be determined by striking a balance between the required reliability and the reserve cost. The objective of reserve management for a MSS is to schedule the reserve at the minimum system reserve cost while maintaining the required level of supply reliability to its customers. In previous research, the Genetic Algorithm (GA) has been used to solve most reliability optimization problems. However, the GA is not very computationally efficient in some cases. In this chapter a new heuristic optimization technique, the particle swarm…

  19. Image-Optimized Coronal Magnetic Field Models

    Science.gov (United States)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.

    2017-01-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and the effect on the outcome of the optimization of errors in localization of constraints. We find that substantial improvement in the model field can be achieved with this type of constraints, even when magnetic features in the images are located outside of the image plane.

  20. Image-optimized Coronal Magnetic Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov [NASA Goddard Space Flight Center, Code 670, Greenbelt, MD 20771 (United States)

    2017-08-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  1. An Optimization Waste Load Allocation Model in River Systems

    Science.gov (United States)

    Amirpoor Daylami, A.; jarihani, A. A.; Aminisola, K.

    2012-04-01

    In many river systems, increasing waste discharge leads to increasing pollution of these water bodies. Since the capacity of the river flow to accept pollution is limited and the ability of the river to clean itself is restricted, dischargers have to release their waste into the river only after a primary treatment process. Waste load allocation, a well-known water quality control strategy, is used to determine the optimal pollutant removal at a number of point sources along the river. This paper aims at developing a new approach for the treatment and management of wastewater inputs into river systems such that water quality standards in the receiving waters are met. In this study, inspired by the fact that cooperation among single point-source waste dischargers can lead to greater waste acceptance capacity and/or more effective quality control in a river, an efficient approach was implemented to determine both primary wastewater treatment levels and/or the best release points of the waste into the river. In this methodology, a genetic algorithm is used as an optimization tool to calculate optimal fractional removal levels for each single or shared discharger. Besides, a sub-model embedded in the optimization model simulates the water quality of the river under each discharging scenario, based on the modified Streeter-Phelps quality equations. The practical application of the model is illustrated with a case study of the Gharesoo river system in the west of Iran.
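The classic (unmodified) Streeter-Phelps oxygen-sag relation that such water-quality sub-models build on can be written down directly; the rate constants and loads below are illustrative, not the case study's calibrated values:

```python
import math

def streeter_phelps(t, L0, D0, kd, ka):
    """Classic Streeter-Phelps dissolved-oxygen deficit (mg/L) at travel
    time t (days); kd = deoxygenation rate, ka = reaeration rate (1/day),
    L0 = initial BOD (mg/L), D0 = initial deficit (mg/L)."""
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)

def critical_time(L0, D0, kd, ka):
    """Travel time of the maximum deficit (the sag point), for ka != kd."""
    return (1.0 / (ka - kd)) * math.log(
        (ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0)))

tc = critical_time(L0=20.0, D0=1.0, kd=0.3, ka=0.7)
print(tc, streeter_phelps(tc, 20.0, 1.0, 0.3, 0.7))
```

An optimization model then constrains the deficit at the sag point of each reach while choosing the removal fractions that scale L0 at every discharger.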

  2. Combined optimization model for sustainable energization strategy

    Science.gov (United States)

    Abtew, Mohammed Seid

    Access to energy is a foundation for a positive impact on multiple aspects of human development. Developed and developing countries share a common concern: achieving a sustainable energy supply to fuel economic growth and improve the quality of life with minimal environmental impacts. The Least Developed Countries (LDCs), however, have different economic, social, and energy systems. Prevalence of power outages, lack of access to electricity, structural dissimilarity between rural and urban regions, and the dominance of traditional fuels for cooking, with the resultant health and environmental hazards, are some of the distinguishing characteristics of these nations. Most energy planning models have been designed for the socio-economic demographics of developed countries and have missed the opportunity to address special features of the poor countries. An improved mixed-integer programming energy-source optimization model is developed to address limitations associated with using current energy optimization models for LDCs, to tackle the development of sustainable energization strategies, and to ensure diversification and risk-management provisions in the selected energy mix. The model predicted a shift from a traditional-fuel-reliant and weather-vulnerable energy source mix to a least-cost and reliable portfolio of modern clean energy sources, a climb up the energy ladder, and multifaceted economic, social, and environmental benefits. At the same time, it represented a transition strategy that evolves toward increasingly cleaner energy technologies with growth, as opposed to an expensive solution that leapfrogs immediately to the cleanest possible, overreaching technologies.

  3. Optimization of hybrid model on hajj travel

    Science.gov (United States)

    Cahyandari, R.; Ariany, R. L.; Sukono

    2018-03-01

    Hajj travel insurance is an insurance product offered by insurance companies to help prepare funds for performing the pilgrimage. This product helps would-be pilgrims set aside hajj savings regularly, while also providing profit-sharing (mudharabah) funds and insurance protection. The fund-management scheme of the product largely uses the hybrid model, in which the funds from would-be pilgrims are divided among three accounts: personal, tabarru', and ujrah. The hybrid-model scheme for hajj travel insurance was discussed in an earlier paper titled "The Hybrid Model Algorithm on Sharia Insurance", taking as an example the Mitra Mabrur Plus product from the Bumiputera company. In this follow-up paper, the previous model design is optimized by partitioning the benefits of the tabarru' account. Benefits such as compensation for 40 critical illnesses, initially intended for insurance participants only, are in the optimization extended to the participant and his heir, and also to cover hospital bills. Meanwhile, the death benefit is paid only if the participant dies.
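The three-account split at the heart of the hybrid model can be sketched as simple arithmetic; the tabarru' and ujrah rates below are illustrative assumptions, not the Mitra Mabrur Plus tariff:

```python
def split_premium(premium, tabarru_rate=0.05, ujrah_rate=0.10):
    """Split a hajj-insurance contribution into the three hybrid-model
    accounts: tabarru' (risk pool), ujrah (operator fee), and the
    participant's personal saving account.  Rates are illustrative."""
    tabarru = premium * tabarru_rate
    ujrah = premium * ujrah_rate
    personal = premium - tabarru - ujrah
    return {"personal": personal, "tabarru": tabarru, "ujrah": ujrah}

accounts = split_premium(1_000_000)  # e.g. one contribution, in rupiah
print(accounts)
```

The personal account accrues as savings (plus any mudharabah share), while claims such as critical-illness compensation are drawn from the pooled tabarru' account.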

  4. Overconfidence, Managerial Optimism, and the Determinants of Capital Structure

    Directory of Open Access Journals (Sweden)

    Alexandre di Miceli da Silveira

    2008-12-01

    Full Text Available This research examines the determinants of the capital structure of firms, introducing a behavioral perspective that has received little attention in the corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: firms managed by optimistic and/or overconfident people will choose more levered financing structures than others, ceteris paribus. We propose different proxies for optimism/overconfidence, based on the manager's status as an entrepreneur or non-entrepreneur, an idea that is supported by theories and solid empirical evidence, as well as on the pattern of ownership of the firm's shares by its manager. The study also includes potential determinants of capital structure used in earlier research. We use a sample of Brazilian firms listed on the Sao Paulo Stock Exchange (Bovespa) in the years 1998 to 2003. The empirical analysis suggests that the proxies for the referred cognitive biases are important determinants of capital structure. We also found as relevant explanatory variables: profitability, size, dividend payment and tangibility, as well as some indicators that capture the firms' corporate governance standards. These results suggest that behavioral approaches based on human psychology research can offer relevant contributions to the understanding of corporate decision making.

  5. Optimal control in a model of malaria with differential susceptibility

    Science.gov (United States)

    Hincapié, Doracelly; Ospina, Juan

    2014-06-01

    A malaria model with differential susceptibility is analyzed using the optimal control technique. In the model the human population is classified as susceptible, infected and recovered. Susceptibility is assumed to depend on genetic, physiological, or social characteristics that vary between individuals. The model is described by a system of differential equations that relate the human and vector populations, so that the infection is transmitted to humans by vectors and to vectors by humans. The model is analyzed using the optimal control method, where the control consists of the use of insecticide-treated nets and educational campaigns, and the optimality criterion is to minimize the number of infected humans while keeping the cost as low as possible. The first goal is to determine the effects of differential susceptibility on the proposed control mechanism; the second goal is to determine the algebraic form of the basic reproductive number of the model. All computations are performed using computer algebra, specifically Maple. It is claimed that the analytical results obtained are important for the design and implementation of control measures for malaria. Future investigations are suggested, such as the application of the method to other vector-borne diseases such as dengue or yellow fever, and the possible application of free computer algebra software such as Maxima.

  6. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    Science.gov (United States)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is a process for finding the parameter (or parameters) able to deliver an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that numerous researchers have pursued. A generic model is a model that can be operated to solve any variety of optimization problem. Using an object-oriented method, the generic model for optimization was constructed. Moreover, two types of optimization method, simulated annealing and hill climbing, were used in constructing the model and then compared to find the more effective one. The results showed that both methods gave the same objective-function value, and that the hill-climbing-based model consumed the shortest running time.
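The behavioral difference between the two methods compared here shows up on a toy multimodal objective: greedy hill climbing stalls in the nearest local minimum, while simulated annealing's probabilistic acceptance can escape it. The objective and parameters below are invented for illustration:

```python
import math, random

def f(x):
    """Multimodal objective: global minimum at x = 0, with a local
    minimum near x = 4.9 that traps greedy search."""
    return x * x + 10.0 * (1.0 - math.cos(x))

def hill_climb(x, step=0.1, iters=500):
    """Greedy descent: accept a neighbor only if it improves f."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

def simulated_annealing(x, iters=2000, t0=10.0):
    """Accept worse moves with probability exp(-delta/t); track the best."""
    best = x
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9      # linear cooling schedule
        cand = x + random.gauss(0.0, 1.0)
        if f(cand) < f(x) or random.random() < math.exp((f(x) - f(cand)) / t):
            x = cand
            if f(x) < f(best):
                best = x
    return best

random.seed(0)
x0 = 5.0
hc = hill_climb(x0)           # stuck at the local minimum near 4.9
sa = simulated_annealing(x0)  # can escape toward the global minimum
print(f(hc), f(sa))
```

On a unimodal (convex) objective the two methods reach the same value, as in the paper's experiment, but hill climbing wastes no evaluations on rejected uphill moves.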

  7. Determination and optimization of spatial samples for distributed measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Huo, Xiaoming (Georgia Institute of Technology, Atlanta, GA); Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong (Georgia Institute of Technology, Atlanta, GA)

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  8. Markowitz portfolio optimization model employing fuzzy measure

    Science.gov (United States)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research shaped the portfolio risk-return model and became one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine the risk and return. We apply the original mean-variance model as a benchmark, a fuzzy mean-variance model with fuzzy returns, and, for comparison, a model whose returns are modeled by specific types of fuzzy numbers. The model with the fuzzy approach gives better performance than the mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
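For the two-asset case, the classical (crisp) Markowitz minimum-variance weights have a closed form, w1 = (var2 - cov) / (var1 + var2 - 2*cov); a sketch with illustrative statistics, not the paper's market data:

```python
def min_variance_weights(var1, var2, cov12):
    """Closed-form minimum-variance weights for a two-asset Markowitz
    portfolio (weights sum to 1; shorting allowed)."""
    w1 = (var2 - cov12) / (var1 + var2 - 2.0 * cov12)
    return w1, 1.0 - w1

def portfolio_stats(w1, w2, mu1, mu2, var1, var2, cov12):
    """Portfolio mean and variance for given weights."""
    mean = w1 * mu1 + w2 * mu2
    var = w1 ** 2 * var1 + w2 ** 2 * var2 + 2.0 * w1 * w2 * cov12
    return mean, var

# Illustrative monthly return statistics (assumed, not from the paper).
w1, w2 = min_variance_weights(var1=0.04, var2=0.09, cov12=0.006)
print(w1, w2)
print(portfolio_stats(w1, w2, 0.01, 0.015, 0.04, 0.09, 0.006))
```

The fuzzy extension replaces the crisp means and (co)variances with quantities derived from fuzzy numbers, but the same quadratic trade-off structure remains.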

  9. Business model optimization of Prego Gourmet

    OpenAIRE

    Salema, José Frederico Bettencourt

    2013-01-01

    A Work Project, presented as part of the requirements for the Award of a Masters Degree in Management from the NOVA – School of Business and Economics. Prego Gourmet is a fast food restaurant which sells refined versions of a traditional Portuguese dish inside shopping centers in the area of Lisbon. The company is at the beginning of its expansion strategy. This work project is a prospective analysis of what the company should do in order to optimize its business model and grow in Portug...

  10. Determination of optimal electrode positions for transcranial direct current stimulation (tDCS)

    International Nuclear Information System (INIS)

    Im, Chang-Hwan; Jung, Hui-Hun; Choi, Jung-Do; Lee, Soo Yeol; Jung, Ki-Young

    2008-01-01

    The present study introduces a new approach to determining optimal electrode positions in transcranial direct current stimulation (tDCS). Electric field and 3D conduction current density were analyzed using 3D finite element method (FEM) formulated for a dc conduction problem. The electrode positions for minimal current injection were optimized by changing the Cartesian coordinate system into the spherical coordinate system and applying the (2+6) evolution strategy (ES) algorithm. Preliminary simulation studies applied to a standard three-layer head model demonstrated that the proposed approach is promising in enhancing the performance of tDCS. (note)
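A (2+6) evolution strategy of the kind applied here keeps 2 parents, generates 6 mutated offspring per generation, and retains the best 2 of the combined pool. A minimal sketch on a stand-in objective (the real objective is the FEM-computed injected current, which is far more expensive to evaluate):

```python
import random

def es_2plus6(f, parents, sigma=0.5, generations=100):
    """Minimal (2+6) evolution strategy: 2 parents produce 6 Gaussian-
    mutated offspring; the best 2 of parents+offspring survive."""
    for _ in range(generations):
        offspring = []
        for _ in range(6):
            p = random.choice(parents)
            offspring.append([x + random.gauss(0.0, sigma) for x in p])
        pool = parents + offspring
        pool.sort(key=f)          # elitist selection over the whole pool
        parents = pool[:2]
    return parents[0]

# Toy stand-in objective (sphere function) instead of the FEM cost.
sphere = lambda v: sum(x * x for x in v)

random.seed(2)
best = es_2plus6(sphere, parents=[[3.0, -2.0], [1.5, 2.5]])
print(best, sphere(best))
```

In the study, the two coordinates would instead be the spherical angles of an electrode position on the scalp, with the FEM solve inside `f`.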

  11. Determination of optimal electrode positions for transcranial direct current stimulation (tDCS)

    Energy Technology Data Exchange (ETDEWEB)

    Im, Chang-Hwan; Jung, Hui-Hun; Choi, Jung-Do [Department of Biomedical Engineering, Yonsei University, Wonju, 220-710 (Korea, Republic of); Lee, Soo Yeol [Department of Biomedical Engineering, Kyung Hee University, Suwon (Korea, Republic of); Jung, Ki-Young [Korea University Medical Center, Korea University College of Medicine, Seoul (Korea, Republic of)], E-mail: ich@yonsei.ac.kr

    2008-06-07

    The present study introduces a new approach to determining optimal electrode positions in transcranial direct current stimulation (tDCS). Electric field and 3D conduction current density were analyzed using 3D finite element method (FEM) formulated for a dc conduction problem. The electrode positions for minimal current injection were optimized by changing the Cartesian coordinate system into the spherical coordinate system and applying the (2+6) evolution strategy (ES) algorithm. Preliminary simulation studies applied to a standard three-layer head model demonstrated that the proposed approach is promising in enhancing the performance of tDCS. (note)

  12. Optimal stock selection using the Capital Asset Pricing Model (CAPM)

    Directory of Open Access Journals (Sweden)

    Dioda Ardi Wibisono

    2017-08-01

    Full Text Available An optimal portfolio is the basis for investors to invest in stock. The Capital Asset Pricing Model (CAPM) is a method to determine the values of the risk and return of a company's stock. This research uses secondary data from the monthly closing stock price, the Stock Price Index (SPI), and the monthly SBI rate. The samples of this research are the 41 stocks in the LQ45 of February-July 2015 on the Indonesian Stock Exchange (ISE). The study period is the 5 years from October 2010 to October 2015. The result of the analysis shows that the optimal portfolio consists of 18 companies. The average return of the optimal portfolio is higher than the average risk-free return (SBI rate) and the average market return. This proves that investing in stocks is more profitable than a risk-free investment. Keywords: stock, CAPM, return, risk
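The CAPM return used in such an analysis is the security market line, E[Ri] = Rf + beta_i * (Rm - Rf); a sketch with illustrative figures, not the study's data:

```python
def capm_expected_return(rf, beta, rm):
    """Security market line: E[Ri] = Rf + beta_i * (Rm - Rf)."""
    return rf + beta * (rm - rf)

# Illustrative annual figures (assumed): 6% risk-free rate (SBI proxy),
# 12% market return, and a stock beta of 1.3.
print(capm_expected_return(rf=0.06, beta=1.3, rm=0.12))
```

A stock whose realized average return exceeds this CAPM-required return is undervalued under the model, which is the screening rule used to build the optimal portfolio.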

  13. A projection method for under-determined optimal experimental designs

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2014-01-01

    A new implementation, based on the Laplace approximation, was developed in (Long, Scavino, Tempone, & Wang 2013) to accelerate the estimation of the post-experimental expected information gains in the model parameters and predictive quantities of interest. A closed-form approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general cases where the model parameters could not be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback-Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters which are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under-determined numerical examples.

  14. A projection method for under-determined optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-01-09

    A new implementation, based on the Laplace approximation, was developed in (Long, Scavino, Tempone, & Wang 2013) to accelerate the estimation of the post-experimental expected information gains in the model parameters and predictive quantities of interest. A closed-form approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general cases where the model parameters could not be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback-Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters which are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under-determined numerical examples.

  15. Optimal evolution models for quantum tomography

    International Nuclear Information System (INIS)

    Czerwiński, Artur

    2016-01-01

    The research presented in this article concerns the stroboscopic approach to quantum tomography, which is an area of science where quantum physics and linear algebra overlap. In this article we introduce the algebraic structure of the parametric-dependent quantum channels for 2-level and 3-level systems such that the generator of evolution corresponding with the Kraus operators has no degenerate eigenvalues. In such cases the index of cyclicity of the generator is equal to 1, which physically means that there exists one observable the measurement of which performed a sufficient number of times at distinct instants provides enough data to reconstruct the initial density matrix and, consequently, the trajectory of the state. The necessary conditions for the parameters and relations between them are introduced. The results presented in this paper seem to have considerable potential applications in experiments due to the fact that one can perform quantum tomography by conducting only one kind of measurement. Therefore, the analyzed evolution models can be considered optimal in the context of quantum tomography. Finally, we introduce some remarks concerning optimal evolution models in the case of n-dimensional Hilbert space. (paper)

  16. Process optimization of friction stir welding based on thermal models

    DEFF Research Database (Denmark)

    Larsen, Anders Astrup

    2010-01-01

    This thesis investigates how to apply optimization methods to numerical models of a friction stir welding process. The work is intended as a proof-of-concept using different methods that are applicable to models of high complexity, possibly with high computational cost, and without the possibility...... information of the high-fidelity model. The optimization schemes are applied to stationary thermal models of differing complexity of the friction stir welding process. The optimization problems considered are based on optimizing the temperature field in the workpiece by finding optimal translational speed....... Also an optimization problem based on a microstructure model is solved, allowing the hardness distribution in the plate to be optimized. The use of purely thermal models represents a simplification of the real process; nonetheless, it shows the applicability of the optimization methods considered...

  17. An optimization model for transportation of hazardous materials

    International Nuclear Information System (INIS)

    Seyed-Hosseini, M.; Kheirkhah, A. S.

    2005-01-01

    In this paper, the optimal routing problem for the transportation of hazardous materials is studied. Routing for the purpose of reducing the risk of transporting hazardous materials has been studied and formulated by many researchers, and several routing models have been presented to date. These models can be classified into two categories: models for routing a single movement and models for routing multiple movements. In this paper, according to the current rules and regulations for road transportation of hazardous materials in Iran, a routing problem is designed in which the routes for several independent movements are determined simultaneously. To examine the model, the problem of transporting two different dangerous materials in the road network of Mazandaran province in northern Iran is formulated and solved by applying an integer programming model.
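    As a toy stand-in for the integer programming formulation, a shortest-path search over edge risk values illustrates the single-movement routing idea. The network and risk weights below are hypothetical.

    ```python
    import heapq

    def min_risk_path(graph, src, dst):
        """Dijkstra on edge 'risk' weights; graph: {node: [(neighbor, risk), ...]}."""
        dist = {src: 0.0}
        prev = {}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float('inf')):
                continue  # stale queue entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
        # reconstruct the minimum-risk route
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path)), dist[dst]

    # toy network: edge weight = risk score of the road segment (invented numbers)
    g = {'A': [('B', 4.0), ('C', 1.0)],
         'B': [('D', 1.0)],
         'C': [('B', 1.0), ('D', 5.0)],
         'D': []}
    route, risk = min_risk_path(g, 'A', 'D')
    print(route, risk)  # ['A', 'C', 'B', 'D'] 3.0
    ```

    The paper's model additionally couples several such movements through shared constraints, which is what pushes it into integer programming.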

  18. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    OpenAIRE

    Zong, Shengliang; Chai, Guorong; Su, Yana

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requi...
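    A minimal Monte Carlo sketch of the cost structure behind such a policy, using brute-force enumeration of the policy parameter in place of the paper's genetic algorithm. All cost figures and geometric ratios are assumed for illustration.

    ```python
    import random

    random.seed(7)

    def avg_cost(n_repairs, n_cycles=4000, c_repair=50.0, c_replace=500.0, reward=20.0):
        """Monte Carlo average cost rate of the policy 'replace after N repairs'.
        Operating spells shrink and repair times grow geometrically (assumed ratios)."""
        a, b = 0.9, 1.2          # geometric ratios: uptime decay, repair-time growth
        total_cost = total_time = 0.0
        for _ in range(n_cycles):
            for k in range(n_repairs + 1):
                up = random.expovariate(1.0 / (100.0 * a ** k))  # operating spell
                total_time += up
                total_cost -= reward * up                        # operation earns revenue
                if k < n_repairs:
                    rep = random.expovariate(1.0 / (5.0 * b ** k))
                    total_time += rep
                    total_cost += c_repair
            total_cost += c_replace                              # renewal (new equipment)
        return total_cost / total_time

    costs = {n: avg_cost(n) for n in (0, 2, 5, 10)}
    best = min(costs, key=costs.get)
    print(costs, best)
    ```

    The GA in the paper searches this same cost landscape; enumeration suffices only because the toy policy has a single parameter.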

  19. Optimal Experimental Design for Model Discrimination

    Science.gov (United States)

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  20. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  1. Optimizing ZigBee Security using Stochastic Model Checking

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    ZigBee is a fairly new but promising wireless sensor network standard that offers the advantages of simple and low resource communication. Nevertheless, security is of great concern to ZigBee, and enhancements are prescribed in the latest ZigBee specification: ZigBee-2007. In this technical report, we identify an important gap in the specification on key updates, and present a methodology for determining optimal key update policies and security parameters. We exploit the stochastic model checking approach using the probabilistic model checker PRISM, and assess the security needs for realistic......

  2. Model-based dynamic control and optimization of gas networks

    Energy Technology Data Exchange (ETDEWEB)

    Hofsten, Kai

    2001-07-01

    This work contributes to the research on control, optimization and simulation of gas transmission systems to support the dispatch personnel at gas control centres in their decision making in the daily operation of natural gas transportation systems. Different control and optimization strategies have been studied. The focus is on the operation of long distance natural gas transportation systems. Stationary optimization in conjunction with linear model predictive control using state space models is proposed for supply security, the control of quality parameters and minimization of transportation costs for networks offering transportation services. The result of the stationary optimization, together with a reformulation of a simplified fluid flow model, defines a linear dynamic optimization model. This model is used in a finite time control and state constrained linear model predictive controller. The deviation from the control and state references determined by the stationary optimization is penalized quadratically. Because of the time-varying status of the infrastructure, the control space is also generally time varying. When the average load is expected to change considerably, a new stationary optimization is performed, giving a new state and control reference together with a new dynamic model that is used for both optimization and state estimation. Another proposed control strategy is a control and output constrained nonlinear model predictive controller for the operation of gas transmission systems. Here, the objective is also the security of the supply, quality control and minimization of transportation costs. An output vector is defined, which together with a control vector are both penalized quadratically from their respective references in the objective function. The nonlinear model predictive controller can be combined with a stationary optimization. At each sampling instant, a nonconvex nonlinear programming problem is solved, giving a local minimum.

  3. Modeling and optimization of planar microcoils

    International Nuclear Information System (INIS)

    Beyzavi, Ali; Nguyen, Nam-Trung

    2008-01-01

    Magnetic actuation has emerged as a useful tool for manipulating particles, droplets and biological samples in microfluidics. A planar coil is one of the suitable candidates for magnetic actuation and has the potential to be integrated in digital microfluidic devices. A simple model of microcoils is needed to optimize their use in actuation applications. This paper first develops an analytical model for calculating the magnetic field of a planar microcoil. The model was validated by experimental data from microcoils fabricated on printed circuit boards (PCB). The model was used for calculating the field strength and the force acting on a magnetic object. Finally, the effect of different coil parameters such as the magnitude of the electric current, the gap between the wires and the number of wire segments is discussed. Both analytical and experimental results show that a smaller gap size between wire segments, more wire segments and a higher electric current can increase both the magnitude and the gradient of the magnetic field, and consequently cause a higher actuating force. The planar coil analyzed in the paper is suitable for applications in magnetic droplet-based microfluidics
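    The on-axis field of a planar spiral can be approximated by summing the closed-form fields of concentric circular loops, which reproduces the qualitative trends reported here (more turns and higher current strengthen the field). The geometry values below are illustrative, not the fabricated PCB coil's dimensions.

    ```python
    import math

    MU0 = 4e-7 * math.pi   # vacuum permeability (H/m)

    def axial_field(current, radii, z):
        """On-axis field (tesla) of a planar spiral modeled as concentric loops.
        Each loop contributes mu0*I*r^2 / (2*(r^2 + z^2)^(3/2))."""
        return sum(MU0 * current * r ** 2 / (2.0 * (r ** 2 + z ** 2) ** 1.5)
                   for r in radii)

    def spiral_radii(n_turns, r_inner, gap):
        """Loop radii for n_turns starting at r_inner with a wire-to-wire gap."""
        return [r_inner + k * gap for k in range(n_turns)]

    r5 = spiral_radii(5, 1e-3, 0.5e-3)
    r10 = spiral_radii(10, 1e-3, 0.5e-3)
    b5 = axial_field(0.5, r5, 1e-3)    # 5 turns, 0.5 A, 1 mm above the coil plane
    b10 = axial_field(0.5, r10, 1e-3)  # doubling the turns strengthens the field
    print(b5, b10)
    ```

    Field linearity in the current also falls out directly: doubling I doubles B.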

  4. Confined compressive strength model of rock for drilling optimization

    Directory of Open Access Journals (Sweden)

    Xiangchao Shi

    2015-03-01

    Full Text Available The confined compressive strength (CCS plays a vital role in drilling optimization. On the basis of Jizba's experimental results, a new CCS model considering the effects of the porosity and nonlinear characteristics with increasing confining pressure has been developed. Because the confining pressure plays a fundamental role in determining the CCS of bottom-hole rock and because the theory of Terzaghi's effective stress principle is founded upon soil mechanics, which is not suitable for calculating the confining pressure in rock mechanics, the double effective stress theory, which treats the porosity as a weighting factor of the formation pore pressure, is adopted in this study. The new CCS model combined with the mechanical specific energy equation is employed to optimize the drilling parameters in two practical wells located in Sichuan basin, China, and the calculated results show that they can be used to identify the inefficient drilling situations of underbalanced drilling (UBD and overbalanced drilling (OBD.

  5. Multiobjective Optimization Model for Wind Power Allocation

    Directory of Open Access Journals (Sweden)

    Juan Alemany

    2017-01-01

    Full Text Available There is an increasing need to inject renewable energy into the grid; therefore, evaluating the optimal location of new renewable generation is an important task. The primary purpose of this work is to develop a multiobjective optimization model that permits finding multiple trade-off solutions for the location of new wind power resources. It is based on the augmented ε-constrained methodology. Two competitive objectives are considered: maximization of preexisting energy injection and maximization of new wind energy injection, both embedded in the maximization of load supply. The results show that the location of new renewable generation units considerably affects the transmission network flows, the load supply, and the preexisting energy injection. Moreover, there are diverse opportunities to benefit the preexisting generation, contrary to the expected effect whereby renewable generation displaces conventional power. The proposed methodology produces a diverse range of equivalent solutions, expanding and enriching the horizon of options and giving flexibility to the decision-making process.
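    The augmented ε-constrained idea can be sketched on a toy two-objective problem: sweep a lower bound (ε) on new wind injection and maximize preexisting injection subject to a shared capacity. The capacity value and grid search are invented for illustration.

    ```python
    def pareto_eps_constraint(capacity=10.0, steps=5):
        """Trace trade-off solutions by sweeping the epsilon bound on objective 2
        (new wind injection) while maximizing objective 1 (preexisting injection)."""
        front = []
        for i in range(steps + 1):
            eps = capacity * i / steps          # required new-wind injection
            best = None
            for xi in range(0, 101):            # coarse grid over preexisting injection
                x = capacity * xi / 100
                y = capacity - x                # remaining capacity goes to new wind
                if y >= eps and (best is None or x > best[0]):
                    best = (x, y)
            front.append(best)
        return front

    front = pareto_eps_constraint()
    print(front)  # endpoints: all preexisting vs. all new wind
    ```

    Each entry of `front` is one trade-off solution; the real model solves a network-constrained MILP at each ε instead of a grid scan.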

  6. A combined geostatistical-optimization model for the optimal design of a groundwater quality monitoring network

    Science.gov (United States)

    Kolosionis, Konstantinos; Papadopoulou, Maria P.

    2017-04-01

    Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed based on heuristic optimization methodologies and geostatistical modeling approaches to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations at 3 different hydrological periods in Mires basin in Crete, Greece will be used in the proposed framework in terms of Regression Kriging to develop the spatial distribution of nitrates concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool will determine a cost-effective observation well network that contributes significant information to water managers and authorities. Observation wells that add little or no beneficial information to groundwater level and quality mapping of the area can be eliminated using estimation uncertainty and statistical error metrics without affecting the assessment of the groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.

  7. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    Full Text Available A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported by Discrete Event Simulation (DES). In that direction, the basics of the KANBAN system are first presented with emphasis on the information and material flow, together with a methodology for implementation of the KANBAN system. An analysis of combining simulation with this methodology is presented. The paper concludes with a practical example which shows that, through understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created which can serve as a basis for a variety of experiments that can be conducted within a short period of time, resulting in production process optimization.
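    A minimal time-stepped sketch of the pull principle behind such a simulation: the upstream stage may produce only while a kanban (free buffer slot) is available, so work-in-progress is bounded by the kanban count. The processing probabilities are arbitrary, and a real DES would be event-driven rather than time-stepped.

    ```python
    import random

    random.seed(1)

    def simulate_kanban(n_kanban, horizon=10000):
        """Two-stage pull system: upstream produces only when a kanban is free."""
        buffer = 0            # parts waiting between the two stages
        max_wip = 0
        produced = consumed = 0
        for _ in range(horizon):
            # upstream finishes a part with probability 0.6, if a kanban is free
            if buffer < n_kanban and random.random() < 0.6:
                buffer += 1
                produced += 1
            # downstream pulls a part with probability 0.5
            if buffer > 0 and random.random() < 0.5:
                buffer -= 1
                consumed += 1
            max_wip = max(max_wip, buffer)
        return produced, consumed, max_wip

    produced, consumed, max_wip = simulate_kanban(3)
    print(produced, consumed, max_wip)
    ```

    Note that `max_wip` can never exceed the number of kanbans, which is precisely the inventory cap the method is built around.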

  8. Modeling and Optimizing Antennas for Rotational Spectroscopy Applications

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2006-12-01

    Full Text Available In the paper, dielectric and metallic lenses are modeled and optimized in order to enhance the gain of a horn antenna in the frequency range from 60 GHz to 100 GHz. Properties of designed lenses are compared and discussed. The structures are modeled in CST Microwave Studio and optimized by Particle Swarm Optimization (PSO in order to get required antenna parameters.

  9. A Demonstration of Optimal Apodization Determination for Proper Lateral Modulation

    Science.gov (United States)

    Sumi, Chikayoshi; Komiya, Yuichi; Uga, Shinya

    2009-07-01

    We have realized effective ultrasound (US) beamformings by the steering of plural beams and apodizations for B-mode imaging with a high lateral resolution and accurate measurement of tissue or blood displacement vector and/or strain tensor using the multidimensional cross-spectrum phase gradient method (MCSPGM), or multidimensional autocorrelation or Doppler methods (MAM and MDM) using multidimensional analytic signals. For instance, the coherent superposition of the steered beams performed in the lateral cosine modulation method (LCM) has a higher potential for realizing a more accurate measurement of a displacement vector than the synthesis of the displacement vector using the accurately measured axial displacements obtained by the multidimensional synthetic aperture method (MDSAM), multidirectional transmission method (MTM) or the use of plural US transducers. Originally, the apodization function to be used for realizing a designed point spread function (PSF) was obtained by the Fraunhofer approximation (FA). However, to obtain the best approximated, designed PSF in the least-squares sense, we proposed a linear optimization (LO) method. Furthermore, on the basis of knowledge of the losses of US energy during propagation, we have recently developed a nonlinear optimization (NLO) method, in which the feet of the main lobes in the apodization function are properly truncated. Thus, NLO also allows a decrease in the number of channels or the confinement of the effective aperture. In this study, to gain insight into the ideal shape of the PSF, the accuracies of the two-dimensional (2D) displacement vector measurements were compared for typical PSFs with distinct lateral envelope shapes, particularly in terms of full width at half maximum (FWHM) and the length of the feet, i.e., the Gaussian function, Hanning window and parabolic function. It was confirmed that a PSF having a wide FWHM and short feet was ideal. Such a PSF yielded an echo with a high signal

  10. Optimization models for flight test scheduling

    Science.gov (United States)

    Holian, Derreck

    As threats around the world increase, with nations developing new generations of warfare technology, the United States is keen on maintaining its position at the top of the defense technology curve. This in turn means that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous act of testing and deployment is due to the contracted procurement process intended to provide a rapid Initial Operating Capability (IOC) release of the 5th Generation fighter. For this reason, many factors go into determining what is to be tested, in what order, and at which time, according to the military requirements. A certain system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in determining what testing can be achieved on an aircraft at a point in time. Furthermore, it defines the optimum allocation of test points to aircraft and determines a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information have not been presented. Generic, representative data is used for example and proof-of-concept purposes. Apart from the data correlation algorithms, the optimization associated

  11. Ground Vehicle System Integration (GVSI) and Design Optimization Model

    National Research Council Canada - National Science Library

    Horton, William

    1996-01-01

    This report documents the Ground Vehicle System Integration (GVSI) and Design Optimization Model GVSI is a top-level analysis tool designed to support engineering tradeoff studies and vehicle design optimization efforts...

  12. An optimization model for improving highway safety

    Directory of Open Access Journals (Sweden)

    Promothes Saha

    2016-12-01

    Full Text Available This paper developed a traffic safety management system (TSMS for improving safety on county paved roads in Wyoming. TSMS is a strategic and systematic process to improve safety of roadway network. When funding is limited, it is important to identify the best combination of safety improvement projects to provide the most benefits to society in terms of crash reduction. The factors included in the proposed optimization model are annual safety budget, roadway inventory, roadway functional classification, historical crashes, safety improvement countermeasures, cost and crash reduction factors (CRFs associated with safety improvement countermeasures, and average daily traffics (ADTs. This paper demonstrated how the proposed model can identify the best combination of safety improvement projects to maximize the safety benefits in terms of reducing overall crash frequency. Although the proposed methodology was implemented on the county paved road network of Wyoming, it could be easily modified for potential implementation on the Wyoming state highway system. Other states can also benefit by implementing a similar program within their jurisdictions.

  13. Portfolio optimization by using linear programing models based on genetic algorithm

    Science.gov (United States)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discussed investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by absolute standard deviation, and each investor has a risk tolerance on the investment portfolio. To solve the investment portfolio optimization problem, it is formulated as a linear programming model. Furthermore, the optimum solution of the linear program is determined by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by the genetic algorithm approach produces a more efficient optimal portfolio than that performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the investment portfolio optimization, particularly using linear programming models.
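    A compact sketch of the approach under stated assumptions: a genetic algorithm searches portfolio weights minimizing the mean absolute deviation of the portfolio return series. The return data are invented and the GA operators are generic, not those of the paper.

    ```python
    import random

    random.seed(42)

    # hypothetical weekly returns for three stocks (illustrative data, not real quotes)
    returns = [
        [0.01, 0.012, 0.009, 0.011, 0.010],   # low-volatility stock
        [0.05, -0.04, 0.06, -0.05, 0.04],     # high-volatility stock
        [0.02, 0.00, 0.03, -0.01, 0.02],
    ]

    def risk(w):
        """Mean absolute deviation of the portfolio return series."""
        port = [sum(w[i] * returns[i][t] for i in range(3)) for t in range(5)]
        mean = sum(port) / len(port)
        return sum(abs(x - mean) for x in port) / len(port)

    def normalize(w):
        s = sum(w)
        return [x / s for x in w]

    def ga(pop_size=30, gens=100, mut=0.1):
        pop = [normalize([random.random() for _ in range(3)]) for _ in range(pop_size)]
        pop[0] = [1 / 3, 1 / 3, 1 / 3]        # seed equal weights; elitism keeps the best
        for _ in range(gens):
            pop.sort(key=risk)
            next_pop = pop[:2]                # elitism: carry the two fittest over
            while len(next_pop) < pop_size:
                a, b = random.sample(pop[:10], 2)                     # parent selection
                child = [(x + y) / 2 for x, y in zip(a, b)]           # crossover
                child = [max(1e-9, x + random.gauss(0, mut)) for x in child]  # mutation
                next_pop.append(normalize(child))
            pop = next_pop
        return min(pop, key=risk)

    best = ga()
    print(best, risk(best))
    ```

    Because the elitist GA starts from a population containing the equal-weight portfolio, its result can never be riskier than that baseline.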

  14. Anode baking process optimization through computer modelling

    Energy Technology Data Exchange (ETDEWEB)

    Wilburn, D.; Lancaster, D.; Crowell, B. [Noranda Aluminum, New Madrid, MO (United States); Ouellet, R.; Jiao, Q. [Noranda Technology Centre, Pointe Claire, PQ (Canada)

    1998-12-31

    Carbon anodes used in aluminum electrolysis are produced in vertical or horizontal type anode baking furnaces. The carbon blocks are formed from petroleum coke aggregate mixed with a coal tar pitch binder. Before the carbon block can be used in a reduction cell it must be heated to pyrolysis. The baking process represents a large portion of the aluminum production cost, and also has a significant effect on anode quality. To ensure that the baking of the anode is complete, it must be heated to about 1100 degrees C. To improve the understanding of the anode baking process and to improve its efficiency, a menu-driven heat, mass and fluid flow simulation tool, called NABSIM (Noranda Anode Baking SIMulation), was developed and calibrated in 1993 and 1994. It has been used since then to evaluate and screen firing practices, and to determine which firing procedure will produce the optimum heat-up rate, final temperature, and soak time, without allowing unburned tar to escape. NABSIM is used as a furnace simulation tool on a daily basis by Noranda plant process engineers and much effort is expended in improving its utility by creating new versions, and the addition of new modules. In the immediate future, efforts will be directed towards optimizing the anode baking process to improve temperature uniformity from pit to pit. 3 refs., 4 figs.
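    The thermal side of such a furnace model can be caricatured with a one-dimensional explicit conduction scheme driven by a fixed firing temperature. The grid size and stability parameter below are arbitrary; NABSIM itself is a full heat, mass and fluid flow code.

    ```python
    def bake_profile(n=21, steps=500, r=0.4, t_surface=1100.0, t_init=20.0):
        """Explicit 1-D conduction across an anode slab.
        r = alpha*dt/dx^2 must be <= 0.5 for stability of the scheme.
        Both faces are held at the firing temperature; the interior heats toward it."""
        T = [t_init] * n
        T[0] = T[-1] = t_surface
        for _ in range(steps):
            T = ([T[0]]
                 + [T[i] + r * (T[i - 1] - 2.0 * T[i] + T[i + 1])
                    for i in range(1, n - 1)]
                 + [T[-1]])
        return T

    T = bake_profile()
    center = T[len(T) // 2]   # slowest-heating point of the block
    print(center)
    ```

    Tracking when `center` crosses the pyrolysis target (about 1100 degrees C per the abstract) is the kind of question a firing-practice screen answers.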

  15. Optimization Models and Methods Developed at the Energy Systems Institute

    OpenAIRE

    N.I. Voropai; V.I. Zorkaltsev

    2013-01-01

    The paper presents shortly some optimization models of energy system operation and expansion that have been created at the Energy Systems Institute of the Siberian Branch of the Russian Academy of Sciences. Consideration is given to the optimization models of energy development in Russia, a software package intended for analysis of power system reliability, and model of flow distribution in hydraulic systems. A general idea of the optimization methods developed at the Energy Systems Institute...

  16. Optimality models in the age of experimental evolution and genomics

    OpenAIRE

    Bull, J. J.; Wang, I.-N.

    2010-01-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimen...

  17. Determination of radial profile of ICF hot spot's state by multi-objective parameters optimization

    International Nuclear Information System (INIS)

    Dong Jianjun; Deng Bo; Cao Zhurong; Ding Yongkun; Jiang Shaoen

    2014-01-01

    A method using multi-objective parameter optimization is presented to determine the radial profiles of hot spot temperature and density. A parameter space containing five variables is used to describe the radial temperature and density of the hot spot: the temperatures at the center and at the interface between the fuel and the remaining ablator, the maximum density of the remaining ablator, the mass ratio of the remaining ablator to the initial ablator, and the position of the interface between the fuel and the remaining ablator. Two objective functions are defined as the variances between the normalized intensity profiles from the experimental X-ray images and the theoretical calculation. A third objective function is defined as the variance between the experimental average temperature of the hot spot and the average temperature calculated by the theoretical model. The optimized parameters are obtained by a multi-objective genetic algorithm searching the five-dimensional parameter space, whereby the optimized radial temperature and density profiles can be determined. The radial temperature and density profiles of a hot spot obtained from experimental data measured by a KB microscope coupled with X-ray film are presented. It is observed that the temperature profile is strongly correlated with the objective functions. (authors)

  18. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    Science.gov (United States)

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin label pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624
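    A toy version of the selection idea: for each pair of secondary-structure elements, pick the residue pair with maximum sequence separation, which simultaneously covers pairwise connectivity. The element boundaries are invented, and the published algorithm involves additional criteria beyond this sketch.

    ```python
    def select_restraints(elements):
        """For each pair of secondary-structure elements, choose the residue pair
        with maximum sequence separation (toy stand-in for the selection algorithm)."""
        pairs = []
        names = sorted(elements)
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                a, b = elements[names[i]], elements[names[j]]
                # candidates: element endpoints; keep the most separated combination
                best = max(((x, y) for x in (a[0], a[1]) for y in (b[0], b[1])),
                           key=lambda p: abs(p[0] - p[1]))
                pairs.append((names[i], names[j], best))
        return pairs

    # hypothetical helix boundaries (residue ranges), not T4L's actual topology
    helices = {'H1': (1, 10), 'H2': (15, 25), 'H3': (30, 42)}
    restraints = select_restraints(helices)
    print(restraints)
    ```

    Each emitted tuple names one restraint per element pair, mirroring how the restraint count is set by the pairwise connectivities of the helices.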

  19. Models and Methods for Free Material Optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot

    Free Material Optimization (FMO) is a powerful approach for structural optimization in which the design parametrization allows the entire elastic stiffness tensor to vary freely at each point of the design domain. The only requirement imposed on the stiffness tensor lies on its mild necessary...

  20. Determination of optimal self-drive tourism route using the orienteering problem method

    Science.gov (United States)

    Hashim, Zakiah; Ismail, Wan Rosmanira; Ahmad, Norfaieqah

    2013-04-01

    This paper was conducted to determine the optimal travel routes for self-drive tourism based on the allocation of time and expense, by maximizing the total attraction score assigned to the cities involved. Self-drive tourism is a type of tourism in which tourists hire or travel in their own vehicle; it only involves tourist destinations that can be linked by a road network. Normally, the traveling salesman problem (TSP) and multiple traveling salesman problem (MTSP) methods are used for minimization problems such as determining the shortest travel time or distance. This paper takes an alternative, maximization approach, which maximizes the attraction scores, tested on tourism data for ten cities in Kedah. A set of priority scores is used to set the attraction score of each city. The classical orienteering problem approach was used to determine the optimal travel route. This approach is extended to the team orienteering problem and the two methods were compared. These two models were solved using LINGO 12.0 software. The results indicate that the team orienteering problem model provides a more appropriate solution than the orienteering problem model.
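    For a handful of cities, the orienteering problem can be solved by brute force: enumerate subsets and visit orders, and keep the highest-score tour that fits the travel budget. The scores, distances, and budget below are made up; the paper solves the ten-city instance with LINGO rather than enumeration.

    ```python
    from itertools import permutations

    def best_route(scores, dist, budget, start=0):
        """Brute-force orienteering: start and end at the depot, visit a subset of
        cities, and maximize the collected attraction score within the budget."""
        cities = [c for c in scores if c != start]
        best = (0, (start,))
        for r in range(len(cities) + 1):
            for perm in permutations(cities, r):
                route = (start,) + perm + (start,)
                cost = sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))
                score = sum(scores[c] for c in perm)
                if cost <= budget and score > best[0]:
                    best = (score, route)
        return best

    # hypothetical attraction scores and symmetric travel times
    scores = {0: 0, 1: 5, 2: 8, 3: 4}
    dist = [[0, 2, 4, 3],
            [2, 0, 3, 2],
            [4, 3, 0, 2],
            [3, 2, 2, 0]]
    best = best_route(scores, dist, budget=9)
    print(best)  # (13, (0, 1, 2, 0)): city 3 is skipped to respect the budget
    ```

    The team orienteering extension runs several such tours in parallel under a shared budget, which is why it tends to cover more cities.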

  1. Transport Routes Optimization Model Through Application of Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Ivan Bortas

    2018-03-01

    Full Text Available The transport policy of the European Union is based on the mission of shifting road traffic to other, more energy-favourable transport modes that have not yet been sufficiently represented. Therefore, the development of inland waterway and rail transport, and connectivity in the intermodal transport network, are development planning priorities of the European transport strategy. The aim of this research study was to apply scientific methodology to analyse the factors that affect the distribution of goods flows and, by using fuzzy logic, to build an optimization model for selecting the optimal transport route according to the criteria of minimizing costs and negative environmental impact. Testing of the model by simulation was performed by evaluating the criteria of the influential parameters with imprecise and indefinite input values. The testing results show that the shift of a goods flow from the road transport network to inland waterways or rail transport can be predicted in advance, and the transport route with optimal characteristics determined. The results of the performed research study will be used to improve the process of planning the transport service, with the aim of reducing transport costs and environmental pollution.
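    A minimal fuzzy-logic sketch of the route-selection step: triangular membership functions for "low cost" and "low emission", combined with a min (AND) operator, rank candidate routes. The membership scales and route data are hypothetical, and the paper's model uses richer criteria.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b, zero outside (a, c)."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def suitability(cost, emission):
        """Mamdani-style rule: IF cost is low AND emission is low THEN route is suitable."""
        low_cost = tri(cost, -1, 0, 60)       # 'low cost' on a 0-100 scale (assumed)
        low_emission = tri(emission, -1, 0, 80)
        return min(low_cost, low_emission)    # fuzzy AND

    # hypothetical (cost, emission) indices per mode
    routes = {'road': (30, 70), 'rail': (40, 25), 'waterway': (50, 15)}
    ranked = sorted(routes, key=lambda r: suitability(*routes[r]), reverse=True)
    print(ranked)
    ```

    With these numbers, road's high emissions drag its suitability down even though it is the cheapest mode, illustrating how the fuzzy AND trades the criteria off.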

  2. Topologically determined optimal stochastic resonance responses of spatially embedded networks

    International Nuclear Information System (INIS)

    Gosak, Marko; Marhl, Marko; Korosak, Dean

    2011-01-01

    We have analyzed the stochastic resonance phenomenon on spatial networks of bistable and excitable oscillators, which are connected according to their location and the amplitude of external forcing. By smoothly altering the network topology from a scale-free (SF) network with dominating long-range connections to a network where principally only adjacent oscillators are connected, we reveal that besides an optimal noise intensity, there is also a most favorable interaction topology at which the best correlation between the response of the network and the imposed weak external forcing is achieved. For various distributions of the amplitudes of external forcing, the optimal topology is always found in the intermediate regime between the highly heterogeneous SF network and the strong geometric regime. Our findings thus indicate that a suitable number of hubs and with that an optimal ratio between short- and long-range connections is necessary in order to obtain the best global response of a spatial network. Furthermore, we link the existence of the optimal interaction topology to a critical point indicating the transition from a long-range interactions-dominated network to a more lattice-like network structure.

  3. Use of Simplex Method in Determination of Optimal Rational ...

    African Journals Online (AJOL)

    The optimal rational composition was found to be: Nsu Clay = 47.8%, quartz = 33.7% and CaCO3 = 18.5%. The other clay from Ukpor was found unsuitable at the firing temperature (1000°C) used. It showed bending strength lower than the standard requirement for all compositions studied. To improve the strength an ...

  4. Visual prosthesis wireless energy transfer system optimal modeling.

    Science.gov (United States)

    Li, Xueping; Yang, Yuan; Gao, Yong

    2014-01-16

    A wireless energy transfer system is an effective way to solve the energy supply problem of a visual prosthesis, and theoretical modeling of the system is a prerequisite for optimal design of the energy link. Starting from the ideal model of a wireless energy transfer system, the model is optimized for the visual prosthesis application. Planar spiral coils are taken as the coupling devices between the energy transmitter and receiver, the parasitic capacitance of the transfer coils is considered, and in particular the concept of biological capacitance is introduced to account for the influence of biological tissue on the energy transfer efficiency, making the optimized model more accurate for the actual application. Simulation data of the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions the parasitic capacitance of the inductors and the biological capacitance considered in the optimized model can have a great impact on the wireless energy transfer system. A further comparison with experimental data verifies the validity and accuracy of the proposed model. The optimized model offers theoretical guidance for further research on wireless energy transfer systems and provides a more precise model reference for solving the power supply problem in clinical applications of visual prostheses.
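
    The coil coupling at the heart of such a link is commonly summarized by the standard maximum-efficiency formula for a resonant inductive link. This textbook expression ignores the parasitic and biological capacitances that the paper's optimized model adds; it is included only for orientation:

    ```python
    import math

    def max_link_efficiency(k, q1, q2):
        """Maximum power-transfer efficiency of a resonant inductive link.
        Standard textbook expression in terms of coupling k and coil quality
        factors Q1, Q2; the paper's capacitance effects are not modeled."""
        x = k * k * q1 * q2                      # figure of merit of the link
        return x / (1.0 + math.sqrt(1.0 + x)) ** 2

    # Efficiency grows with coupling and with coil quality factors.
    for k in (0.01, 0.05, 0.2):
        print(f"k={k}: eta={max_link_efficiency(k, 100, 100):.3f}")
    ```

    For implanted links the coupling k is typically small, which is why coil geometry and operating frequency, and hence the parasitic effects studied in the paper, matter so much.
    
    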

  5. Optimized numerical annular flow dryout model using the drift-flux model in tube geometry

    International Nuclear Information System (INIS)

    Chun, Ji Han; Lee, Un Chul

    2008-01-01

    Many experimental analyses of annular film dryout, one of the critical heat flux (CHF) mechanisms, have been performed because of its importance. Numerical approaches must also be developed in order to assess the results of experiments and to perform pre-tests before experiments. Various thermal-hydraulic codes, such as RELAP, COBRA-TF, and MARS, have been used to assess the results of dryout experiments and in experimental pre-tests. These codes are general tools intended for the analysis of the various phenomena that can appear in nuclear power plants, and many of their models are unnecessarily complex for a focused analysis of dryout alone. In this study, a numerical model of annular film dryout was developed using the drift-flux model in uniformly heated tube geometry. Several candidate models that strongly affect dryout, such as the entrainment model, the deposition model, and the criterion for the dryout point, were tested for inclusion in an optimized annular film dryout model. The optimized model was developed by adopting the best combination of these candidate models, as determined through comparison with experimental data. This optimized model showed reasonable results, better than those of the MARS code

  6. Polymer models with optimal good-solvent behavior

    Science.gov (United States)

    D'Adamo, Giuseppe; Pelissetto, Andrea

    2017-11-01

    We consider three different continuum polymer models, which all depend on a tunable parameter r that determines the strength of the excluded-volume interactions. In the first model, chains are obtained by concatenating hard spherocylinders of height b and diameter rb (we call them thick self-avoiding chains). The other two models are generalizations of the tangent hard-sphere and of the Kremer-Grest models. We show that for a specific value r*, all models show optimal behavior: asymptotic long-chain behavior is observed for relatively short chains. For r < r*, instead, the behavior can be parametrized by using the two-parameter model, which also describes the thermal crossover close to the θ point. The bonds of the thick self-avoiding chains cannot cross each other, and therefore the model is suited for the investigation of topological properties and for dynamical studies. Such a model also provides a coarse-grained description of double-stranded DNA, so that we can use our results to discuss under which conditions DNA can be considered as a model good-solvent polymer.

  7. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
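
    For intuition, the inner denoising problem can be sketched in one dimension. The gradient descent below minimizes a smoothed TV energy with a single quadratic data term; it is an illustration only, not the paper's bilevel weight-learning scheme, and the signal and parameters are invented:

    ```python
    import numpy as np

    # Minimal 1D total-variation denoising by gradient descent on the
    # smoothed energy 0.5*||u - f||^2 + lam * sum sqrt(diff(u)^2 + eps).
    # Single noise model only; the paper learns weights between several
    # noise models, which is not reproduced here.

    def tv_denoise(f, lam=0.5, eps=1e-3, step=0.02, iters=2000):
        u = f.copy()
        for _ in range(iters):
            du = np.diff(u)
            w = du / np.sqrt(du * du + eps)    # derivative of the smoothed |du|
            div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
            u -= step * ((u - f) - lam * div)  # explicit gradient step
        return u

    rng = np.random.default_rng(0)
    clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant signal
    noisy = clean + 0.2 * rng.standard_normal(100)
    den = tv_denoise(noisy)
    print(np.abs(noisy - clean).mean(), np.abs(den - clean).mean())
    ```

    The denoised error should come out well below the noisy error. In the paper's setting the regularization weight itself, one per candidate noise model, is what the outer optimization determines.
    
    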

  8. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  9. Optimal velocity difference model for a car-following theory

    International Nuclear Information System (INIS)

    Peng, G.H.; Cai, X.H.; Liu, C.Q.; Cao, B.F.; Tuo, M.X.

    2011-01-01

    In this Letter, we present a new optimal velocity difference model (OVDM) for a car-following theory based on the full velocity difference model (FVDM). The linear stability condition of the new model is obtained by using linear stability theory. The unrealistically high deceleration of the FVDM does not appear in the OVDM. Numerical simulation of traffic dynamics shows that, by adjusting the coefficient of the optimal velocity difference, the new model avoids the negative velocities that occur in the FVDM at small sensitivity coefficient λ, so that collisions disappear in the improved model. -- Highlights: → A new optimal velocity difference car-following model is proposed. → The effects of the optimal velocity difference on the stability of traffic flow are explored. → The starting and braking processes were simulated. → The optimal velocity difference term avoids the disadvantage of negative velocity.
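
    The baseline the Letter builds on can be sketched numerically. The following simulates a generic full velocity difference model with the standard Bando optimal-velocity function; the parameters are illustrative, and the Letter's added optimal-velocity-difference term is not reproduced:

    ```python
    import math

    # Generic full-velocity-difference car-following simulation with the
    # standard Bando optimal-velocity function. Parameters are illustrative.

    def optimal_velocity(h, vmax=2.0, hc=2.0):
        """Desired velocity as a function of headway h (Bando form)."""
        return vmax / 2.0 * (math.tanh(h - hc) + math.tanh(hc))

    def simulate(n_cars=10, kappa=1.0, lam=0.5, dt=0.05, steps=4000, spacing=2.5):
        x = [-i * spacing for i in range(n_cars)]   # car 0 leads
        v = [optimal_velocity(spacing)] * n_cars
        v[1] = 0.0                                  # perturb one follower
        for _ in range(steps):
            a = [0.0] * n_cars
            for i in range(1, n_cars):
                dx = x[i - 1] - x[i]                # headway to the car ahead
                dv = v[i - 1] - v[i]                # velocity difference
                a[i] = kappa * (optimal_velocity(dx) - v[i]) + lam * dv
            for i in range(n_cars):
                v[i] = max(0.0, v[i] + a[i] * dt)   # explicit clamp: no reversing
                x[i] += v[i] * dt
        return x, v

    x, v = simulate()
    gaps = [x[i] - x[i + 1] for i in range(len(x) - 1)]
    print(min(gaps), max(gaps))   # the platoon relaxes back toward the 2.5 spacing
    ```

    Making λ small is what produces the negative-velocity artifact the Letter addresses; the explicit clamp above masks it, whereas the OVDM removes it by construction.
    
    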

  10. DETERMINATION OF THE OPTIMAL CAPITAL INVESTMENTS TO ENSURE THE SUSTAINABLE DEVELOPMENT OF THE RAILWAY

    Directory of Open Access Journals (Sweden)

    O. I. Kharchenko

    2015-04-01

    Full Text Available Purpose. Every year more attention is paid to the theoretical and practical issues of the sustainable development of railway transport, but the mechanisms of financial support for this development remain poorly understood. Therefore, the aim of this article is to determine the optimal investment allocation to ensure the sustainable development of railway transport, on the example of the State Enterprise «Prydniprovsk Railway», and to create the preconditions for developing a mathematical model. Methodology. The task of ensuring the sustainable development of railway transport is solved on the basis of an integral indicator of sustainable development effectiveness and is defined as the maximization of this criterion. Measures of a technological and technical character are optimized so as to increase the values of the components of the integral performance measure. The technological measures that enhance the performance criterion include: optimization of the number of train and shunting locomotives, optimization of the power of handling mechanisms at the stations, and optimization of the routes of train flows. The technical measures include: modernization of railways in the direction of their electrification, modernization of the running gear and coupler drawbars of rolling stock, and mechanization of separator devices at stations to reduce noise impacts on the environment. Findings. The work resulted in an optimal allocation of investments to ensure sustainable railway transportation by the State Enterprise «Prydniprovsk Railway». This provides a mode of railway development in which the functioning of the State Enterprise «Prydniprovsk Railway» is characterized by the maximum value of the integral indicator of efficiency. Originality. A new approach was proposed to determine the optimal allocation of capital investments to ensure sustainable

  11. Optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system

    International Nuclear Information System (INIS)

    Shen, L.; Levine, S.H.; Catchen, G.L.

    1987-01-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, along with its testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, it determines the effective beta energy distribution incident on the dosimeter system, so that the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for the optimization calculation, in which parameters are determined that reproduce the calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. The method can be used to determine the beta dose specifically at 7 mg·cm⁻² or at any depth of interest. The doses at 7 mg·cm⁻² in tissue determined by this method are compared with those measured using an extrapolation chamber; for a wide variety of pure beta sources with different incident beta energy distributions, good agreement is found. The results are also compared with those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, its advantage is that its performance is not sensitive to the specific method of calibration.

  12. Optimal consumption problem in the Vasicek model

    Directory of Open Access Journals (Sweden)

    Jakub Trybuła

    2015-01-01

    Full Text Available We consider the problem of an optimal consumption strategy on the infinite time horizon based on hyperbolic absolute risk aversion (HARA) utility when the interest rate is an Ornstein-Uhlenbeck process. Using the method of subsolutions and supersolutions we obtain the existence of solutions of the dynamic programming equation. We illustrate the paper with a numerical example of the optimal consumption strategy and the value function.
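
    For reference, the two standard ingredients named above can be written out. These are the textbook parametrizations; the paper's own notation may differ:

    ```latex
    % Vasicek short rate: an Ornstein--Uhlenbeck process with
    % mean-reversion speed a > 0, long-run level b, volatility \sigma.
    \mathrm{d}r_t = a\,(b - r_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t .

    % Hyperbolic absolute risk aversion (HARA) utility of consumption,
    % with risk-aversion parameter \gamma and shape parameters \beta, \eta:
    U(c) = \frac{1-\gamma}{\gamma}
           \left(\frac{\beta c}{1-\gamma} + \eta\right)^{\!\gamma} .
    ```

    Power, logarithmic, and exponential utilities all arise as special or limiting cases of the HARA form, which is why it is a convenient setting for such consumption problems.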

  13. Parametrical Method for Determining Optimal Ship Carrying Capacity and Performance of Handling Equipment

    Directory of Open Access Journals (Sweden)

    Michalski Jan P.

    2016-04-01

    Full Text Available The paper presents a method of evaluating the optimal value of a cargo ship's deadweight and the coupled optimal value of its cargo handling capacity. The method may be useful at the stage of establishing the main owner's requirements concerning the ship design parameters, as well as for choosing a proper second-hand ship for a given transportation task. The deadweight and the capacity are determined on the basis of a selected economic measure of the transport effectiveness of the ship, the Required Freight Rate. The mathematical model of the problem is deterministic, and the simplifying assumptions are justified for ships operating in the liner trade. The assumptions are selected so that the solution of the problem is obtained in closed analytical form. The presented method can be useful in preliminary ship design or in the simulation of pre-investment transportation task studies.

  14. Determination of Pareto frontier in multi-objective maintenance optimization

    International Nuclear Information System (INIS)

    Certa, Antonella; Galante, Giacomo; Lupo, Toni; Passannanti, Gianfranco

    2011-01-01

    The objective of a maintenance policy generally is the minimization of the global maintenance cost, which involves not only the direct costs of the maintenance actions and the spare parts, but also those due to system stops for preventive maintenance and downtime after failure. For some operating systems, the failure event can be dangerous, so they are required to operate with a very high reliability level between two consecutive scheduled stops. The present paper attempts to identify the set of elements on which to perform maintenance actions so that the system can assure the required reliability level until the next scheduled maintenance stop, while minimizing both the global maintenance cost and the total maintenance time. To solve this constrained multi-objective optimization problem, an effective approach is proposed to obtain the best solutions (that is, the Pareto-optimal frontier) among which the decision maker will choose the most suitable one. As is well known, describing the whole Pareto-optimal frontier is generally a difficult task. The paper proposes an algorithm able to overcome this problem rapidly, and its effectiveness is shown by application to a case study of a complex series-parallel system.
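
    Once candidate maintenance plans are scored on the two objectives, extracting the Pareto-optimal frontier is a plain dominance filter. The candidate values below are invented; the paper's contribution lies in generating and evaluating the candidates efficiently:

    ```python
    # Extract the Pareto-optimal frontier from candidate maintenance plans,
    # each scored by (cost, time), both to be minimized. The plan scores
    # here are made up for illustration.

    def pareto_frontier(points):
        """Return the non-dominated subset of distinct (cost, time) pairs."""
        frontier = []
        for p in points:
            # q dominates p if q is no worse in both objectives and q != p
            dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
            if not dominated:
                frontier.append(p)
        return frontier

    plans = [(10, 9), (8, 12), (12, 7), (9, 10), (11, 11)]
    print(sorted(pareto_frontier(plans)))  # -> [(8, 12), (9, 10), (10, 9), (12, 7)]
    ```

    The quadratic scan is fine for small candidate sets; for large ones a sort-and-sweep over one objective does the same job in near-linear time.
    
    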

  15. Optimization model for the design of distributed wastewater treatment networks

    Directory of Open Access Journals (Sweden)

    Ibrić Nidret

    2012-01-01

    Full Text Available In this paper we address the synthesis problem of distributed wastewater networks using a mathematical programming approach based on superstructure optimization. We present a generalized superstructure and optimization model for the design of distributed wastewater treatment networks. The superstructure includes splitters, treatment units, and mixers, with all feasible interconnections including water recirculation. The optimization model is given as a nonlinear programming (NLP) problem whose objective function can be defined to minimize the total amount of wastewater treated in treatment operations or to minimize the total treatment costs. The NLP model is extended to a mixed-integer nonlinear programming (MINLP) problem in which binary variables are used for the selection of the wastewater treatment technologies. The bounds for all flowrates and concentrations in the wastewater network are specified as general equations. The proposed models are solved using the global optimization solvers BARON and LINDOGlobal. Their application is illustrated on two wastewater network problems of different complexity: the first is formulated as an NLP and the second as an MINLP. For the second, parametric and structural optimization is performed at the same time, selecting optimal flowrates and concentrations as well as optimal technologies for the wastewater treatment. Using the proposed model, both problems are solved to global optimality.

  16. A model for optimizing the production of pharmaceutical products

    Directory of Open Access Journals (Sweden)

    Nevena Gospodinova

    2017-05-01

    Full Text Available The problem of optimal production planning is especially relevant in modern industrial enterprises. The most commonly used optimality criteria in this context are: maximizing the total profit; minimizing the cost per unit of production; maximizing capacity utilization; minimizing the total production costs. This article explores the possibility of optimizing the production of pharmaceutical products through the construction of a mathematical model that can be viewed in two ways, as a single-product model and as a multi-product model. The optimality criterion is minimization of the cost per unit of production over a given planning period. The author proposes an analytical method for solving the nonlinear optimization problem. An optimal production plan for tylosin tartrate is found using the single-product model.
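
    The single-product idea can be illustrated with a toy cost-per-unit curve: a fixed setup cost amortized over the batch plus a component that grows with batch size. The cost structure and coefficients are hypothetical, not the paper's pharmaceutical data:

    ```python
    import math

    # Toy single-product model: unit cost = setup amortization + variable
    # cost + a batch-size-dependent (e.g. holding) component. All
    # coefficients are invented for illustration.

    def unit_cost(q, setup=5000.0, variable=12.0, holding=0.05):
        return setup / q + variable + holding * q

    # The nonlinear part F/q + h*q is minimized analytically at sqrt(F/h).
    q_star = math.sqrt(5000.0 / 0.05)
    best_numeric = min(range(1, 2001), key=unit_cost)
    print(q_star, best_numeric)   # both land near a batch size of 316
    ```

    The analytic minimizer and a brute-force scan over integer batch sizes agree, which is the kind of closed-form structure an analytical solution method exploits.
    
    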

  17. Hierarchical models and iterative optimization of hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Rasina, Irina V. [Ailamazyan Program Systems Institute, Russian Academy of Sciences, Peter One str. 4a, Pereslavl-Zalessky, 152021 (Russian Federation); Baturina, Olga V. [Trapeznikov Control Sciences Institute, Russian Academy of Sciences, Profsoyuznaya str. 65, 117997, Moscow (Russian Federation); Nasatueva, Soelma N. [Buryat State University, Smolina str.24a, Ulan-Ude, 670000 (Russian Federation)

    2016-06-08

    A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.

  18. A MATHEMATICAL MODEL OF OPTIMIZATION OF THE VOLUME OF MATERIAL FLOWS IN GRAIN PROCESSING INTEGRATED PRODUCTION SYSTEMS

    OpenAIRE

    Baranovskaya T. P.; Loyko V. I.; Makarevich O. A.; Bogoslavskiy S. N.

    2014-01-01

    The article suggests a mathematical model for optimizing the volume of material flows: a model for ideal conditions; a model for working conditions; and a generalized model for determining the optimal input parameters. These models optimize such parameters of inventory management in technologically integrated grain-production systems as the number of supply cycles and the volumes of the source material and financial flows. The study was carried out on the example of the integrated system of ...

  19. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation-optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainty.

  20. Empty tracks optimization based on Z-Map model

    Science.gov (United States)

    Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao

    2017-12-01

    For parts with many features, there are many empty (non-cutting) tool strokes during machining. If these strokes are not optimized, machining efficiency is seriously affected. In this paper, the characteristics of the empty strokes are studied in detail and, building on existing optimization algorithms, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and Z-Map model simulation is used to derive the order constraints between the segments. The empty-stroke optimization problem is transformed into a TSP with sequential constraints, which is then solved by a genetic algorithm. This optimization method can handle both simple and complex structural parts, effectively planning the empty strokes and greatly improving machining efficiency.
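
    The flavor of the problem can be shown with a much-simplified sketch: ordering path segments to shorten the empty jumps between them. A greedy nearest-neighbor heuristic stands in for the paper's genetic algorithm, and the Z-Map-derived precedence constraints are omitted; the segment data are random:

    ```python
    import math
    import random

    # Each machining segment is a (start, end) point pair; the empty stroke
    # is the jump from one segment's end to the next segment's start.

    def empty_travel(order, segs):
        """Total empty-stroke length for a given visiting order."""
        return sum(math.dist(segs[a][1], segs[b][0])
                   for a, b in zip(order, order[1:]))

    def nearest_neighbor(segs):
        """Greedy ordering: always jump to the closest unvisited segment start."""
        todo = set(range(1, len(segs)))
        order = [0]
        while todo:
            last_end = segs[order[-1]][1]
            nxt = min(todo, key=lambda i: math.dist(last_end, segs[i][0]))
            todo.remove(nxt)
            order.append(nxt)
        return order

    random.seed(1)
    segs = [((random.random(), random.random()), (random.random(), random.random()))
            for _ in range(30)]
    naive = list(range(30))
    greedy = nearest_neighbor(segs)
    print(round(empty_travel(naive, segs), 2), round(empty_travel(greedy, segs), 2))
    ```

    A GA, as in the paper, improves on such greedy tours and can respect the sequential constraints that greedy ordering ignores.
    
    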

  1. Optimal hedging with the cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Gatarek, Lukasz; Johansen, Søren

    We derive the optimal hedging ratios for a portfolio of assets driven by a Cointegrated Vector Autoregressive model (CVAR) with general cointegration rank. Our hedge is optimal in the sense of the minimum variance portfolio. We consider a model that allows for the hedges to be cointegrated with the...

  2. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems under uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  3. Systemic Model for Optimal Regulation in Public Service

    Directory of Open Access Journals (Sweden)

    Lucica Matei

    2006-05-01

    Full Text Available The current paper belongs to those approaching the issue of public services from an interdisciplinary perspective. Public service development, the imposition of standards of efficiency and effectiveness, and the pursuit of citizens' satisfaction bring to the front line systemic modelling and the establishment of optimal policies for the organisation and functioning of public services. The issue under discussion involves strong social determinants; consequently, the most adequate modelling is probabilistic and statistical in nature. The fundamental idea of this paper, which can obviously be developed further, is to model the organisation and functioning of a public service as a waiting line, with associated hypotheses concerning the order of service and performance measured through costs or waiting time in the system. We emphasise the openness and dynamics of the public service system and a modelling approach that draws on statistical knowledge and research, without detailed remarks on the cybernetic characteristics of this system. Optimal regulation is achieved through analysis of the feedback and its comparison with current standards or good practices.
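
    The waiting-line view can be made concrete with the standard M/M/1 formulas. These are textbook results, simpler than the article's general probabilistic model, and the arrival and service rates are invented:

    ```python
    # A public service counter modeled as an M/M/1 queue: Poisson arrivals
    # at rate lam, exponential service at rate mu. Standard textbook
    # formulas; the rates below are illustrative.

    def mm1(lam, mu):
        assert lam < mu, "queue must be stable"
        rho = lam / mu                  # server utilization
        l_q = rho * rho / (1 - rho)     # mean number waiting in queue
        w_q = l_q / lam                 # mean wait in queue (Little's law)
        return rho, l_q, w_q

    rho, l_q, w_q = mm1(lam=8.0, mu=10.0)   # 8 citizens/hour, 10 served/hour
    print(rho, l_q, w_q)                     # utilization 0.8, wait about 24 min
    ```

    Even this minimal model exposes the policy lever the article discusses: waiting time explodes as utilization approaches one, so service standards effectively cap the acceptable load.
    
    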

  4. Optimized combination model and algorithm of parking guidance information configuration

    Directory of Open Access Journals (Sweden)

    Tian Ye

    2011-01-01

    Full Text Available Abstract Operators of parking guidance and information (PGI) systems often have difficulty providing the best car-park availability information to drivers in periods of high demand. A new PGI configuration model based on an optimized combination method is proposed through analysis of parking choice behavior. This article first describes a parking choice behavioral model incorporating drivers' perceptions of waiting times at car parks based on PGI signs; this model is used to predict the influence of PGI signs on the overall performance of the traffic system. Relationships are then developed for estimating the arrival rates at car parks based on driver characteristics, car-park attributes, and the availability information displayed on PGI signs. A mathematical program is formulated to determine the optimal PGI sign configuration that minimizes total travel time, and a genetic algorithm is used to identify solutions that significantly reduce queue lengths and total travel time compared with existing practices. These procedures were applied to an existing PGI system operating in Deqing Town and Xiuning City, where significant reductions in the total travel time of parking vehicles were obtained with the PGI system configured accordingly. This reduces traffic congestion and brings various environmental benefits.

  5. Optimizing Cardiovascular Benefits of Exercise: A Review of Rodent Models

    Science.gov (United States)

    Davis, Brittany; Moriguchi, Takeshi; Sumpio, Bauer

    2013-01-01

    Although research unanimously maintains that exercise can ward off cardiovascular disease (CVD), the optimal type, duration, intensity, and combination of forms are not yet clear. In our review of existing rodent-based studies on exercise and cardiovascular health, we attempt to find the optimal forms, intensities, and durations of exercise. Using Scopus and Medline, a literature review of English-language comparative journal studies of cardiovascular benefits and exercise was performed. This review examines the existing literature on rodent models of aerobic, anaerobic, and power exercise and compares the benefits of various training forms, intensities, and durations. The rodent studies reviewed in this article correlate with reports on human subjects suggesting that regular aerobic exercise can improve cardiac and vascular structure and function, as well as lipid profiles, and reduce the risk of CVD. The findings demonstrate an abundance of rodent-based aerobic studies but a lack of anaerobic and power forms of exercise, as well as of comparisons of these three components of exercise. Thus, further studies must be conducted to determine a truly optimal regimen for cardiovascular health. PMID:24436579

  6. Modeling and optimization of an electric power distribution network ...

    African Journals Online (AJOL)

    Modeling and optimization of an electric power distribution network planning system using ... of the network was modelled with non-linear mathematical expressions. ... given feasible locations, re-conductoring of existing feeders in the network, ...

  7. Integrated modeling of ozonation for optimization of drinking water treatment

    NARCIS (Netherlands)

    van der Helm, A.W.C.

    2007-01-01

    Automation of drinking water treatment plants is becoming more sophisticated, more on-line monitoring systems are becoming available, and integration of modeling environments with control systems is becoming easier. This creates possibilities for model-based optimization. In the operation of drinking water treatment

  8. Optimization method to determine mass transfer variables in a PWR crud deposition risk assessment tool

    International Nuclear Information System (INIS)

    Do, Chuong; Hussey, Dennis; Wells, Daniel M.; Epperson, Kenny

    2016-01-01

    A numerical optimization method was implemented to determine several mass transfer coefficients in a crud-induced power shift risk assessment code. The approach was to utilize a multilevel strategy that targets different model parameters: first the major-order variables, the mass transfer inputs, are changed, and then the minor-order variables, the crud source terms, are calibrated against available plant data. In this manner, the mass transfer inputs are effectively treated as dependent on the crud source terms. Two optimization studies were performed using DAKOTA, a design and analysis toolkit; the difference between the runs was the number of BOA model runs allowed for adjusting the crud source terms, thereby reducing the uncertainty in the calibration. The first case showed that the current best-estimate values for the mass transfer coefficients, which were derived from first-principles analysis, can be considered an optimized set. When the BOA run limit was increased for the second case, an improvement in the prediction was obtained, with the results deviating slightly from the best-estimate values. (author)

  9. Optimization Model for Headway of a Suburban Bus Route

    Directory of Open Access Journals (Sweden)

    Xiaohong Jiang

    2014-01-01

    Full Text Available Due to relatively low passenger demand, headways of suburban bus routes are usually longer than those of urban routes. It is also difficult to balance the benefits of passengers and operators subject to the service standards set by the government, so the headway of a suburban bus route is usually determined from the empirical experience of transport planners. To cope with this problem, this paper proposes an optimization model for designing the headways of suburban bus routes by minimizing the sum of operating and user costs. The user costs account for both waiting time and crowding. The feasibility and validity of the proposed model are shown by applying it to Route 206 in Jiangning District, Nanjing, China. The weights of passenger cost and operating cost are further discussed for different passenger flows. It is found that the headway and the objective function are strongly affected by these weights.
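
    The trade-off the model formalizes is visible already in the classic square-root headway formula, which balances operating cost against passenger waiting cost. Crowding cost, which the paper also includes, is omitted here, and the numbers are illustrative:

    ```python
    import math

    # Cost per hour of running a route at headway h (in hours):
    # operating cost falls as 1/h, waiting cost rises linearly with h.
    # All coefficients are invented for illustration.

    def total_cost(h, trip_cost=120.0, pax_rate=60.0, wait_value=10.0):
        operating = trip_cost / h                  # trips/hour * cost per trip
        waiting = pax_rate * wait_value * h / 2.0  # mean wait h/2 for random arrivals
        return operating + waiting

    # Analytic minimizer of c/h + k*h is sqrt(c/k), the square-root rule.
    h_star = math.sqrt(2 * 120.0 / (60.0 * 10.0))
    print(h_star * 60)   # optimal headway in minutes, about 38
    ```

    Adding a crowding term, as the paper does, pushes the optimum toward shorter headways at high passenger flows, which is exactly where the weight discussion matters.
    
    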

  10. Shape optimization in biomimetics by homogenization modelling

    International Nuclear Information System (INIS)

    Hoppe, Ronald H.W.; Petrova, Svetozara I.

    2003-08-01

    Optimal shape design of microstructured materials has recently attracted a great deal of attention in material science. The shape and the topology of the microstructure have a significant impact on the macroscopic properties. The present work is devoted to the shape optimization of new biomorphic microcellular ceramics produced from natural wood by biotemplating. We are interested in finding the best material-and-shape combination in order to achieve the optimal prespecified performance of the composite material. The computation of the effective material properties is carried out using the homogenization method. Adaptive mesh-refinement technique based on the computation of recovered stresses is applied in the microstructure to find the homogenized elasticity coefficients. Numerical results show the reliability of the implemented a posteriori error estimator. (author)

  11. A tutorial on fundamental model structures for railway timetable optimization

    DEFF Research Database (Denmark)

    Harrod, Steven

    2012-01-01

    This guide explains the role of railway timetables relative to all other railway scheduling activities, and then presents four fundamental timetable formulations suitable for optimization. Timetabling models may be classified according to whether they explicitly model the track structure...

  12. Determinants of Optimal Adherence to Antiretroviral Therapy among ...

    African Journals Online (AJOL)

    SITWALA COMPUTERS

    medication side effects and adolescence were associated with non-adherence (p ... especially the social determinants of health surrounding ... irrespective of their CD4 cell count ... reported were cell phone alarm, radio news hour time, or a ...

  13. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
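
    The record's two conclusions — an FIR structure as the optimal model and an impulse as the optimal input — can be illustrated with a minimal, hypothetical channel: the noiseless response to an impulse is exactly the tap sequence, and averaging repeated noisy impulse responses estimates it. The taps and noise level below are invented for illustration.

    ```python
    import random

    def fir_output(taps, inputs):
        """Convolve an input sequence with FIR channel taps (zero initial state)."""
        out = []
        for n in range(len(inputs)):
            acc = 0.0
            for k, h in enumerate(taps):
                if n - k >= 0:
                    acc += h * inputs[n - k]
            out.append(acc)
        return out

    true_taps = [1.0, 0.5, -0.25]
    impulse = [1.0, 0.0, 0.0, 0.0, 0.0]

    # Noiseless case: the response to an impulse *is* the tap sequence.
    response = fir_output(true_taps, impulse)

    # Noisy case: average the responses to repeated impulse probes.
    random.seed(0)
    trials = 2000
    est = [0.0] * len(true_taps)
    for _ in range(trials):
        noisy = [y + random.gauss(0.0, 0.1) for y in fir_output(true_taps, impulse)]
        for k in range(len(true_taps)):
            est[k] += noisy[k] / trials
    ```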

  14. Modeling groundwater vulnerability to pollution using Optimized DRASTIC model

    International Nuclear Information System (INIS)

    Mogaji, Kehinde Anthony; Lim, Hwee San; Abdullar, Khiruddin

    2014-01-01

    The prediction accuracy of the conventional DRASTIC model (CDM) algorithm for groundwater vulnerability assessment is severely limited by the inherent subjectivity and uncertainty in the integration of data obtained from various sources. This study attempts to overcome these problems by exploring the potential of the analytic hierarchy process (AHP) technique as a decision support model to optimize the CDM algorithm. The AHP technique was utilized to compute the normalized weights for the seven parameters of the CDM to generate an optimized DRASTIC model (ODM) algorithm. The DRASTIC parameters integrated with the ODM algorithm predicted which among the study areas are more likely to become contaminated as a result of activities at or near the land surface. Five vulnerability zones, namely: not vulnerable (NV), very low vulnerability (VLV), low vulnerability (LV), moderate vulnerability (MV) and high vulnerability (HV), were identified based on the vulnerability index values estimated with the ODM algorithm. Spatial analysis of the produced ODM-based groundwater vulnerability prediction map (GVPM) shows that more than 50% of the area belongs to the moderate and high vulnerability zones. Validation of the ODM-based GVPM against groundwater pH and manganese (Mn) concentrations established correlation factors (CRs) of 90% and 86%, compared to CRs of 62% and 50% for the CDM-based GVPM. These comparative results indicate that the ODM-based GVPM is more reliable than the CDM-based GVPM in the study area. The study established the efficacy of AHP as a spatial decision support technique for enhancing environmental decision making, with particular reference to future groundwater vulnerability assessment.
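
    A common way to compute normalized AHP weights such as those mentioned above is the row geometric-mean approximation of the principal eigenvector of a pairwise comparison matrix. The 3×3 matrix below is hypothetical (the actual CDM has seven parameters, and the paper's pairwise judgments are not given in this record).

    ```python
    import math

    def ahp_weights(pairwise):
        """Approximate AHP priority weights by the row geometric-mean method."""
        n = len(pairwise)
        gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
        total = sum(gmeans)
        return [g / total for g in gmeans]

    # Hypothetical reciprocal comparison matrix for three DRASTIC parameters
    # (e.g. depth to water vs. recharge vs. aquifer media); values are illustrative.
    pairwise = [
        [1.0, 3.0, 5.0],
        [1 / 3, 1.0, 2.0],
        [1 / 5, 1 / 2, 1.0],
    ]
    weights = ahp_weights(pairwise)
    ```

    The weights sum to one and preserve the ranking implied by the judgments, which is what the ODM needs before the weighted-index overlay.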

  15. Optimization and emergence in marine ecosystem models

    DEFF Research Database (Denmark)

    Mariani, Patrizio; Visser, Andre

    2010-01-01

    Ingestion rates and mortality rates of zooplankton are dynamic parameters reflecting a behavioural trade-off between encounters with food and predators. An evolutionarily consistent behaviour is that which optimizes the trade-off in terms of the fitness conferred to an individual. We argue that i...

  16. CREATION OF OPTIMIZATION MODEL OF STEAM BOILER RECUPERATIVE AIR HEATER

    Directory of Open Access Journals (Sweden)

    N. B. Carnickiy

    2006-01-01

    Full Text Available The paper proposes the use of mathematical modeling as a way to improve the quality of recuperative air heater (RAH) design without significant additional costs connected with changing design materials or fuel type. The described conceptual mathematical optimization model of the RAH consists of optimized and constant parameters, technical limitations and optimality criteria. The paper considers a methodology for the search of design and regime parameters of an air heater based on methods of multi-criteria optimization. Conclusions on the expediency of this approach are drawn.

  17. Optimization Research of Generation Investment Based on Linear Programming Model

    Science.gov (United States)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language which combines large, complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP) and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimized generation investment decisions are simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
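
    A toy version of such a generation-investment model can be sketched as follows. Instead of an LP solver (the paper uses GAMS), the tiny feasible set is enumerated directly; the demand, capacity limits and cost figures are all invented for illustration.

    ```python
    # Choose installed capacities (MW) of two plant types to meet peak demand
    # at minimum total cost. A real model would hand this to an LP solver;
    # here the small feasible region is simply enumerated.
    DEMAND = 500                              # MW peak demand to cover
    CAP_COST = {"coal": 1.0, "wind": 0.7}     # investment cost per MW (relative units)
    OP_COST = {"coal": 0.5, "wind": 0.1}      # operating cost per MW (relative units)
    WIND_LIMIT = 300                          # MW, resource limit on wind

    best_plan, best_cost = None, float("inf")
    for wind in range(0, WIND_LIMIT + 1, 10):
        coal = max(0, DEMAND - wind)          # remaining demand met by coal
        cost = (CAP_COST["coal"] + OP_COST["coal"]) * coal \
             + (CAP_COST["wind"] + OP_COST["wind"]) * wind
        if cost < best_cost:
            best_plan, best_cost = {"coal": coal, "wind": wind}, cost
    ```

    With these numbers the cheaper wind capacity is built up to its resource limit and coal covers the remainder, which is exactly the corner solution an LP would return.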

  18. Optimizing the radioimmunologic determination methods for cortisol and calcitonin

    International Nuclear Information System (INIS)

    Stalla, G.

    1981-01-01

    In order to build up a specific 125-iodine cortisol radioimmunoassay (RIA), pure cortisol-3-(O-carboxymethyl)oxime was synthesized for the production of antigens and tracers. The cortisol was coupled with tyrosine methyl ester and then labelled with 125-iodine. For the antigen production the cortisol derivative was coupled by the same method to thyreoglobulin. The major part of the antisera obtained in this way presented high titres. Apart from a high specificity for cortisol, a high affinity was found in the acid pH range and quantified with a specially developed computer program. An extraction step in the cortisol RIA could thus be avoided. The assay was carried out with an optimized double-antibody principle: the reaction time between the first and the second antiserum was considerably accelerated by the addition of polyethylene glycol. The assay can be carried out automatically by applying a modular analysis system, which operates fast and provides a large capacity. The required quality and accuracy controls were done. The comparison of this assay with other cortisol RIAs showed good correlation. The RIA for human calcitonin was improved. For separating bound and free hormone the optimized double-antibody technique was applied. The antiserum was examined with respect to its affinity to calcitonin. For the 'zero serum' production the Florisil extraction method was used. The criteria of the quality and accuracy controls were complied with. Significantly increased calcitonin concentrations were found in a group of patients with medullary thyroid carcinoma and in two patients with an additional phaeochromocytoma. (orig./MG) [de

  19. Correlations in state space can cause sub-optimal adaptation of optimal feedback control models

    OpenAIRE

    Aprasoff, Jonathan; Donchin, Opher

    2011-01-01

    Control of our movements is apparently facilitated by an adaptive internal model in the cerebellum. It was long thought that this internal model implemented an adaptive inverse model and generated motor commands, but recently many reject that idea in favor of a forward model hypothesis. In theory, the forward model predicts upcoming state during reaching movements so the motor cortex can generate appropriate motor commands. Recent computational models of this process rely on the optimal feedb...

  20. Optimizing model. 1. Insemination, replacement, seasonal production and cash flow.

    NARCIS (Netherlands)

    Delorenzo, M.A.; Spreen, T.H.; Bryan, G.R.; Beede, D.K.; Arendonk, van J.A.M.

    1992-01-01

    Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States

  1. Optimization and validation of Folin-Ciocalteu method for the determination of total polyphenol content of Pu-erh tea.

    Science.gov (United States)

    Musci, Marilena; Yao, Shicong

    2017-12-01

    Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea and aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time and allowed a large number of samples to be analyzed, the different teas to be discriminated, and the effect of the post-fermentation process on polyphenol content to be assessed.

  2. Application of genetic algorithm in radio ecological models parameter determination

    Energy Technology Data Exchange (ETDEWEB)

    Pantelic, G. [Institute of Occupational Health and Radiological Protection 'Dr Dragomir Karajovic', Belgrade (Serbia)]

    2006-07-01

    The method of genetic algorithms was used to determine the biological half-life of 137Cs in cow milk after the accident in Chernobyl. Methodologically, genetic algorithms are based on the fact that natural processes tend to optimize themselves and therefore this method should be more efficient in providing optimal solutions in the modeling of radioecological and environmental events. The calculated biological half-life of 137Cs in milk is (32 ± 3) days and the transfer coefficient from grass to milk is (0.019 ± 0.005). (authors)

  3. Application of genetic algorithm in radio ecological models parameter determination

    International Nuclear Information System (INIS)

    Pantelic, G.

    2006-01-01

    The method of genetic algorithms was used to determine the biological half-life of 137Cs in cow milk after the accident in Chernobyl. Methodologically, genetic algorithms are based on the fact that natural processes tend to optimize themselves and therefore this method should be more efficient in providing optimal solutions in the modeling of radioecological and environmental events. The calculated biological half-life of 137Cs in milk is (32 ± 3) days and the transfer coefficient from grass to milk is (0.019 ± 0.005). (authors)
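
    The fitting idea in this record — a genetic algorithm searching for the decay parameter that best matches measurements — can be sketched on synthetic data. The measurement values, noise model and GA settings below are illustrative, not the paper's; only the fitted quantity (a biological half-life near 32 days) echoes the record.

    ```python
    import math
    import random

    random.seed(1)

    TRUE_HALF_LIFE = 32.0   # days, used only to synthesize noisy "measurements"
    times = list(range(0, 120, 10))
    data = [math.exp(-math.log(2) * t / TRUE_HALF_LIFE) * random.uniform(0.95, 1.05)
            for t in times]

    def sse(half_life):
        """Sum of squared errors of the decay model against the measurements."""
        return sum((math.exp(-math.log(2) * t / half_life) - y) ** 2
                   for t, y in zip(times, data))

    # Minimal genetic algorithm: tournament selection plus Gaussian mutation.
    pop = [random.uniform(5.0, 100.0) for _ in range(40)]
    for _ in range(60):
        nxt = []
        for _ in range(len(pop)):
            a, b = random.sample(pop, 2)            # tournament of two
            parent = a if sse(a) < sse(b) else b
            child = max(1.0, parent + random.gauss(0.0, 2.0))
            nxt.append(child)
        pop = nxt
    best = min(pop, key=sse)
    ```

    With 5% multiplicative noise the recovered half-life lands close to, but not exactly at, the true value — the same situation that produces the (32 ± 3) day interval quoted in the record.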

  4. Model for economic optimization of solar power plants; Beraekningsmodell foer ekonomisk optimering av solelanlaeggningar

    Energy Technology Data Exchange (ETDEWEB)

    Widen, Joakim

    2011-01-15

    This is the final report from a project in which an early-design-phase tool for photovoltaic (PV) systems has been developed. The aim of the tool is to provide a quick and easy way to estimate the electricity production and the economy of a PV system. Although it is effective and easy to use, the model takes into account all the important factors that affect the design, performance and economy of a system, and also makes more in-depth analyses possible. The intended users of the tool are both electricity end-users thinking of investing in a small-scale system and large investors planning, in an early project phase, for large-scale PV systems. The developed tool is a simulation tool rather than an optimization tool. However, as the model is efficient and simple to use, it is easy to vary parameters and input data in different scenarios to arrive at an optimal solution. In order for the tool to realistically estimate the load matching of a PV system, which depends on seasonal and diurnal variations in both load and production profiles, the computations are made on an hourly basis. An hourly resolution is the most common one in meteorological data, and increasing the resolution further is neither practically possible nor required for accuracy. The hourly irradiation data used in the model were collected from the publicly available STRÅNG database, which is maintained by the Swedish Meteorological and Hydrological Institute (SMHI). Idealized hourly load profiles for typical Swedish end-user categories are also included in the tool. A general computational model was implemented in Matlab, which provided easy testing, visualization and validation of the model. The computations involved can be summarized in four main steps: (1) Radiation computations. This involves a transposition of radiation components to the tilted plane of the PV array. The model takes the orientation of the system into account and uses assumed albedo values of the surroundings to add ground...

  5. Determination of the optimal number of components in independent components analysis.

    Science.gov (United States)

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Portfolio optimization for index tracking modelling in Malaysia stock market

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Ismail, Hamizun

    2016-06-01

    Index tracking is an investment strategy in portfolio management which aims to construct an optimal portfolio that generates a mean return similar to that of a stock market index without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using an optimization model which adopts a regression approach to tracking the benchmark stock market index return. In this study, the data consist of weekly prices of the stocks in the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from January 2010 until December 2013. The results of this study show that the optimal portfolio is able to track the FBMKLCI Index with a minimum tracking error of 1.0027% and a 0.0290% excess mean return over the mean return of the FBMKLCI Index. The significance of this study is the construction of an optimal portfolio, using an optimization model with a regression approach, that tracks the stock market index without purchasing all index components.
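
    The regression idea behind index tracking can be sketched with two hypothetical stocks whose returns are blended into a synthetic "index"; ordinary least squares then recovers the blend weights. Real index-tracking models add constraints (full investment, no short selling) that this sketch omits, and all return series are simulated.

    ```python
    import random

    random.seed(7)

    # Synthetic weekly returns for two candidate stocks; the "index" return is a
    # fixed linear blend of them, so the regression should recover the weights.
    r_a = [random.gauss(0.002, 0.02) for _ in range(104)]
    r_b = [random.gauss(0.001, 0.03) for _ in range(104)]
    r_index = [0.6 * a + 0.4 * b for a, b in zip(r_a, r_b)]

    # Least squares for weights (w_a, w_b): solve the 2x2 normal equations.
    saa = sum(a * a for a in r_a)
    sbb = sum(b * b for b in r_b)
    sab = sum(a * b for a, b in zip(r_a, r_b))
    sai = sum(a * i for a, i in zip(r_a, r_index))
    sbi = sum(b * i for b, i in zip(r_b, r_index))

    det = saa * sbb - sab * sab
    w_a = (sai * sbb - sbi * sab) / det
    w_b = (saa * sbi - sab * sai) / det

    # Root-mean-square tracking error of the fitted portfolio.
    tracking_error = (sum((w_a * a + w_b * b - i) ** 2
                          for a, b, i in zip(r_a, r_b, r_index)) / len(r_index)) ** 0.5
    ```

    Because the synthetic index lies exactly in the span of the two stocks, the tracking error here is essentially zero; with real data it stays positive, which is what the paper's 1.0027% figure reports.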

  7. In vitro placental model optimization for nanoparticle transport studies

    Directory of Open Access Journals (Sweden)

    Cartwright L

    2012-01-01

    Full Text Available Laura Cartwright1, Marie Sønnegaard Poulsen2, Hanne Mørck Nielsen3, Giulio Pojana4, Lisbeth E Knudsen2, Margaret Saunders1, Erik Rytting2,5; 1Bristol Initiative for Research of Child Health (BIRCH), Biophysics Research Unit, St Michael's Hospital, UH Bristol NHS Foundation Trust, Bristol, UK; 2University of Copenhagen, Faculty of Health Sciences, Department of Public Health, 3University of Copenhagen, Faculty of Pharmaceutical Sciences, Department of Pharmaceutics and Analytical Chemistry, Copenhagen, Denmark; 4Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, Venice, Italy; 5Department of Obstetrics and Gynecology, University of Texas Medical Branch, Galveston, Texas, USA. Background: Advances in biomedical nanotechnology raise hopes in patient populations but may also raise questions regarding biodistribution and biocompatibility, especially during pregnancy. Special consideration must be given to the placenta as a biological barrier because a pregnant woman's exposure to nanoparticles could have significant effects on the fetus developing in the womb. Therefore, the purpose of this study is to optimize an in vitro model for characterizing the transport of nanoparticles across human placental trophoblast cells. Methods: The growth of BeWo (clone b30) human placental choriocarcinoma cells for nanoparticle transport studies was characterized in terms of optimized Transwell® insert type and pore size, the investigation of barrier properties by transmission electron microscopy, tight junction staining, transepithelial electrical resistance, and fluorescein sodium transport. Following the determination of nontoxic concentrations of fluorescent polystyrene nanoparticles, the cellular uptake and transport of 50 nm and 100 nm diameter particles was measured using the in vitro BeWo cell model. Results: Particle size measurements, fluorescence readings, and confocal microscopy indicated both cellular uptake of

  8. Determining optimal pinger spacing for harbour porpoise bycatch mitigation

    DEFF Research Database (Denmark)

    Larsen, Finn; Krog, Carsten; Eigaard, Ole Ritzau

    2013-01-01

    A trial was conducted in the Danish North Sea hake gillnet fishery in July to September 2006 to determine whether the spacing of the Aquatec AQUAmark100 pinger could be increased without reducing the effectiveness of the pinger in mitigating harbour porpoise bycatch. The trial was designed as a c...

  9. Determining the optimal mix of federal and contract fire crews: a case study from the Pacific Northwest.

    Science.gov (United States)

    Geoffrey H. Donovan

    2006-01-01

    Federal land management agencies in the United States are increasingly relying on contract crews as opposed to agency fire crews. Despite this increasing reliance on contractors, there have been no studies to determine what the optimal mix of contract and agency fire crews should be. A mathematical model is presented to address this question and is applied to a case...

  10. An optimal control model of crop thinning in viticulture

    OpenAIRE

    Schamel Guenter H.; Schubert Stefan F.

    2016-01-01

    We develop an economic model of cluster thinning in viticulture to control for grape quantity harvested and grape quality, applying a simple optimal control model with the aim of raising grape quality and related economic profits. The model maximizes vineyard owner profits and allows us to discuss two relevant scenarios using a phase diagram analysis: (1) when the initial grape quantity is sufficiently small, thinning grapes will not be optimal, and (2) when the initial grape quantity is high enoug...

  11. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    Science.gov (United States)

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied for the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
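
    The stochastic (random-walk) side of this record can be illustrated in one dimension: simulate Brownian steps with a known diffusion coefficient and recover it from the mean squared displacement via Var[x(t)] = 2Dt. The coefficient, step count and particle count below are illustrative, and the flow profile of the real problem is ignored.

    ```python
    import random

    random.seed(3)

    D_TRUE = 5.0e-10      # m^2/s, a typical small-molecule diffusion coefficient
    DT = 1.0e-3           # s, time step
    STEPS = 500
    PARTICLES = 2000

    # Step standard deviation chosen so that Var[x(t)] = 2 * D * t.
    sigma = (2 * D_TRUE * DT) ** 0.5

    final = []
    for _ in range(PARTICLES):
        x = 0.0
        for _ in range(STEPS):
            x += random.gauss(0.0, sigma)
        final.append(x)

    t_total = STEPS * DT
    msd = sum(x * x for x in final) / PARTICLES   # mean squared displacement
    D_est = msd / (2 * t_total)
    ```

    The estimate converges to `D_TRUE` at a rate of roughly `sqrt(2 / PARTICLES)` in relative terms, which is why fitting procedures like those in the record average over many simulated trajectories.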

  12. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    OpenAIRE

    Zhang, Xiangsheng; Pan, Feng

    2015-01-01

    Aimed at the optimization of parameters in support vector machines (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm, which has better global searching ability. The algorithm detects and handles local convergence and exhibits a strong ability to avoid being trapped in local minima. The main steps of the method are shown. Simulation experiments demonstrate the effective...

  13. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    Directory of Open Access Journals (Sweden)

    Xiangsheng Zhang

    2015-01-01

    Full Text Available Aimed at the optimization of parameters in support vector machines (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm, which has better global searching ability. The algorithm detects and handles local convergence and exhibits a strong ability to avoid being trapped in local minima. The main steps of the method are shown. Simulation experiments demonstrate the effectiveness of the proposed algorithm.
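
    A sketch of plain particle swarm optimization is shown below, with a simple quadratic standing in for the SVM cross-validation error that the paper minimizes. This is standard PSO, not the paper's improved IPSO (which adds local-convergence detection and handling), and all settings are illustrative.

    ```python
    import random

    random.seed(2)

    def objective(x, y):
        """Stand-in for an SVM cross-validation error: minimum 0 at (3, -1)."""
        return (x - 3.0) ** 2 + (y + 1.0) ** 2

    N, ITERS = 20, 100
    W, C1, C2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients

    pos = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(N)]
    vel = [[0.0, 0.0] for _ in range(N)]
    pbest = [p[:] for p in pos]                       # personal bests
    gbest = min(pbest, key=lambda p: objective(*p))[:]  # global best

    for _ in range(ITERS):
        for i in range(N):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (W * vel[i][d]
                             + C1 * r1 * (pbest[i][d] - pos[i][d])
                             + C2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(*pos[i]) < objective(*pbest[i]):
                pbest[i] = pos[i][:]
                if objective(*pos[i]) < objective(*gbest):
                    gbest = pos[i][:]
    ```

    In an SVM application, the two coordinates would be the regularization and kernel parameters, and `objective` would run a cross-validated training instead of a closed-form quadratic.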

  14. Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory

    Science.gov (United States)

    Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael

    2016-01-01

    into Conceptual and Pre-Conceptual design, knowledge of the effects originating from changes to the vehicle must be calculated. In order to do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling, a parametric, polynomial equation representing a tool. A surrogate model must be informed by data from the tool with enough points to represent the solution space for the chosen number of variables with an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space to maximize information gained on the design space while minimizing number of data points required. To represent a design space with a non-trivial number of variable parameters the number of points required still represent an amount of work which would take an inordinate amount of time via the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, determining whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors. The tool was then augmented with multiprocessing capability to enable analysis on multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work the authors discuss issues with the method and solutions.

  15. Determining the optimal monetary policy instrument for Nigeria

    OpenAIRE

    Udom, Solomon I.; Yaaba, Baba N.

    2015-01-01

    It is considered inapt for central banks to adjust reserve money (the quantity of money) and the interest rate (the price of money) at the same time. This necessitates the choice of a single instrument. Ample evidence in microeconomic theory shows the undesirability of manipulating both price and quantity simultaneously in a free market structure. The market, in line with the consensus among economists, either controls the price and allows quantity to be determined by market forces, or influences qu...

  16. An optimization strategy for a biokinetic model of inhaled radionuclides

    International Nuclear Information System (INIS)

    Shyr, L.J.; Griffith, W.C.; Boecker, B.B.

    1991-01-01

    Models for material disposition and dosimetry involve predictions of the biokinetics of the material among compartments representing organs and tissues in the body. Because of a lack of human data for most toxicants, many of the basic data are derived by modeling the results obtained from studies using laboratory animals. Such a biomathematical model is usually developed by adjusting the model parameters to make the model predictions match the measured retention and excretion data visually. The fitting process can be very time-consuming for a complicated model, and visual model selections may be subjective and easily biased by the scale or the data used. Due to the development of computerized optimization methods, manual fitting could benefit from an automated process. However, for a complicated model, an automated process without an optimization strategy will not be efficient, and may not produce fruitful results. In this paper, procedures for, and implementation of, an optimization strategy for a complicated mathematical model is demonstrated by optimizing a biokinetic model for 144Ce in fused aluminosilicate particles inhaled by beagle dogs. The optimized results using SimuSolv were compared to manual fitting results obtained previously using the model simulation software GASP. Also, statistical criteria provided by SimuSolv, such as likelihood function values, were used to help or verify visual model selections

  17. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    Full Text Available The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification is proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe the complex system more comprehensively, and that the survival probability of the target is higher when qualitative models are introduced into the quantitative simulation.

  18. Optimal treatment interruptions control of TB transmission model

    Science.gov (United States)

    Nainggolan, Jonner; Suparwati, Titik; Kawuwung, Westy B.

    2018-03-01

    A tuberculosis model which incorporates treatment interruptions of infectives is established. Optimal control of individuals infected with active TB is considered in the model. It is found that the control reproduction number is smaller than the reproduction number, meaning that treatment controls can optimally reduce the spread of active TB. In this model, controls on the treatment of infected individuals are applied to reduce the actively infected population, using Pontryagin's Maximum Principle for optimal control. The result further emphasizes the importance of controlling disease relapse in reducing the numbers of actively infected individuals and of individuals with treatment interruptions.

  19. Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis

    Science.gov (United States)

    Lozovanu, D.; Pickl, S. W.; Weber, G.-W.

    2004-08-01

    This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the properties of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied to the dual problem. An introduction to the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotonicity for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.

  20. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies or optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
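
    The flavor of an MDP-based maintenance model can be sketched with three pavement condition states and two actions. The paper itself uses an average-cost MDP solved through the dual linear program; the sketch below instead uses the simpler discounted value iteration, and all transition probabilities and costs are invented.

    ```python
    # States: 0 = good, 1 = fair, 2 = poor. Actions: do nothing, or maintain.
    # Rows of each matrix give transition probabilities from a state.
    P = {
        "nothing":  [[0.8, 0.2, 0.0],
                     [0.0, 0.7, 0.3],
                     [0.0, 0.0, 1.0]],   # unmaintained poor pavement stays poor
        "maintain": [[1.0, 0.0, 0.0],
                     [0.9, 0.1, 0.0],
                     [0.0, 0.8, 0.2]],
    }
    COST = {"nothing": [0.0, 2.0, 10.0], "maintain": [1.0, 3.0, 8.0]}
    GAMMA = 0.95   # discount factor

    # Value iteration: repeatedly take the cheapest action in expectation.
    V = [0.0, 0.0, 0.0]
    for _ in range(500):
        V = [min(COST[a][s] + GAMMA * sum(p * v for p, v in zip(P[a][s], V))
                 for a in P)
             for s in range(3)]

    # Greedy policy with respect to the converged values.
    policy = [min(P, key=lambda a: COST[a][s] + GAMMA * sum(
        p * v for p, v in zip(P[a][s], V))) for s in range(3)]
    ```

    With these numbers the optimal policy leaves good pavement alone and maintains fair and poor pavement, the kind of threshold policy such models typically produce.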

  1. Optlang: An algebraic modeling language for mathematical optimization

    DEFF Research Database (Denmark)

    Jensen, Kristian; Cardoso, Joao; Sonnenschein, Nikolaus

    2016-01-01

    Optlang is a Python package implementing a modeling language for solving mathematical optimization problems, i.e., maximizing or minimizing an objective function over a set of variables subject to a number of constraints. It provides a common native Python interface to a series of optimization...

  2. Correlations in state space can cause sub-optimal adaptation of optimal feedback control models.

    Science.gov (United States)

    Aprasoff, Jonathan; Donchin, Opher

    2012-04-01

    Control of our movements is apparently facilitated by an adaptive internal model in the cerebellum. It was long thought that this internal model implemented an adaptive inverse model and generated motor commands, but recently many reject that idea in favor of a forward model hypothesis. In theory, the forward model predicts upcoming state during reaching movements so the motor cortex can generate appropriate motor commands. Recent computational models of this process rely on the optimal feedback control (OFC) framework of control theory. Although OFC is a powerful tool for describing motor control, it does not describe adaptation. Some assume that adaptation of the forward model alone could explain motor adaptation, but this is widely understood to be overly simplistic. However, an adaptive optimal controller is difficult to implement. A reasonable alternative is to allow forward model adaptation to 're-tune' the controller. Our simulations show that, as expected, forward model adaptation alone does not produce optimal trajectories during reaching movements perturbed by force fields. However, they also show that re-optimizing the controller from the forward model can be sub-optimal. This is because, in a system with state correlations or redundancies, accurate prediction requires different information than optimal control. We find that adding noise to the movements that matches noise found in human data is enough to overcome this problem. However, since the state space for control of real movements is far more complex than in our simple simulations, the effects of correlations on re-adaptation of the controller from the forward model cannot be overlooked.

  3. Modified Chaos Particle Swarm Optimization-Based Optimized Operation Model for Stand-Alone CCHP Microgrid

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2017-07-01

    The optimized dispatch of different distributed generations (DGs) in a stand-alone microgrid (MG) is of great significance to the operation's reliability and economy, especially under energy crisis and environmental pollution. Based on controllable load (CL) and a combined cooling-heating-power (CCHP) model of a micro-gas turbine (MT), a multi-objective optimization model with relevant constraints to optimize the generation cost, load cut compensation and environmental benefit is proposed in this paper. The MG studied in this paper consists of photovoltaic (PV), wind turbine (WT), fuel cell (FC), diesel engine (DE), MT and energy storage (ES). Four typical scenarios were designed according to different day types (work day or weekend) and weather conditions (sunny or rainy) in view of the uncertainty of renewable energy in variable situations and load fluctuation. A modified dispatch strategy for CCHP is presented to further improve the operation economy without reducing the consumers' comfort. Chaotic optimization and an elite retention strategy are introduced into basic particle swarm optimization (PSO) to propose modified chaos particle swarm optimization (MCPSO), whose search capability and convergence speed are greatly improved. Simulation results validate the correctness of the proposed model and the effectiveness of the MCPSO algorithm in the optimized operation of a stand-alone MG.
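    The chaos-seeding idea behind MCPSO can be sketched in a few lines: a logistic-map sequence spreads the initial swarm over the search box, followed by a standard PSO loop. The elite-retention step, chaotic local search, and the actual dispatch objective are omitted here; the sphere function is a stand-in.

```python
import random

# Minimal chaos-seeded PSO on a stand-in objective (sphere function).
# The paper's MCPSO additionally uses elite retention and applies the
# search to a CCHP dispatch cost; this sketch shows only logistic-map
# initialization plus a basic PSO loop.
random.seed(1)

def objective(x):                      # stand-in for the dispatch cost
    return sum(xi * xi for xi in x)

DIM, N, ITERS = 4, 20, 200
LO, HI = -5.0, 5.0

def chaotic_point(z):
    """Next DIM values of the logistic map, scaled into the search box."""
    x = []
    for _ in range(DIM):
        z = 4.0 * z * (1.0 - z)       # fully chaotic at r = 4
        x.append(LO + (HI - LO) * z)
    return x, z

swarm, z = [], random.random()
for _ in range(N):
    x, z = chaotic_point(z)
    swarm.append({"x": x, "v": [0.0] * DIM, "best": x[:], "bf": objective(x)})

gbest = min(swarm, key=lambda p: p["bf"])["best"][:]
gbf = objective(gbest)

w, c1, c2 = 0.7, 1.5, 1.5             # inertia and acceleration weights
for _ in range(ITERS):
    for p in swarm:
        for d in range(DIM):
            p["v"][d] = (w * p["v"][d]
                         + c1 * random.random() * (p["best"][d] - p["x"][d])
                         + c2 * random.random() * (gbest[d] - p["x"][d]))
            p["x"][d] = min(HI, max(LO, p["x"][d] + p["v"][d]))
        f = objective(p["x"])
        if f < p["bf"]:
            p["best"], p["bf"] = p["x"][:], f
            if f < gbf:
                gbest, gbf = p["x"][:], f

print(round(gbf, 6))
```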

  4. A new hybrid model optimized by an intelligent optimization algorithm for wind speed forecasting

    International Nuclear Information System (INIS)

    Su, Zhongyue; Wang, Jianzhou; Lu, Haiyan; Zhao, Ge

    2014-01-01

    Highlights: • A new hybrid model is developed for wind speed forecasting. • The model is based on the Kalman filter and the ARIMA. • An intelligent optimization method is employed in the hybrid model. • The new hybrid model has good performance in western China. - Abstract: Forecasting the wind speed is indispensable in wind-related engineering studies and is important in the management of wind farms. As a technique essential for the future of clean energy systems, reducing the forecasting errors related to wind speed has always been an important research subject. In this paper, an optimized hybrid method based on the Autoregressive Integrated Moving Average (ARIMA) and Kalman filter is proposed to forecast the daily mean wind speed in western China. This approach employs Particle Swarm Optimization (PSO) as an intelligent optimization algorithm to optimize the parameters of the ARIMA model, which develops a hybrid model that is best adapted to the data set, increasing the fitting accuracy and avoiding over-fitting. The proposed method is subsequently examined on the wind farms of western China, where the proposed hybrid model is shown to perform effectively and steadily
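    The Kalman-filter half of such a hybrid can be illustrated with a minimal scalar (random-walk) filter on synthetic wind-speed-like data; the ARIMA state-space form and the PSO parameter tuning of the paper are omitted, and the noise variances below are assumed values.

```python
import random

# Minimal scalar Kalman filter (random-walk state model) on synthetic
# wind-speed-like observations; illustrative only. The paper instead
# filters an ARIMA state-space model whose parameters are tuned by PSO.
random.seed(0)

true_speed = 8.0
obs = [true_speed + random.gauss(0.0, 1.5) for _ in range(200)]

q, r = 0.01, 1.5 ** 2      # process / measurement noise variances (assumed)
x, p = obs[0], 1.0         # state estimate and its variance

estimates = []
for z in obs:
    p += q                 # predict: state persists, uncertainty grows
    k = p / (p + r)        # Kalman gain
    x += k * (z - x)       # update with the innovation
    p *= (1.0 - k)
    estimates.append(x)

# The filtered estimate should track the truth more closely than raw data.
err_raw = sum(abs(z - true_speed) for z in obs) / len(obs)
err_kf = sum(abs(e - true_speed) for e in estimates[20:]) / len(estimates[20:])
print(round(err_raw, 3), round(err_kf, 3))
```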

  5. Determining Optimal Allocation of Naval Obstetric Resources with Linear Programming

    Science.gov (United States)

    2013-12-01

    ...effective manner. Additionally, the model can accommodate changes in the inputs and constraints and can be used to provide support for similar... Subject terms: Naval Hospital Camp Pendleton (NHCP), Naval Hospital Camp Lejeune (NHCL), labor delivery and recovery (LDR).

  6. Multiple Surrogate Modeling for Wire-Wrapped Fuel Assembly Optimization

    International Nuclear Information System (INIS)

    Raza, Wasim; Kim, Kwang-Yong

    2007-01-01

    In this work, shape optimization of a seven-pin wire-wrapped fuel assembly has been carried out in conjunction with RANS analysis in order to evaluate the performance of surrogate models. Previously, Ahmad and Kim performed the flow and heat transfer analysis based on three-dimensional RANS analysis, but numerical optimization has not yet been applied to the design of wire-wrapped fuel assemblies. Surrogate models are widely used in multidisciplinary optimization. Queipo et al. reviewed various surrogate-based models used in aerospace applications. Goel et al. developed a weighted-average surrogate model based on response surface approximation (RSA), radial basis neural network (RBNN) and Kriging (KRG) models. In addition to the three basic models RSA, RBNN and KRG, the multiple surrogate model PBA has also been employed. Two geometric design variables and a multi-objective function with a weighting factor have been considered for this problem

  7. Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques

    International Nuclear Information System (INIS)

    Hernandez, V.; Abella, R.; Calvo, J. F.; Jurado-Bruggemann, D.; Sancho, I.; Carrasco, P.

    2015-01-01

    Purpose: Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. Methods: The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100 000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Results: Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Conclusions: Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended
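    The core dynalog computation, counting for each candidate tolerance how many recorded leaf deviations would exceed it, can be sketched on synthetic data. The deviations below are drawn from an invented distribution, not the multicenter dynalog set, so the fractions are only qualitative.

```python
import random

# Count MLC tolerance faults as a function of the user-defined tolerance,
# using synthetic planned-vs-actual leaf deviations (invented data, not
# the multicenter dynalog measurements).
random.seed(42)

# Simulated absolute leaf-position deviations in mm for one beam.
deviations = [abs(random.gauss(0.0, 0.6)) for _ in range(5000)]

def fault_fraction(tolerance_mm):
    """Fraction of recorded deviations that would trigger a fault."""
    faults = sum(1 for d in deviations if d > tolerance_mm)
    return faults / len(deviations)

for tol in (1.0, 1.5, 2.0, 2.5):
    print(f"tolerance {tol} mm -> fault fraction {fault_fraction(tol):.4f}")
```

As in the study, the fault count drops sharply as the tolerance grows, which is what makes the lowest achievable tolerance identifiable from the dynalog record alone.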

  8. Particle swarm optimization of a neural network model in a ...

    Indian Academy of Sciences (India)

    . Since tool life is critically affected by the tool wear, accurate prediction of this wear ... In their work, they established an improvement in the quality ... objective optimization of hard turning using neural network modelling and swarm intelligence ...

  9. Analysis and optimization of a camber morphing wing model

    Directory of Open Access Journals (Sweden)

    Bing Li

    2016-09-01

    This article proposes a camber morphing wing model that can continuously change its camber. A mathematical model is proposed and a kinematic simulation is performed to verify the wing’s ability to change camber. An aerodynamic model is used to test its aerodynamic characteristics. Some important aerodynamic analyses are performed. A comparative analysis is conducted to explore the relationships between aerodynamic parameters, the rotation angle of the trailing edge, and the angle of attack. An improved artificial fish swarm optimization algorithm is proposed, referred to as the weighted adaptive artificial fish-swarm with embedded Hooke–Jeeves search method. Some comparison tests are used to test the performance of the improved optimization algorithm. Finally, the proposed optimization algorithm is used to optimize the proposed camber morphing wing model.
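    The embedded Hooke–Jeeves pattern search can be sketched in isolation; the fish-swarm wrapper is omitted and a simple quadratic stands in for the aerodynamic objective.

```python
# Minimal Hooke-Jeeves pattern search, shown on a stand-in quadratic
# objective. The paper embeds this local search inside a weighted
# adaptive artificial fish-swarm and applies it to the wing model.
def objective(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def explore(x, step, f):
    """Axis-by-axis exploratory moves around x."""
    x = x[:]
    for i in range(len(x)):
        for delta in (step, -step):
            trial = x[:]
            trial[i] += delta
            ft = objective(trial)
            if ft < f:
                x, f = trial, ft
                break
    return x, f

def hooke_jeeves(x0, step=1.0, shrink=0.5, tol=1e-6):
    base, fb = x0[:], objective(x0)
    while step > tol:
        new, fn = explore(base, step, fb)
        if fn < fb:
            # Pattern move: jump along the improving direction, then
            # explore around the jumped-to point.
            pattern = [2.0 * n - b for n, b in zip(new, base)]
            pat2, fp2 = explore(pattern, step, objective(pattern))
            if fp2 < fn:
                base, fb = pat2, fp2
            else:
                base, fb = new, fn
        else:
            step *= shrink          # no improvement: refine the mesh
    return base, fb

best, fbest = hooke_jeeves([4.0, 4.0])
print([round(v, 4) for v in best], round(fbest, 8))
```

On this objective the search lands on the exact minimum at (1, -0.5).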

  10. Optimization and evaluation of probabilistic-logic sequence models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    to, in principle, Turing-complete languages. In general, such models are computationally far too complex for direct use, so optimization by pruning and approximation is needed. The first steps are made towards a methodology for optimizing such models by approximations using auxiliary models......Analysis of biological sequence data demands more and more sophisticated and fine-grained models, but these in turn introduce hard computational problems. A class of probabilistic-logic models is considered, which increases the expressibility from HMMs' and SCFGs' regular and context-free languages...

  11. Adaptive surrogate model based multiobjective optimization for coastal aquifer management

    Science.gov (United States)

    Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin

    2018-06-01

    In this study, a novel surrogate model assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies of large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorted genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence in optimization. The surrogate model based on Kernel Extreme Learning Machine (KELM) is developed and evaluated as an approximate simulator to generate the patterns of regional groundwater flow and salinity levels in coastal aquifers for reducing huge computational burden. The KELM model is adaptively trained during evolutionary search to satisfy the desired fidelity level of the surrogate so that it inhibits error accumulation of forecasting and results in correctly converging to the true Pareto-optimal front. The proposed methodology is then applied to a large-scale coastal aquifer management in Baldwin County, Alabama. Objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those solutions obtained from the one-shot surrogate model and the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions compared with those by the one-shot surrogate model, but also maintains the equivalent quality of Pareto-optimal solutions compared with those by NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing computational burden, with time savings of up to 94%. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.

  12. Reduced order modeling in topology optimization of vibroacoustic problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas

    2017-01-01

    complex 3D parts. The optimization process can therefore become highly time consuming due to the need to solve a large system of equations at each iteration. Projection-based parametric Model Order Reduction (pMOR) methods have successfully been applied for reducing the computational cost of material......There is an interest in introducing topology optimization techniques in the design process of structural-acoustic systems. In topology optimization, the design space must be finely meshed in order to obtain an accurate design, which results in large numbers of degrees of freedom when designing...... or size optimization in large vibroacoustic models; however, new challenges are encountered when dealing with topology optimization. Since a design parameter per element is considered, the total number of design variables becomes very large; this poses a challenge to most existing pMOR techniques, which...

  13. Modeling and optimization of dough recipe for breadsticks

    Science.gov (United States)

    Krivosheev, A. Yu; Ponomareva, E. I.; Zhuravlev, A. A.; Lukina, S. I.; Alekhina, N. N.

    2018-05-01

    During this work, the authors studied the combined effect of non-traditional raw materials on quality indicators of breadsticks; mathematical methods of experiment planning were applied. The main factors chosen were the dosages of flaxseed flour and grape seed oil. The output parameters were the swelling factor of the products and their strength. Optimization of the formulation composition of the dough for breadsticks was carried out by experimental-statistical methods. As a result of the experiment, mathematical models were constructed in the form of regression equations adequately describing the process under study. The statistical processing of the experimental data was carried out using the Student, Cochran and Fisher criteria (with a confidence probability of 0.95). A mathematical interpretation of the regression equations was given. Optimization of the formulation of the dough for breadsticks was carried out by the method of undetermined Lagrange multipliers. The rational values of the factors were determined: a dosage of flaxseed flour of 14.22% and of grape seed oil of 7.8%, ensuring the production of products with the best combination of swelling ratio and strength. On the basis of the data obtained, a recipe and a method for the production of breadsticks "Idea" were proposed (TU (Russian Technical Specifications) 9117-443-02068106-2017).

  14. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process. Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  15. Optimization of the indirect neutron activation technique for the determination of boron in aqueous solutions

    International Nuclear Information System (INIS)

    Luz, L.C.Q.P. da.

    1984-01-01

    The purpose of this work was the development of an instrumental method for the optimization of the indirect neutron activation analysis of boron in aqueous solutions. The optimization took into account the analytical parameters under laboratory conditions: activation carried out with a 241 Am/Be neutron source and detection of the activity induced in vanadium with two NaI(Tl) gamma spectrometers. A calibration curve was thus obtained for a concentration range of 0 to 5000 ppm B. Later on, experimental models were built in order to study the feasibility of automation. The analysis of boron was finally performed, under the previously established conditions, with an automated system comprising the operations of transport, irradiation and counting. An improvement in the quality of the analysis was observed, with boron concentrations as low as 5 ppm being determined with a precision level better than 0.4%. The experimental model features all basic design elements for an automated device for the analysis of boron in aqueous solutions wherever this is required, as in the operation of nuclear reactors. (Author) [pt]

  16. RF building block modeling: optimization and synthesis

    NARCIS (Netherlands)

    Cheng, W.

    2012-01-01

    For circuit designers it is desirable to have relatively simple RF circuit models that do give decent estimation accuracy and provide sufficient understanding of circuits. Chapter 2 in this thesis shows a general weak nonlinearity model that meets these demands. Using a method that is related to

  17. Optimal blood glucose level control using dynamic programming based on minimal Bergman model

    Science.gov (United States)

    Rettian Anggita Sari, Maria; Hartono

    2018-03-01

    The purpose of this article is to simulate the glucose dynamics and insulin kinetics of a diabetic patient. The model used in this research is the non-linear minimal Bergman model. Optimal control theory is then applied to formulate the problem of determining the optimal dose of insulin in the treatment of diabetes mellitus such that the glucose level stays in the normal range over a specific time range. The optimization problem is solved using dynamic programming. The result shows that dynamic programming is quite reliable in representing the interaction between glucose and insulin levels in a diabetes mellitus patient.
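    The minimal Bergman model itself is easy to simulate with a forward Euler scheme. The parameter values below are illustrative order-of-magnitude choices, not the paper's, and the dynamic-programming controller is replaced by a constant insulin infusion for brevity.

```python
# Euler simulation of the nonlinear minimal Bergman model:
#   dG/dt = -(p1 + X)*G + p1*Gb          (plasma glucose)
#   dX/dt = -p2*X + p3*(I - Ib)          (remote insulin action)
#   dI/dt = -n*(I - Ib) + u(t)           (plasma insulin with infusion u)
# All parameter values are illustrative assumptions, and the paper's
# dynamic-programming insulin dose u(t) is replaced by a constant infusion.
p1, p2, p3 = 0.028, 0.025, 1.3e-5   # rate constants (assumed)
n = 0.09                            # insulin clearance rate, 1/min (assumed)
Gb, Ib = 90.0, 7.0                  # basal glucose (mg/dl), insulin (uU/ml)

G, X, I = 250.0, 0.0, Ib            # hyperglycemic initial state
u = 1.0                             # constant insulin infusion (assumed units)
dt, T = 0.1, 400.0                  # step and horizon, minutes

t = 0.0
while t < T:
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (I - Ib)
    dI = -n * (I - Ib) + u
    G, X, I = G + dt * dG, X + dt * dX, I + dt * dI
    t += dt

print(round(G, 1), round(I, 2))
```

The constant infusion drives insulin toward Ib + u/n and glucose from the hyperglycemic start down toward its new steady state; an optimal controller would instead shape u(t) over time to meet the normal-range constraint at minimum cost.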

  18. Determination of an Optimal Control Strategy for a Generic Surface Vehicle

    Science.gov (United States)

    2014-06-18

    Subject terms: Autonomous Vehicles; Boundary Value Problem; Dynamic Programming; Surface Vehicles; Optimal Control; Path Planning. ...to follow prescribed motion trajectories. In particular, for autonomous vehicles, this motion trajectory is given by the determination of the

  19. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    Science.gov (United States)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.

  20. Determining the Optimal Design for a New ADR Mechanical Support

    Science.gov (United States)

    Waldvogel, Kelly; Stacey, Gordon; Nikola, Thomas; Parshley, Stephen

    2018-01-01

    ZEUS-2 is a grating spectrometer that is used to observe emission lines in submillimeter wavelengths. It is capable of detecting redshifted fine structure lines of galaxies over a wide redshift range. ZEUS-2 can observe carbon, nitrogen, and oxygen lines, which will in turn allow for modeling of optically thick molecular clouds, provide information about star temperatures, and help gain insight about the interstellar medium and gases from which stars form. The detections collected by ZEUS-2 can provide a glimpse into star formation in the early universe and improve the current understanding of the star formation process. ZEUS-2 utilizes an Adiabatic Demagnetization Refrigerator (ADR) to cool its detectors to around 100 mK. Copper rods connect the salt pills within the ADR and the mechanical supports. These supports are comprised of three main pieces: a base member, an inner member, and a guard member. On two separate mechanical supports, the Kevlar strands have broken. This led to thermal contact between the three members, preventing the detector from reaching its final operating temperature. It is clear that a replacement mechanical support system is necessary for operation.

  1. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration
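    The retrieval step of a vector-model approach, finding the most similar prior case by a similarity measure over extracted features, can be sketched with cosine similarity. The feature names and case database below are invented placeholders, not the DICOM-derived structural and physiologic features of the study.

```python
import math

# Retrieve the most similar prior case by cosine similarity over a
# feature vector. The features and case database are invented
# placeholders, not the paper's DICOM-derived feature set.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# (case id, feature vector), e.g. normalized target volume, overlap
# fractions with organs at risk, patient separation (all hypothetical).
reference_db = [
    ("case_001", [0.82, 0.31, 0.45, 0.60]),
    ("case_002", [0.40, 0.15, 0.70, 0.52]),
    ("case_003", [0.78, 0.29, 0.48, 0.58]),
]

test_case = [0.80, 0.30, 0.46, 0.59]

best_id, best_sim = max(
    ((cid, cosine(vec, test_case)) for cid, vec in reference_db),
    key=lambda t: t[1])
print(best_id, round(best_sim, 4))
```

The planning parameters of the retrieved case would then seed the optimizer for the test case, bypassing the trial-and-error warm-up that the study measures.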

  2. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration

  3. A model for an economically optimal replacement of a breeder flock

    NARCIS (Netherlands)

    Yassin, H.; Velthuis, A.G.J.; Giesen, G.W.J.; Oude Lansink, A.G.J.M.

    2012-01-01

    A deterministic model is developed to support the tactical and operational replacement decisions at broiler breeder farms. The marginal net revenue approach is applied to determine the optimal replacement age of a flock. The objective function of the model maximizes the annual gross margin over the

  4. Geometry and Topology Optimization of Statically Determinate Beams under Fixed and Most Unfavorably Distributed Load

    Directory of Open Access Journals (Sweden)

    Agata Kozikowska

    The paper concerns topology and geometry optimization of statically determinate beams with an arbitrary number of pin supports. The beams are simultaneously exposed to uniform dead load and arbitrarily distributed live load and optimized for the absolute maximum bending moment. First, all the beams with fixed topology are subjected to geometrical optimization by a genetic algorithm. Strict mathematical formulas for the calculation of optimal geometrical parameters are found for all topologies and any ratio of dead to live load. Then beams with the same minimal values of the objective function and different topologies are classified into groups called topological classes. The detailed characteristics of these classes are described.
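    For the simplest member of this family, a single span with two equal overhangs under uniform load only, the genetic-algorithm search is easy to sketch, and the known closed-form optimum a = (sqrt(2) - 1)L/2, about 0.207L, makes the result checkable. The GA operators and parameters below are generic choices, not the paper's.

```python
import random

# Tiny genetic algorithm locating the support offset a (equal overhangs)
# that minimizes the absolute maximum bending moment of a uniformly
# loaded beam of length L. Closed form: a = (sqrt(2)-1)/2 * L ~ 0.2071 L.
random.seed(7)
L, w = 10.0, 1.0

def max_moment(a):
    m_support = w * a * a / 2.0                    # hogging at the supports
    m_midspan = w * L * L / 8.0 - w * L * a / 2.0  # sagging at midspan
    return max(m_support, abs(m_midspan))

POP, GENS = 30, 60
pop = [random.uniform(0.0, L / 2.0) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=max_moment)
    elite = pop[:10]                            # truncation selection
    children = []
    while len(children) < POP - len(elite):
        p1, p2 = random.sample(elite, 2)
        child = 0.5 * (p1 + p2)                 # arithmetic crossover
        child += random.gauss(0.0, 0.1)         # Gaussian mutation
        children.append(min(L / 2.0, max(0.0, child)))
    pop = elite + children

best = min(pop, key=max_moment)
# Fine local scan around the GA result to polish the answer.
best = min((best + d / 1000.0 for d in range(-300, 301)), key=max_moment)
print(round(best, 4), round(max_moment(best), 4))
```

The search settles near a = 2.071 for L = 10, where the support and midspan moments balance; the paper derives such balance conditions in closed form for arbitrary topologies and dead-to-live load ratios.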

  5. Optimal inventory management and order book modeling

    KAUST Repository

    Baradel, Nicolas; Bouchard, Bruno; Evangelista, David; Mounjid, Othmane

    2018-01-01

    We model the behavior of three agent classes acting dynamically in a limit order book of a financial asset. Namely, we consider market makers (MM), high-frequency trading (HFT) firms, and institutional brokers (IB). Given a prior dynamic

  6. Optimal parametric modelling of measured short waves

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the importance of selecting a suitable sampling interval for better estimates of parametric modelling and also for better statistical representation. Implementation of the above algorithms in a structural monitoring system has the potential advantage of storing...

  7. Optimization models in a transition economy

    CERN Document Server

    Sergienko, Ivan V; Koshlai, Ludmilla

    2014-01-01

    This book opens new avenues in understanding mathematical models within the context of a  transition economy. The exposition lays out the methods for combining different mathematical structures and tools to effectively build the next model that will accurately reflect real world economic processes. Mathematical modeling of weather phenomena allows us to forecast certain essential weather parameters without any possibility of changing them. By contrast, modeling of transition economies gives us the freedom to not only predict changes in important indexes of all types of economies, but also to influence them more effectively in the desired direction. Simply put: any economy, including a transitional one, can be controlled. This book is useful to anyone who wants to increase profits within their business, or improve the quality of their family life and the economic area they live in. It is beneficial for undergraduate and graduate students specializing in the fields of Economic Informatics, Economic Cybernetic...

  8. Optimization of experimental human leukemia models (review

    Directory of Open Access Journals (Sweden)

    D. D. Pankov

    2012-01-01

    The actual problem of assessing the prospects of immunotherapy, including antigen-specific cell therapy, using animal models is covered in this review. The various groups of currently existing animal models and the methods of creating them are described, from different immunodeficient mice to several variants of tumor cell engraftment in them. The review addresses the possibility of studying tumor stem cells using mouse models for leukemia treatment with adoptive cell therapy, including WT1. Issues of human leukemia cell migration and proliferation in mice with different degrees of immunodeficiency are also discussed. To assess the potential efficacy of immunotherapy, a comparison of immunodeficient mouse models with the clinical situation in oncology patients after chemotherapy is proposed.

  9. Optimality models in the age of experimental evolution and genomics.

    Science.gov (United States)

    Bull, J J; Wang, I-N

    2010-09-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimental context with a well-researched organism allows dissection of the evolutionary process to identify causes of model failure--whether the model is wrong about genetics or selection. Second, optimality models provide a meaningful context for the process and mechanics of evolution, and thus may be used to elicit realistic genetic bases of adaptation--an especially useful augmentation to well-researched genetic systems. A few studies of microbes have begun to pioneer this new direction. Incompatibility between the assumed and actual genetics has been demonstrated to be the cause of model failure in some cases. More interestingly, evolution at the phenotypic level has sometimes matched prediction even though the adaptive mutations defy mechanisms established by decades of classic genetic studies. Integration of experimental evolutionary tests with genetics heralds a new wave for optimality models and their extensions that does not merely emphasize the forces driving evolution.

  10. DETERMINING THE OPTIMAL PORTFOLIO IN A STOCK MARKET MOVING ACCORDING TO A MULTIDIMENSIONAL GEOMETRIC BROWNIAN MOTION MODEL

    Directory of Open Access Journals (Sweden)

    RISKA YUNITA

    2015-06-01

    Full Text Available A model of stock price movements that follow a stochastic process can be formulated as a Stochastic Differential Equation (SDE). The exact solution of the SDE model is called the Geometric Brownian Motion (GBM) model. This research determines the optimal portfolio of three assets that follow the multidimensional GBM model. The multidimensional GBM model represents future stock prices as affected by three parameters: the expected stock return, the stock risk, and the correlation between stock returns. Markowitz portfolio theory is therefore used for the formation of the optimal portfolio. The Markowitz portfolio formulates the same three parameters, which are calculated from the multidimensional GBM model. The result of this research is an optimal portfolio reached with fund proportions of 39.38% for stock BBCA, 59.82% for stock ICBP, and 0.80% for stock INTP. These proportions reflect the parameter values calculated in modelling the stock prices.
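
The pieces this abstract combines, correlated GBM price paths and Markowitz weights, can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the drift, volatility, and correlation values are placeholders, and the closed form shown is the global minimum-variance special case of Markowitz optimization.

```python
import numpy as np

def gbm_paths(s0, mu, sigma, corr, T=1.0, steps=252, seed=0):
    """Simulate correlated multidimensional Geometric Brownian Motion paths."""
    s0, mu, sigma = map(np.asarray, (s0, mu, sigma))
    rng = np.random.default_rng(seed)
    dt = T / steps
    L = np.linalg.cholesky(np.asarray(corr))   # correlates the Brownian increments
    logs = np.log(s0.astype(float))
    path = [np.exp(logs)]
    for _ in range(steps):
        z = L @ rng.standard_normal(len(logs))
        logs = logs + (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
        path.append(np.exp(logs))
    return np.array(path)

def min_variance_weights(cov):
    """Global minimum-variance Markowitz weights: w proportional to inv(Sigma) @ 1."""
    w = np.linalg.solve(np.asarray(cov), np.ones(len(cov)))
    return w / w.sum()

# illustrative (made-up) parameters for three stocks
corr = np.array([[1.0, 0.3, 0.2], [0.3, 1.0, 0.4], [0.2, 0.4, 1.0]])
paths = gbm_paths([100.0, 50.0, 75.0], [0.08, 0.10, 0.06], [0.20, 0.30, 0.25], corr)
returns = np.diff(np.log(paths), axis=0)
w = min_variance_weights(np.cov(returns.T))
```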

  11. Optimal Pricing and Advertising Policies for New Product Oligopoly Models

    OpenAIRE

    Gerald L. Thompson; Jinn-Tsair Teng

    1984-01-01

    In this paper our previous work on monopoly and oligopoly new product models is extended by the addition of pricing as well as advertising control variables. These models contain Bass's demand growth model, and the Vidale-Wolfe and Ozga advertising models, as well as the production learning curve model and an exponential demand function. The problem of characterizing an optimal pricing and advertising policy over time is an important question in the field of marketing as well as in the areas ...

  12. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer's risk and consumer's risk. Implementation of the algorithm has been illustrated numerically for different choices of the quantities involved in a double sampling plan.
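
The structure of the problem, an operating characteristic function plus producer's and consumer's risk constraints, can be sketched as follows. This is not the paper's genetic algorithm; a plain random search stands in for it, and the risk targets and search ranges are invented for illustration.

```python
import math, random

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def binom_cdf(k, n, p):
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

def accept_prob(n1, c1, n2, c2, p):
    """Operating characteristic of a double sampling plan at lot fraction defective p."""
    pa = binom_cdf(c1, n1, p)                      # accepted on the first sample
    for d1 in range(c1 + 1, c2 + 1):               # second sample needed
        pa += binom_pmf(d1, n1, p) * binom_cdf(c2 - d1, n2, p)
    return pa

def search_plan(aql, ltpd, alpha, beta, trials=3000, seed=1):
    """Random search over (n1, c1, n2, c2) minimising total sample size."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        n1, n2 = rng.randint(20, 150), rng.randint(20, 150)
        c1 = rng.randint(0, 4)
        c2 = rng.randint(c1 + 1, c1 + 5)
        if (accept_prob(n1, c1, n2, c2, aql) >= 1.0 - alpha
                and accept_prob(n1, c1, n2, c2, ltpd) <= beta):
            if best is None or n1 + n2 < best[0]:
                best = (n1 + n2, n1, c1, n2, c2)
    return best

plan = search_plan(aql=0.01, ltpd=0.06, alpha=0.05, beta=0.10)
```

A genetic algorithm would replace the independent random draws with selection, crossover, and mutation over the same four decision variables.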

  1. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization

    Science.gov (United States)

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite 6-thioguanine nucleotide (6-TGN) through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces significant variation in drug response among the patient population. Despite 6-MP's widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarcity of data in clinical settings, a model reduction approach based on global sensitivity analysis is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression, enzyme phenotype, drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug instead of the traditional standard-dose-for-all approach. PMID:26226448

  2. Modeling the minimum enzymatic requirements for optimal cellulose conversion

    International Nuclear Information System (INIS)

    Den Haan, R; Van Zyl, W H; Van Zyl, J M; Harms, T M

    2013-01-01

    Hydrolysis of cellulose is achieved by the synergistic action of endoglucanases, exoglucanases and β-glucosidases. Most cellulolytic microorganisms produce a varied array of these enzymes and the relative roles of the components are not easily defined or quantified. In this study we have used partially purified cellulases produced heterologously in the yeast Saccharomyces cerevisiae to increase our understanding of the roles of some of these components. CBH1 (Cel7), CBH2 (Cel6) and EG2 (Cel5) were separately produced in recombinant yeast strains, allowing their isolation free of any contaminating cellulolytic activity. Binary and ternary mixtures of the enzymes at loadings ranging between 3 and 100 mg g −1 Avicel allowed us to illustrate the relative roles of the enzymes and their levels of synergy. A mathematical model was created to simulate the interactions of these enzymes on crystalline cellulose, under both isolated and synergistic conditions. Laboratory results from the various mixtures at a range of loadings of recombinant enzymes allowed refinement of the mathematical model. The model can further be used to predict the optimal synergistic mixes of the enzymes. This information can subsequently be applied to help to determine the minimum protein requirement for complete hydrolysis of cellulose. Such knowledge will be greatly informative for the design of better enzymatic cocktails or processing organisms for the conversion of cellulosic biomass to commodity products. (letter)

  3. Airfoil Shape Optimization based on Surrogate Model

    Science.gov (United States)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

    Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure that the design objectives of the problem are met subject to various constraints. In most cases, the computational resources and time required per simulation are large. In certain cases, such as sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, this becomes prohibitively difficult for designers. Nowadays approximation models, otherwise called surrogate models (SM), are widely employed to reduce the computational resources and time required to analyse various engineering systems. Various approaches such as Kriging, neural networks, polynomials, and Gaussian processes are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms, which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
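
The k-fold cross-validation idea can be illustrated without a Kriging library: here polynomial surrogates of varying degree stand in for the competing variogram models, and a cheap analytic function stands in for the panel/viscous flow solver. Everything below is an illustrative sketch, not the authors' setup.

```python
import numpy as np

def kfold_cv_error(x, y, degree, k=5):
    """Mean held-out squared error of a polynomial surrogate of a given degree."""
    folds = np.array_split(np.arange(len(x)), k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(np.arange(len(x)), fold)
        coef = np.polyfit(x[train], y[train], degree)   # fit surrogate on k-1 folds
        pred = np.polyval(coef, x[fold])                # predict on the held-out fold
        errs.append(np.mean((pred - y[fold]) ** 2))
    return float(np.mean(errs))

# cheap analytic stand-in for an expensive flow solver
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2.0 * np.pi * x)

# pick the surrogate family with the lowest cross-validated error
best_degree = min(range(1, 7), key=lambda d: kfold_cv_error(x, y, d))
```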

  4. Aerodynamic Modelling and Optimization of Axial Fans

    DEFF Research Database (Denmark)

    Sørensen, Dan Nørtoft

    A numerically efficient mathematical model for the aerodynamics of low speed axial fans of the arbitrary vortex flow type has been developed. The model is based on a blade-element principle, whereby the rotor is divided into a number of annular streamtubes. For each of these streamtubes relations ... -Raphson method, and solutions converged to machine accuracy are found at small computing costs. The model has been validated against published measurements on various fan configurations, comprising two rotor-only fan stages, a counter-rotating fan unit and a stator-rotor-stator stage. Comparisons of local ... and integrated properties show that the computed results agree well with the measurements. Integrating a rotor-only version of the aerodynamic model with an algorithm for numerical design optimization enables the finding of an optimum fan rotor. The angular velocity of the rotor, the hub radius and the spanwise ...

  5. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    Science.gov (United States)

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision is hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. 
The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of

  6. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  7. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  8. Optimal designs for linear mixture models

    NARCIS (Netherlands)

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt [8] considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of this

  10. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
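
As a minimal sketch of the model classes being compared, the following simulates a correlated random walk in 2D (the paper works in 3D) and scores it with a meandering index; the step counts and turn-angle spreads are invented for illustration.

```python
import numpy as np

def walk(steps, turn_sd, speed=1.0, seed=0):
    """2D walk with normally distributed turns; a small turn_sd gives a persistent
    CRW, a large turn_sd approaches uncorrelated (Brownian-like) motion."""
    rng = np.random.default_rng(seed)
    heading, pos = 0.0, np.zeros(2)
    path = [pos.copy()]
    for _ in range(steps):
        heading += rng.normal(0.0, turn_sd)
        pos = pos + speed * np.array([np.cos(heading), np.sin(heading)])
        path.append(pos.copy())
    return np.array(path)

def meandering_index(path):
    """Net displacement divided by total path length (1 = perfectly straight)."""
    net = np.linalg.norm(path[-1] - path[0])
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return float(net / length)

mi_persistent = meandering_index(walk(500, turn_sd=0.05))
mi_brownian = meandering_index(walk(500, turn_sd=3.0))
```

Fitting such models against several metrics at once (translational speed, turn speed, meandering index) is what motivates the paper's multi-objective treatment.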

  11. Inverse modeling of FIB milling by dose profile optimization

    International Nuclear Information System (INIS)

    Lindsey, S.; Waid, S.; Hobler, G.; Wanzenböck, H.D.; Bertagnolli, E.

    2014-01-01

    FIB technologies possess a unique ability to form topographies that are difficult or impossible to generate with binary etching through typical photo-lithography. The ability to arbitrarily vary the spatial dose distribution and therefore the amount of milling opens possibilities for the production of a wide range of functional structures with applications in biology, chemistry, and optics. However, in practice the realization of these goals is made difficult by the angular dependence of the sputtering yield and by redeposition effects that vary as the topography evolves. An inverse modeling algorithm that optimizes dose profiles, defined as the superposition of time invariant pixel dose profiles (determined from the beam parameters and pixel dwell times), is presented. The response of the target to a set of pixel dwell times is modeled by numerical continuum simulations utilizing 1st and 2nd order sputtering and redeposition; the resulting surfaces are evaluated against a target topography in an error minimization routine. Two algorithms for the parameterization of pixel dwell times are presented: a direct pixel dwell time method, and an abstracted method that uses a refineable piecewise linear cage function to generate pixel dwell times from a minimal number of parameters. The cage function method demonstrates great flexibility and efficiency, with performance enhancements exceeding ∼10× compared to direct fitting for medium to large simulation sets. Furthermore, the refineable nature of the cage function enables solutions to adapt to the desired target function. The optimization algorithm, although working with stationary dose profiles, is demonstrated to be applicable also outside the quasi-static approximation. Experimental data confirm the viability of the solutions for 5 × 7 μm deep lens-like structures defined by 90 pixel dwell times

  12. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Energy Technology Data Exchange (ETDEWEB)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric

    2017-03-28

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
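
Example (1), choosing an optimal feedstock slate, reduces in its simplest form to a one-constraint linear program. The sketch below uses invented prices and availabilities; the greedy fill is exact only because a single shared capacity constraint remains, whereas the NREL/INL models require a general LP solver.

```python
def optimal_slate(margin, avail, capacity):
    """Fill plant capacity with the highest-margin feedstocks first.
    Exact for this one-shared-constraint LP; general models need an LP solver."""
    slate, remaining = {}, capacity
    for name in sorted(margin, key=margin.get, reverse=True):
        take = min(avail[name], remaining)
        slate[name] = take
        remaining -= take
    return slate, sum(margin[n] * t for n, t in slate.items())

# illustrative (invented) margins in $/ton; availability and capacity in tons
margin = {"corn_stover": 12.0, "forest_residue": 9.0}
avail = {"corn_stover": 600.0, "forest_residue": 900.0}
slate, profit = optimal_slate(margin, avail, capacity=1000.0)
```

Breakeven price analysis (example 2) follows the same structure: the breakeven price of a feedstock is the margin at which it first enters the optimal slate.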

  13. Modeling, Optimization & Control of Hydraulic Networks

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat

    2014-01-01

    Water supply systems consist of a number of pumping stations, which deliver water to customers via pipeline networks and elevated reservoirs. A huge amount of drinking water is lost before it reaches end-users due to leakage in pipe networks. A cost-effective solution to reduce leakage ... in water networks is pressure management. By reducing the pressure in the water network, the leakage can be reduced significantly. It also reduces the amount of energy consumption in water networks. The primary purpose of this work is to develop control algorithms for pressure control in water supply ... The nonlinear network model is derived based on circuit theory. A suitable projection is used to reduce the state vector and to express the model in standard state-space form. Then, the controllability of nonlinear nonaffine hydraulic networks is studied. The Lie algebra-based controllability matrix is used ...

  14. Optimal maintenance policies in incomplete repair models

    International Nuclear Information System (INIS)

    Kahle, Waltraud

    2007-01-01

    We consider an incomplete repair model, that is, the impact of repair is not minimal as in the homogeneous Poisson process and not 'as good as new' as in renewal processes but lies between these boundary cases. The repairs are assumed to impact the failure intensity following a virtual age process of the general form proposed by Kijima. In previous works field data from an industrial setting were used to fit several models. In most cases the estimated rate of occurrence of failures was that of an underlying exponential distribution of the time between failures. In this paper, it is shown that there exist maintenance schedules under which the failure behavior of the failure-repair process becomes a homogeneous Poisson process
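
The virtual age mechanism is straightforward to simulate. The sketch below assumes a Weibull baseline intensity purely for illustration (the paper's field data mostly favored an exponential fit); the next failure time given virtual age v follows from inverting the cumulative hazard.

```python
import math, random

def kijima1_failures(scale, shape, q, horizon, seed=0):
    """Failure times under a Kijima type-I virtual age process with a
    Weibull(scale, shape) baseline. q=0 is 'as good as new' (renewal);
    q=1 is minimal repair, as in a nonhomogeneous Poisson process."""
    rng = random.Random(seed)
    t, v, times = 0.0, 0.0, []
    while True:
        u = rng.random()
        # inverse-transform sample of the next interarrival given virtual age v:
        # H(v + x) - H(v) = -ln(u), with cumulative hazard H(t) = (t / scale) ** shape
        x = scale * ((v / scale) ** shape - math.log(u)) ** (1.0 / shape) - v
        t += x
        if t > horizon:
            return times
        times.append(t)
        v += q * x   # repair removes a fraction (1 - q) of the age just accrued

failures = kijima1_failures(scale=10.0, shape=2.0, q=0.5, horizon=100.0)
```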

  15. Applicability Of Resources Optimization Model For Mitigating

    African Journals Online (AJOL)

    Dr A.B.Ahmed

    previous work. The entire model can be summarized as the algorithm below. 1 .... The performance metric used is the total sum of the utilities of all the peers in the system at .... Hua, J. S., Huang, D. C., Yen, S. M. and Chena, C. W. (2012) “A dynamic ... Workshop on Quality of Service: 174-192. Yahaya ...

  16. Fuzzy optimization model for land use change

    OpenAIRE

    L. Jahanshahloo; E. Haghi

    2014-01-01

    There are some important questions in the land use change literature, for instance: how much land to allocate to each of a number of land use types in order to maximize (household or individual) rent-paying ability, minimize environmental impacts, or maximize population income. In this paper, we investigate these questions and propose mathematical models to answer them. Since most of the parameters in this process are linguistic, and fuzzy logic is a powerfu...

  17. Quantifying Distributional Model Risk via Optimal Transport

    OpenAIRE

    Blanchet, Jose; Murthy, Karthyek R. A.

    2016-01-01

    This paper deals with the problem of quantifying the impact of model misspecification when computing general expected values of interest. The methodology that we propose is applicable in great generality, in particular, we provide examples involving path dependent expectations of stochastic processes. Our approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured in terms ...

  18. Optimization of a model for the dispersion of air pollution

    DEFF Research Database (Denmark)

    Pedersen, Jens Christian

    2008-01-01

    Current air pollution models have problems conserving the mass of various chemical species, and negative values occasionally appear. Master's student Ayoe Buus Hansen is therefore working on improving the model that DMU uses to describe the transport and dispersion of air pollu... air pollution at all scales in the northern hemisphere by comparing three alternative computational models. ...

  19. Surrogate-Based Optimization of Biogeochemical Transport Models

    Science.gov (United States)

    Prieß, Malte; Slawig, Thomas

    2010-09-01

    First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost, avoiding expensive function and derivative evaluations by using a surrogate model in place of the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening in the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete stepsize in time and space and the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
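
Aggressive Space Mapping itself involves iterative parameter extraction, but the underlying idea of the paper's surrogate, a coarser discretization of the same equations used to locate the optimum cheaply before refining on the fine model, can be sketched simply. The scalar decay equation and all numbers below are placeholders, not the NPZD model.

```python
import numpy as np

def simulate(decay, dt, T=10.0):
    """Explicit-Euler solution of dN/dt = -decay * N (stand-in for the model)."""
    N, out = 1.0, [1.0]
    for _ in range(int(round(T / dt))):
        N += dt * (-decay * N)
        out.append(N)
    return np.array(out)

def misfit(decay, dt, times, vals):
    """Sum of squared differences between model output and observations."""
    traj = simulate(decay, dt)
    idx = np.rint(np.asarray(times) / dt).astype(int)
    return float(np.sum((traj[idx] - vals) ** 2))

# synthetic observations generated from a known "true" parameter
true_decay = 0.3
obs_t = np.array([2.0, 4.0, 6.0, 8.0])
obs_v = np.exp(-true_decay * obs_t)

grid = np.linspace(0.05, 0.6, 56)
# surrogate: the same equations on a much coarser time step
d_surr = grid[np.argmin([misfit(d, 0.5, obs_t, obs_v) for d in grid])]
# local refinement on the fine model around the surrogate optimum
near = grid[np.abs(grid - d_surr) <= 0.05]
d_fine = near[np.argmin([misfit(d, 0.01, obs_t, obs_v) for d in near])]
```

Most of the objective evaluations happen on the cheap coarse model; only the final refinement touches the expensive fine discretization.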

  20. Energy balance of forage consumption by phyllophagous insects: optimization model

    Directory of Open Access Journals (Sweden)

    O. V. Tarasova

    2015-06-01

    Full Text Available A model of optimal food consumption by phytophagous insects is proposed, in which the metabolic costs are presented in the form of two components – the cost of food utilization and the cost of the individual's own metabolism. Two measures were introduced – the «price» of food conversion and the «price» of biomass synthesis of individuals – to assess the effectiveness of food consumption by caterpillars. The proposed approach to the description of food consumption by insects provides exact solutions of the equation of the energy balance of food consumption and determines the effectiveness of consumption and the risk of death of the individual. Experiments on larval feeding in laboratory conditions were carried out to verify the model. Caterpillars of Aporia crataegi L. (Lepidoptera, Pieridae) were the research subjects. The supply-demand balance, the calculated value of the ecological price of consumption, and the efficiency of food consumption for each individual were determined from the experimental data. It was found that the fertility of the female does not depend on the weight of food consumed by it, but is linearly dependent on the food consumption efficiency index: the greater the efficiency of food consumption by an individual, the higher its fertility. The data obtained in the course of the feeding experiments on Aporia crataegi caterpillars were compared with data presented in the works of other authors and recalculated in the proposed consumption model. The calculations allowed estimation of the critical value of the food conversion price below which the energy balance is negative and the existence of the individual is not possible.

  1. Learning optimal quantum models is NP-hard

    Science.gov (United States)

    Stark, Cyril J.

    2018-02-01

    Physical modeling translates measured data into a physical model. Physical modeling is a major objective in physics and is generally regarded as a creative process. How good are computers at solving this task? Here, we show that in the absence of physical heuristics, the inference of optimal quantum models cannot be computed efficiently (unless P=NP ). This result illuminates rigorous limits to the extent to which computers can be used to further our understanding of nature.

  2. Regional gray matter abnormalities in patients with schizophrenia determined with optimized voxel-based morphometry

    Science.gov (United States)

    Guo, XiaoJuan; Yao, Li; Jin, Zhen; Chen, Kewei

    2006-03-01

    This study examined regional gray matter abnormalities across the whole brain in 19 patients with schizophrenia (12 males and 7 females), compared with 11 normal volunteers (7 males and 4 females). Customized brain templates were created in order to improve spatial normalization and segmentation. Automated preprocessing of magnetic resonance imaging (MRI) data was then conducted using optimized voxel-based morphometry (VBM). The statistical voxel-based analysis was implemented using a two-sample t-test model. Compared with normal controls, regional gray matter concentration in patients with schizophrenia was significantly reduced in the bilateral superior temporal gyrus, bilateral middle frontal and inferior frontal gyrus, right insula, precentral and parahippocampal areas, and left thalamus and hypothalamus; however, significant increases in gray matter concentration were not observed anywhere in the brain in the patients. This study confirms and extends some earlier findings on gray matter abnormalities in schizophrenic patients. Previous behavioral and fMRI research on schizophrenia has suggested that cognitive capacity is decreased and self-consciousness weakened in schizophrenic patients. These regional gray matter abnormalities determined through structural MRI with optimized VBM may be potential anatomic underpinnings of schizophrenia.

  3. Optimization of Operations Resources via Discrete Event Simulation Modeling

    Science.gov (United States)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.

  4. Space engineering modeling and optimization with case studies

    CERN Document Server

    Pintér, János

    2016-01-01

    This book presents a selection of advanced case studies that cover a substantial range of issues and real-world challenges and applications in space engineering. Vital mathematical modeling, optimization methodologies and numerical solution aspects of each application case study are presented in detail, with discussions of a range of advanced model development and solution techniques and tools. Space engineering challenges are discussed in the following contexts: •Advanced Space Vehicle Design •Computation of Optimal Low Thrust Transfers •Indirect Optimization of Spacecraft Trajectories •Resource-Constrained Scheduling •Packing Problems in Space •Design of Complex Interplanetary Trajectories •Satellite Constellation Image Acquisition •Re-entry Test Vehicle Configuration Selection •Collision Risk Assessment on Perturbed Orbits •Optimal Robust Design of Hybrid Rocket Engines •Nonlinear Regression Analysis in Space Engineering •Regression-Based Sensitivity Analysis and Robust Design ...

  5. Model-Based Optimization of Velocity Strategy for Lightweight Electric Racing Cars

    Directory of Open Access Journals (Sweden)

    Mirosław Targosz

    2018-01-01

    Full Text Available The article presents a method for optimizing driving strategies aimed at minimizing energy consumption while driving. The method was developed for the needs of an electric powered racing vehicle built for the Shell Eco-marathon (SEM), the most famous and largest race of energy-efficient vehicles. Model-based optimization was used to determine the driving strategy. The numerical model was elaborated in the Simulink environment and includes both the electric vehicle model and the environment, i.e., the race track, the vehicle surroundings, and the atmospheric conditions. The vehicle model itself includes the vehicle dynamics model, numerical models describing the rolling resistance of the tires, the resistance of the propulsion system and aerodynamic phenomena, a model of the electric motor, and the control system. For the purpose of identifying design and functional features of individual subassemblies and components, numerical and bench tests were carried out. The model itself was tested on research tracks to tune it and to determine the calculation parameters. The evolutionary algorithms available in the MATLAB Global Optimization Toolbox were used for optimization. In race conditions, the model was verified during the SEM races in Rotterdam, where the race vehicle scored a result consistent with the results of the simulation calculations. In the following years, the experience gathered by the team earned us the vice-championship in SEM 2016 in London.

  6. An optimal control model of crop thinning in viticulture

    Directory of Open Access Journals (Sweden)

    Schamel Guenter H.

    2016-01-01

    Full Text Available We develop an economic model of cluster thinning in viticulture to control the grape quantity harvested and the grape quality, applying a simple optimal control model with the aim of raising grape quality and the related economic profits. The model maximizes vineyard owner profits and allows us to discuss two relevant scenarios using a phase diagram analysis: (1) when the initial grape quantity is sufficiently small, thinning grapes will not be optimal, and (2) when the initial grape quantity is high enough, it is optimal to thin grapes from the beginning of the relevant planning horizon and to reduce the quantity over time until the stock of grapes arrives at its optimum. Depending on the model's parameters, the “stopping time” for thinning grapes is reached sooner or later. After the stopping time, grape quantity evolves solely according to natural decay. The results relate to observed dynamics in viticulture and to other horticultural crops.

  7. Neutron density optimal control of A-1 reactor analogue model

    International Nuclear Information System (INIS)

    Grof, V.

    1975-01-01

    Two applications of the optimal control of a reactor analogue model are described. Both cases consider the control of neutron density. Control loops containing the on-line controlled process, the reactor of the first Czechoslovak nuclear power plant A-1, are simulated on an analog computer. Two versions of the optimal control algorithm are derived using modern control theory (Pontryagin's maximum principle, the calculus of variations, and Kalman's estimation theory), the minimum-time performance index, and the quadratic performance index. The results of the optimal control analysis are compared with the A-1 reactor's conventional control. (author)

  8. Time dependent optimal switching controls in online selling models

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Cohen, Albert [MICHIGAN STATE UNIV

    2010-01-01

    We present a method to incorporate dishonesty in online selling via a stochastic optimal control problem. In our framework, the seller wishes to maximize her average wealth level W at a fixed time T of her choosing. The corresponding Hamilton-Jacobi-Bellman (HJB) equation is analyzed for a basic case. For more general models, the admissible control set is restricted to a jump process that switches between extreme values. We propose a new approach, where the optimal control problem is reduced to a multivariable optimization problem.

  9. Optimal control of information epidemics modeled as Maki Thompson rumors

    Science.gov (United States)

    Kandhway, Kundan; Kuri, Joy

    2014-12-01

    We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations when the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
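
A minimal forward simulation of the spreading dynamics helps fix ideas. The sketch below is a hedged, Euler-integrated Maki-Thompson model in which a campaigning control u(t) converts ignorants into spreaders, as the controls in the abstract do; the rate constant, horizon and control profiles are hypothetical, and the full Pontryagin/forward-backward-sweep machinery is omitted.

```python
# A hedged Euler-integration sketch of Maki-Thompson rumor dynamics with a
# campaigning control u(t) that converts ignorants directly into spreaders.
# The rate constant, horizon and control profiles are hypothetical.
K_SPREAD = 0.5        # contact rate for both spreading and stifling
DT, HORIZON = 0.01, 20.0

def simulate(u_of_t):
    i, s, r = 0.99, 0.01, 0.0               # ignorants, spreaders, stiflers
    for n in range(int(HORIZON / DT)):
        u = u_of_t(n * DT)
        di = (-K_SPREAD * i * s - u * i) * DT
        ds = (K_SPREAD * i * s - K_SPREAD * s * (s + r) + u * i) * DT
        dr = (K_SPREAD * s * (s + r)) * DT
        i, s, r = i + di, s + ds, r + dr
    return 1.0 - i                          # fraction ever informed

no_campaign = simulate(lambda t: 0.0)
with_campaign = simulate(lambda t: 0.1 if t < 5.0 else 0.0)  # front-loaded effort
```

Even this crude scheme reproduces the qualitative claim: any positive campaigning effort strictly increases the final informed fraction over the uncontrolled rumor.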

  10. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    Science.gov (United States)

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
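
The stability ranking that underlies MSTD can be illustrated with a small numpy sketch: components from several (here synthetic) ICA runs are matched by sign-invariant absolute correlation, and each component of a reference run is scored by its average best match across the other runs. Real usage would feed in actual ICA loadings (e.g., from repeated FastICA runs); the data below are fabricated so that two components are reproducible and one is run-specific.

```python
import numpy as np

def stability_scores(runs):
    """Average best-match |correlation| of each component of runs[0]
    against the components found in every other run (sign-invariant)."""
    ref = runs[0]
    k = ref.shape[0]
    scores = np.zeros(k)
    for other in runs[1:]:
        # corrcoef treats rows as variables; take the ref-vs-other block
        block = np.abs(np.corrcoef(ref, other)[:k, k:])
        scores += block.max(axis=1)
    return scores / (len(runs) - 1)

rng = np.random.default_rng(0)
genes = 500
stable = rng.normal(size=(2, genes))          # components shared across runs
runs = [np.vstack([stable, rng.normal(size=(1, genes))])]   # reference run
for _ in range(3):                            # other runs: noisy, permuted, flipped
    comps = np.vstack([stable + 0.05 * rng.normal(size=stable.shape),
                       rng.normal(size=(1, genes))])
    perm = rng.permutation(3)
    signs = rng.choice([-1.0, 1.0], size=(3, 1))
    runs.append(signs * comps[perm])

scores = stability_scores(runs)               # high for reproducible components
```

Sorting components by such a score is the ranking step; locating the point where the score profile drops qualitatively is what the MSTD criterion adds on top.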

  11. A system-level cost-of-energy wind farm layout optimization with landowner modeling

    International Nuclear Information System (INIS)

    Chen, Le; MacDonald, Erin

    2014-01-01

    Highlights: • We model the role of landowners in determining the success of wind projects. • A cost-of-energy (COE) model with realistic landowner remittances is developed. • These models are included in a system-level wind farm layout optimization. • Basic verification indicates the optimal COE is in line with real-world data. • Land plots crucial to a project’s success can be identified with the approach. - Abstract: This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangle land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability.
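
Because the formulation represents landowner participation as a binary string, its core can be sketched by brute-force enumeration on a hypothetical four-plot farm: each participation string yields a cost of energy, and the minimizer reveals which plots are crucial. The energy figures, fees, and fixed cost below are invented for illustration and are far simpler than the NREL/LBNL/Windustry cost models used in the paper.

```python
from itertools import product

# Hypothetical per-plot data: (annual energy in MWh if the plot's turbines
# are built, annual landowner remittance fee in $).
plots = [(9000, 45000), (7000, 15000), (2000, 90000), (8000, 20000)]
FIXED_COST = 300000.0   # annualized shared capital + O&M ($/yr), illustrative

def coe(participation):
    """Cost of energy ($/MWh) for a binary landowner-participation string."""
    energy = sum(e for (e, _), b in zip(plots, participation) if b)
    if energy == 0:
        return float("inf")          # no participating plots, no project
    cost = FIXED_COST + sum(f for (_, f), b in zip(plots, participation) if b)
    return cost / energy

# Four plots -> only 16 strings, so exhaustive search suffices here.
best = min(product((0, 1), repeat=len(plots)), key=coe)
```

Here the optimum excludes the high-fee, low-energy third plot, illustrating the paper's point that adding land does not automatically improve project economics.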

  12. Optimal Resource Management in a Stochastic Schaefer Model

    OpenAIRE

    Richard Hartman

    2008-01-01

    This paper incorporates uncertainty into the growth function of the Schaefer model for the optimal management of a biological resource. There is a critical value for the biological stock, and it is optimal to do no harvesting if the biological stock is below that critical value and to exert whatever harvesting effort is necessary to prevent the stock from rising above that critical value. The introduction of uncertainty increases the critical value of the stock.
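
The barrier-style policy described above is easy to visualize with a short stochastic simulation: logistic (Schaefer-type) growth with a multiplicative shock, no harvesting below a critical stock, and harvesting of the entire surplus above it. The growth rate, noise level and critical value below are hypothetical, and the critical value is taken as given rather than derived from the model's optimality conditions.

```python
import random

R_G, K_CAP, X_CRIT = 0.5, 1.0, 0.6   # growth rate, carrying capacity, barrier
rng = random.Random(42)

def grow(x):
    """One period of logistic growth with a multiplicative shock."""
    shock = 1.0 + rng.gauss(0.0, 0.05)
    return max(0.0, (x + R_G * x * (1.0 - x / K_CAP)) * shock)

x, harvests = 0.2, []
for _ in range(200):
    x = grow(x)
    h = max(0.0, x - X_CRIT)         # harvest only the surplus above the barrier
    harvests.append(h)
    x -= h                           # stock never ends a period above X_CRIT
```

The simulated stock rises untouched until it reaches the barrier, after which harvesting skims off roughly the stochastic surplus growth each period, exactly the qualitative policy the abstract describes.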

  13. Sparse optimization for inverse problems in atmospheric modelling

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Branda, Martin

    2016-01-01

    Roč. 79, č. 3 (2016), s. 256-266 ISSN 1364-8152 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Inverse modelling * Sparse optimization * Integer optimization * Least squares * European tracer experiment * Free Matlab codes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.404, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/adam-0457037.pdf

  14. TLM modeling and system identification of optimized antenna structures

    Directory of Open Access Journals (Sweden)

    N. Fichtner

    2008-05-01

    Full Text Available The transmission line matrix (TLM) method in conjunction with the genetic algorithm (GA) is presented for the bandwidth optimization of a low profile patch antenna. The optimization routine is supplemented by a system identification (SI) procedure. Via the SI, the model parameters of the structure are estimated, which is used to reduce the total TLM simulation time. The SI utilizes a new stability criterion of the physical poles for the parameter extraction.

  15. Study and optimization of the partial discharges in capacitor model ...

    African Journals Online (AJOL)

    ... experiments methodology for the study of such processes, in view of their modeling and optimization. The obtained result is a mathematical model capable of identifying the parameters and the interactions between .... 5mn; the next landing is situated in 200 V over the voltage of partial discharges appearance and.

  16. Runtime Optimizations for Tree-Based Machine Learning Models

    NARCIS (Netherlands)

    N. Asadi; J.J.P. Lin (Jimmy); A.P. de Vries (Arjen)

    2014-01-01

    Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression

  17. An Extended Optimal Velocity Model with Consideration of Honk Effect

    International Nuclear Information System (INIS)

    Tang Tieqiao; Li Chuanyao; Huang Haijun; Shang Huayan

    2010-01-01

    Based on the OV (optimal velocity) model, we present in this paper an extended OV model that takes the honk effect into consideration. The analytical and numerical results illustrate that the honk effect can improve the velocity and flow of uniform flow, but that the increments depend on the density. (interdisciplinary physics and related areas of science and technology)

  18. Optimal dimensioning model of water distribution systems | Gomes ...

    African Journals Online (AJOL)

    This study is aimed at developing a pipe-sizing model for a water distribution system. The optimal solution minimises the system's total cost, which comprises the hydraulic network capital cost, plus the capitalised cost of pumping energy. The developed model, called Lenhsnet, may also be used for economical design when ...

  19. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  20. Modeling the optimal management of spent nuclear fuel

    International Nuclear Information System (INIS)

    Nachlas, J.A.; Kurstedt, H.A. Jr.; Swindle, D.W. Jr.; Korcz, K.O.

    1977-01-01

    Recent governmental policy decisions dictate that strategies for managing spent nuclear fuel be developed. Two models are constructed to investigate the optimum residence time and the optimal inventory withdrawal policy for fuel material that presently must be stored. The mutual utility of the models is demonstrated through a reference case application.

  1. Variability aware compact model characterization for statistical circuit design optimization

    Science.gov (United States)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.

  2. Fuzzy Simulation-Optimization Model for Waste Load Allocation

    Directory of Open Access Journals (Sweden)

    Motahhare Saadatpour

    2006-01-01

    Full Text Available This paper presents simulation-optimization models for waste load allocation from multiple point sources, which include uncertainty due to vagueness of the parameters and goals. The model employs fuzzy sets with appropriate membership functions to deal with uncertainties due to vagueness. The fuzzy waste load allocation model (FWLAM) incorporates QUAL2E as a water quality simulation model and a Genetic Algorithm (GA) as an optimization tool to find the optimal combination of fraction removal levels for the dischargers and the pollution control agency (PCA). Penalty functions are employed to control the violations in the system. The results demonstrate that the goal of the PCA to achieve the best water quality and the goal of the dischargers to use the full assimilative capacity of the river cannot be satisfied completely, and a compromise solution between these goals is provided. This fuzzy optimization model with a genetic algorithm has been used for a hypothetical problem. Results demonstrate very suitable convergence of the proposed optimization algorithm to the global optimum.
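
The fuzzy compromise at the heart of such a model can be sketched with one discharger and hypothetical linear membership functions: the PCA's satisfaction rises with the fraction of pollutant removed, the discharger's falls, and the max-min compromise maximizes the smaller of the two. A grid search stands in for the genetic algorithm, and the membership breakpoints are invented.

```python
# Hypothetical linear membership functions for a single discharger and the
# pollution control agency (PCA), over the fraction x of pollutant removed.
def mu_quality(x):        # PCA goal: satisfaction rises with removal
    return min(1.0, max(0.0, (x - 0.3) / 0.6))

def mu_discharger(x):     # discharger goal: satisfaction falls with cost
    return min(1.0, max(0.0, (0.8 - x) / 0.6))

def compromise(step=0.001):
    """Max-min aggregation: pick x maximizing the smaller membership."""
    grid = [i * step for i in range(int(1.0 / step) + 1)]
    return max(grid, key=lambda x: min(mu_quality(x), mu_discharger(x)))

x_star = compromise()     # compromise removal fraction
```

With these memberships the compromise sits where the two lines cross (x = 0.55, satisfaction about 0.42), mirroring the abstract's finding that neither goal is met completely.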

  3. Optimization of photometric determination of U with arsenazo III for direct determination of U in steels, soils and waters

    International Nuclear Information System (INIS)

    Kosturiak, A.; Talanova, A.; Rurikova, D.; Kalavska, D.

    1984-01-01

    Conditions were optimized for the reaction of U(VI) with arsenazo III. Recommended as the optimal medium for photometric determination of uranium in the concentration range 0.5 to 50 μg U/ml was the glycine buffer with pH 1.2 to 2.2. The results of the suggested method have better reproducibility than those of the mineral acid procedure used so far. Complexone III should be added to mask the other cations accompanying uranium in steels, waters and rocks. (author)

  4. Decision Support Model for Optimal Management of Coastal Gate

    Science.gov (United States)

    Ditthakit, Pakorn; Chittaladakorn, Suwatana

    2010-05-01

    The coastal areas are intensely settled by human beings owing to the fertility of their natural resources. At present, however, those areas are facing water scarcity problems: inadequate water and poor water quality as a result of saltwater intrusion and inappropriate land-use management. To solve these problems, several measures have been exploited. Coastal gate construction is a structural measure widely performed in several countries, and it requires a plan for suitably operating the gates. Coastal gate operation is a complicated task that usually involves managing multiple purposes, which generally conflict with one another. This paper delineates the methodology and the theories used for developing a decision support model for coastal gate operation scheduling. The developed model is based on a coupled simulation-optimization approach. The weighting optimization technique based on Differential Evolution (DE) was selected for solving the multiple-objective problem. The hydrodynamic and water quality models were repeatedly invoked while searching for the optimal gate operations. In addition, two forecasting models, an autoregressive (AR) model and a harmonic analysis (HA) model, were applied to forecast water levels and tide levels, respectively. To demonstrate its applicability, the model was applied to plan the operations of a hypothetical version of the Pak Phanang coastal gate system, located in Nakhon Si Thammarat province in the southern part of Thailand. It was found that the proposed model could satisfactorily assist decision-makers in operating coastal gates under various environmental, ecological and hydraulic conditions.
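
The weighting technique paired with Differential Evolution can be sketched on a toy one-variable version of the problem: two conflicting quadratic objectives (hypothetical surrogates for salinity control and drainage, not the paper's hydrodynamic models) are scalarized with decision-maker weights, and a hand-rolled DE loop minimizes the weighted price.

```python
import random

def f_salinity(g):   # hypothetical surrogate: keep the gate mostly closed
    return (g - 0.2) ** 2

def f_drainage(g):   # hypothetical surrogate: keep the gate mostly open
    return (g - 0.8) ** 2

W = (0.7, 0.3)       # decision-maker weights for the two objectives

def price(g):
    return W[0] * f_salinity(g) + W[1] * f_drainage(g)

def differential_evolution(fn, lo=0.0, hi=1.0, np_=15, gens=100, f=0.7, seed=3):
    """Minimal DE/rand/1 with greedy selection on a single decision variable."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = min(hi, max(lo, a + f * (b - c)))  # mutate and clip
            if fn(trial) <= fn(pop[i]):                # greedy replacement
                pop[i] = trial
        # binomial crossover is trivial in one dimension and omitted
    return min(pop, key=fn)

g_star = differential_evolution(price)   # analytic optimum: 0.7*0.2 + 0.3*0.8
```

The weighted optimum of the two quadratics is the weight-averaged target (0.38 here), so the DE result can be checked against a closed form, something the real multi-model problem does not allow.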

  5. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework

    Science.gov (United States)

    Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...

  6. OPTIMIZING CONDITIONS FOR SPECTROPHOTOMETRIC DETERMINATION OF TOTAL POLYPHENOLS IN WINES USING FOLIN-CIOCALTEU REAGENT

    Directory of Open Access Journals (Sweden)

    Daniel Bajčan

    2013-02-01

    Full Text Available Wine is a complex beverage that obtains its properties mainly due to synergistic effect of alcohol, organic acids, arbohydrates, as well as the phenolic and aromatic substances. At present days, we can observe an increased interest in the study of polyphenols in wines that have antioxidant, antimicrobial, anti-inflammatory, anti-cancer and many other beneficial effects. Moderate and regular consumption of the red wine especially, with a high content of phenolic compounds, has a beneficial effect on human health. The aim of this work was to optimize conditions for spectrophotometric determination of total polyphenols in winwas to optimize conditions for spectrophotometric determination of total polyphenols in winwas to optimize conditions for spectrophotometric determination of total polyphenols in winwas to optimize conditions for pectrophotometric determination of total polyphenols in wine using Folin-Ciocaulteu reagent. Based on several studies, in order to minimize chemical use and optimize analysis time, we have proposed a method for the determination of total polyphenols using 0.25 ml Folin-Ciocaulteu reagent, 3 ml of 20% Na2CO3 solution and time of coloring complex 1.5 hour. We f

  7. Modeling and energy efficiency optimization of belt conveyors

    International Nuclear Information System (INIS)

    Zhang, Shirong; Xia, Xiaohua

    2011-01-01

    Highlights: → We take an optimization approach to improve the operation efficiency of belt conveyors. → An analytical energy model, originating from ISO 5048, is proposed. → Off-line and on-line parameter estimation schemes are then investigated. → In a case study, six optimization problems are formulated with solutions in simulation. - Abstract: The improvement of the energy efficiency of belt conveyor systems can be achieved at the equipment and operation levels. Specifically, variable speed control, an equipment-level intervention, is recommended to improve the operation efficiency of belt conveyors. However, current implementations mostly focus on lower-level control loops without operational considerations at the system level. This paper takes a model-based optimization approach to improve the efficiency of belt conveyors at the operational level. An analytical energy model, originating from ISO 5048, is first proposed, which lumps all the parameters into four coefficients. Subsequently, both off-line and on-line parameter estimation schemes are applied to identify the new energy model. Simulation results are presented for the estimates of the four coefficients. Finally, optimization is done to achieve the best operation efficiency of belt conveyors under various constraints. Six optimization problems of a typical belt conveyor system are formulated, with solutions in simulation for a case study.
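
The off-line estimation step can be sketched with ordinary least squares. The four-regressor form below is a hypothetical stand-in for the paper's ISO 5048-derived model (the true regressors differ), fitted to synthetic power measurements so the lumped coefficients can be recovered.

```python
import numpy as np

# Hypothetical four-coefficient lumped power model in the spirit of the
# ISO 5048-based model described above: electrical power as a function of
# belt speed V (m/s) and feed rate Q (t/h).  The regressors are illustrative.
rng = np.random.default_rng(7)
theta_true = np.array([120.0, 3.5, 0.04, 1.8])

def regressors(V, Q):
    return np.column_stack([V, Q, Q ** 2 / V, V ** 3])

V = rng.uniform(1.0, 4.0, size=200)          # simulated operating points
Q = rng.uniform(100.0, 1500.0, size=200)
X = regressors(V, Q)
P = X @ theta_true + rng.normal(0.0, 1.0, size=200)   # noisy power readings

# Off-line identification: one least-squares solve over the whole batch.
theta_hat, *_ = np.linalg.lstsq(X, P, rcond=None)
```

An on-line scheme would replace the batch solve with a recursive least-squares update; the identified coefficients then feed the speed-optimization problems.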

  8. A stochastic discrete optimization model for designing container terminal facilities

    Science.gov (United States)

    Zukhruf, Febri; Frazila, Russ Bona; Burhani, Jzolanda Tsavalista

    2017-11-01

    As uncertainty essentially affects the total transportation cost, it remains important in a container terminal that incorporates several modes and transshipment processes. This paper therefore presents a stochastic discrete optimization model for designing a container terminal, which involves decisions on facility improvement actions. The container terminal operation model is constructed by accounting for the variation of demand and facility performance. In addition, to illustrate the conflicting issues that arise in practice in terminal operation, the model also takes into account the possible incremental delay of facilities due to the increasing amount of equipment, especially container trucks. These variations are expected to reflect the uncertainty in container terminal operation. A Monte Carlo simulation is invoked to propagate the variations by following the observed distributions. The problem is constructed within the framework of combinatorial optimization for investigating the optimal facility improvement decision. A new variant of glow-worm swarm optimization (GSO), rarely explored in the transportation field, is proposed for solving the optimization. The model's applicability is tested by considering the actual characteristics of a container terminal.
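
The Monte Carlo evaluation step can be sketched in a few lines: for each candidate improvement action (here, a hypothetical truck fleet size), expected daily cost is estimated by sampling uncertain demand and adding a congestion penalty that grows with fleet size, reproducing the trade-off the abstract describes. All distributions and cost rates are invented, and plain enumeration stands in for the GSO search.

```python
import random

# Hypothetical terminal: uncertain daily demand, per-truck capacity, and a
# congestion delay that grows with the number of trucks in the yard.
rng = random.Random(11)

def expected_cost(n_trucks, n_samples=20000):
    """Monte Carlo estimate of expected daily cost for a given fleet size."""
    total = 0.0
    for _ in range(n_samples):
        demand = rng.gauss(300.0, 60.0)      # containers/day, uncertain
        capacity = n_trucks * 12.0           # moves per truck per day
        backlog = max(0.0, demand - capacity)
        congestion = 0.02 * n_trucks ** 2    # delay from a crowded yard
        total += 50.0 * backlog + 100.0 * congestion + 40.0 * n_trucks
    return total / n_samples

costs = {n: expected_cost(n) for n in (20, 25, 30, 35, 40)}
best_n = min(costs, key=costs.get)
```

Too few trucks leave stochastic backlog, too many congest the yard, so the expected cost is U-shaped over fleet size and an interior choice wins.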

  9. Hierarchical Swarm Model: A New Approach to Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2010-01-01

    Full Text Available This paper presents a novel optimization model called hierarchical swarm optimization (HSO), which simulates the natural hierarchical complex systems from which more complex intelligence can emerge for solving complex problems. The proposed model suggests ways in which the performance of HSO-based algorithms on complex optimization problems can be significantly improved. This improvement is obtained by constructing HSO hierarchies, in which an agent in a higher-level swarm can be composed of swarms of other agents from a lower level, and different swarms at different levels evolve on different spatiotemporal scales. A novel optimization algorithm (named PS2O), based on the HSO model, is instantiated and tested to illustrate the ideas of the HSO model clearly. Experiments were conducted on a set of 17 benchmark optimization problems including both continuous and discrete cases. The results demonstrate remarkable performance of the PS2O algorithm on all chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms.
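
PS2O builds hierarchies of interacting swarms; its basic building block is the canonical particle swarm update. The sketch below is plain single-swarm PSO (not the PS2O hierarchy itself) minimizing a standard sphere benchmark, with the usual inertia and acceleration constants.

```python
import random

def sphere(x):                     # classic continuous benchmark, optimum at 0
    return sum(v * v for v in x)

def pso(fn, dim=5, swarm=30, iters=200, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]              # personal bests
    gbest = min(pbest, key=fn)[:]            # global best
    w, c1, c2 = 0.72, 1.49, 1.49             # constriction-style constants
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fn(pos[i]) < fn(pbest[i]):
                pbest[i] = pos[i][:]
                if fn(pos[i]) < fn(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(sphere)
```

In the hierarchical scheme, each such swarm would itself act as one "agent" of a higher-level swarm evolving on a slower timescale.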

  10. Determination of cognitive development: postnonclassical theoretical model

    Directory of Open Access Journals (Sweden)

    Irina N. Pogozhina

    2015-09-01

    Full Text Available The aim of this research is to develop a postnonclassical model of the content determination of cognitive processes, in which mental processes are considered as open, self-developing, self-organizing systems. Three types of systems (dynamic, statistical, developing) were analysed and compared on the basis of the description of the external and internal characteristics of causation, the types of causal chains (dependent, independent) and their interactions, as well as the nature of the relationships between the elements of the system (hard, probabilistic, mixed). The mechanisms of open non-equilibrium nonlinear (dissipative) systems and four conditions for the emergence of dissipative structures are described. Determination models of the formation and development of mind and behaviour that were developed under various theoretical approaches (associationism, behaviorism, gestaltism, Piaget's psychology of intelligence, Vygotsky's cultural-historical approach, the activity approach and others) are mapped onto each other as models that describe the behaviour of the three system types mentioned above. The development models of the mental sphere are shown to differ by the following criteria: (1) the number of determinants identified; (2) the presence or absence of the system's own activity, which determines whether the model selects not only external but also internal determinants; (3) the types of causal chains (dependent, independent, blended); (4) the types of relationships between the causal chains, which ultimately determine the subsequent type of system determination as deterministic (a hard dynamic pattern) or stochastic (statistical regularity). The continuity of postnonclassical, classical and non-classical models of mental development determination is described, as is the process of gradual refinement, complication and «absorption» of earlier determination models by the later ones. The human mind can be deemed the functioning of an open, developing, non-equilibrium, nonlinear (dissipative) system. The mental sphere is

  11. Optimization of turning process through the analytic flank wear modelling

    Science.gov (United States)

    Del Prete, A.; Franchi, R.; De Lorenzis, D.

    2018-05-01

    In the present work, the approach used for the optimization of the process capabilities for machining Oil&Gas components is described. These components are machined by turning stainless steel casting workpieces. For this purpose, a proper Design Of Experiments (DOE) plan has been designed and executed; as output of the experimentation, data about tool wear have been collected. The DOE has been designed starting from the cutting speed and feed values recommended by the tool manufacturer; the depth of cut has been kept constant. Wear data have been obtained by observing the tool flank wear under an optical microscope, with data acquisition carried out at regular intervals of working time. Through statistical and regression analysis, analytical models of the flank wear and the tool life have been obtained. The optimization approach used is a multi-objective optimization that minimizes the production time and the number of cutting tools used, under a constraint on the allowed flank wear level. The technique used to solve the optimization problem is Multi-Objective Particle Swarm Optimization (MOPS). The optimization results, validated by a further experimental campaign, highlighted the reliability of the work and confirmed the usability of the optimized process parameters and the potential benefit for the company.
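
The regression step can be illustrated with a classic stand-in for the paper's analytical tool-life model: Taylor's equation V·T^n = C, which is linear in logs and can be fitted by ordinary least squares. The data below are synthetic (n = 0.25, C = 400 with small noise), not the experimental DOE data.

```python
import math
import random

# Hypothetical wear data: tool life T (min) observed at several cutting
# speeds V (m/min), generated from a Taylor-type law V * T**n = C and
# then re-fitted from the "measurements".
rng = random.Random(5)
N_TRUE, C_TRUE = 0.25, 400.0
speeds = [150.0, 180.0, 210.0, 240.0, 270.0]
lives = [(C_TRUE / v) ** (1.0 / N_TRUE) * math.exp(rng.gauss(0.0, 0.01))
         for v in speeds]

# Taylor's law is linear in logs: log V = log C - n * log T,
# so a simple least-squares line through (log T, log V) recovers n and C.
xs = [math.log(t) for t in lives]
ys = [math.log(v) for v in speeds]
mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
n_hat = -slope
c_hat = math.exp(mean_y + n_hat * mean_x)
```

Once such a model is identified, each candidate cutting speed maps to a predicted tool life, which is what the multi-objective search trades off against production time.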

  12. An aircraft noise pollution model for trajectory optimization

    Science.gov (United States)

    Barkana, A.; Cook, G.

    1976-01-01

    A mathematical model describing the generation of aircraft noise is developed with the ultimate purpose of reducing noise (noise-optimizing landing trajectories) in terminal areas. While the model is for a specific aircraft (Boeing 737), the methodology would be applicable to a wide variety of aircraft. The model is used to obtain a footprint on the ground inside of which the noise level is at or above 70 dB.

  13. Optimization algorithms intended for self-tuning feedwater heater model

    International Nuclear Information System (INIS)

    Czop, P; Barszcz, T; Bednarz, J

    2013-01-01

    This work presents a self-tuning feedwater heater model. It continues work on a first-principle gray-box methodology applied to diagnostics and condition assessment of power plant components. The objective of this work is to review and benchmark optimization algorithms with regard to the time required to achieve the best model fit to operational power plant data. The paper recommends the most effective algorithm to be used in the model adjustment process.

  14. Group Elevator Peak Scheduling Based on Robust Optimization Model

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2013-08-01

    Full Text Available Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and difficulty. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for a multi-elevator system is proposed. The method is immune to the uncertainty of peak traffic flows; optimal scheduling is realized without knowing the exact number of waiting passengers at each calling floor. Specifically, an energy-saving oriented multi-objective scheduling price is proposed, and an uncertain RO peak scheduling model is built to minimize this price. Because the uncertain RO model cannot be solved directly, it is transformed into a certain model via robust counterparts of the elevator scheduling constraints. Because the solution space of elevator scheduling is enormous, an ant colony algorithm for elevator scheduling is proposed to solve the certain model in a short time. Based on the algorithm, optimal scheduling solutions are found quickly, and group elevators are scheduled according to these solutions. Simulation results show the method can improve scheduling performance effectively in the peak pattern. Efficient operation of group elevators is realized by the RO scheduling method.

  15. Development of optimized dosimetric models for HDR brachytherapy

    International Nuclear Information System (INIS)

    Thayalan, K.; Jagadeesan, M.

    2003-01-01

    High dose rate brachytherapy (HDRB) systems have been in clinical use for more than four decades, particularly for cervical cancer. Optimization is the method of producing a dose distribution which assures that doses are not compromised at the treatment sites whilst reducing the risk of overdosing critical organs. Hence HDRB optimization begins with the desired dose distribution and requires the calculation of the relative weighting factors for each dwell position without changing the source activity. The optimization for treatment of Ca. uterine cervix is simply a duplication of the dose distribution used for low dose rate (LDR) applications. In the present work, two optimized dosimetric models, suited to the local clinical conditions, were proposed and studied thoroughly. These models are named HDR-C and HDR-D, where C and D represent configuration and distance, respectively. The models duplicate exactly the LDR pear-shaped dose distribution, which is the gold standard. The validity of these models was tested in different clinical situations and in actual patients (n=92). The models HDR-C and HDR-D reduce the bladder dose by 11.11% and 10% and the rectal dose by 8% and 7%, respectively. The treatment time is also reduced by 12-14%. In a busy hospital setup, these models can cater to a large number of patients while addressing individual patient geometry. (author)

  16. Creative Destruction and Optimal Patent Life in a Variety-Expanding Growth Model

    OpenAIRE

    Lin, Hwan C.

    2013-01-01

    This paper presents additional channels through which the optimal patent life is determined in an R&D-based endogenous growth model that permits growth of new varieties of consumer goods over time. Its modeling features include an endogenous hazard rate facing incumbent monopolists, the prevalence of research congestion, and the aggregate welfare importance of product differentiation. As a result, a patent’s effective life is endogenized and is less than its legal life. The model is calibrated to a glo...

  17. DETERMINATION OF BRAKING OPTIMAL MODE OF CONTROLLED CUT OF DESIGN GROUP

    Directory of Open Access Journals (Sweden)

    A. S. Dorosh

    2015-06-01

    Full Text Available Purpose. The application of automation systems to the breaking-up process on gravity humps aims to improve the efficiency of their operation, fully satisfy the safety demands of train breaking-up, and improve the working conditions of hump staff. One of the main tasks of such systems is to ensure reliable separation of cuts at all elements of their rolling route to the classification track. This task is a sophisticated optimization problem that has not yet received a final solution, so the task of determining the cuts' braking mode is quite relevant. The purpose of this research is to find the optimal braking mode of the control cut of a design group. Methodology. To achieve this purpose, direct search methods are used, namely the Box complex method. This method does not require smoothness of the objective function, takes its constraints into account, and does not require calculation of the function's derivatives, using only its values. Findings. Using the Box method, an iterative procedure was developed for determining the optimal braking mode of the control cut of a design group. The procedure maximizes the smallest controlled time interval in the group. To evaluate the effectiveness of the designed procedure, a series of simulation experiments determining the braking mode of the control cut of a design group was performed. The results confirmed the efficiency of the developed optimization procedure. Originality. The author formalized the task of optimizing the braking mode of the control cut of a design group, taking into account the separation of the cuts at all elements (switches, retarders) during rolling to the classification track. The problem of determining the optimal braking mode of the control cut was solved. The developed braking mode ensures reliable separation of the cuts of the group not only at the switches but also at the retarders of the brake position. Practical value. The developed procedure can be
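
The Box complex method named above is easy to sketch: keep a "complex" of candidate points inside the bounds, repeatedly reflect the worst point through the centroid of the rest, and retract toward the centroid when the reflection does not improve. The minimal Python illustration below is not the paper's procedure: the one-dimensional objective standing in for the "smallest controlled time interval" and all constants (reflection coefficient 1.3, bounds, iteration budget) are invented for demonstration.

```python
import random

def box_complex_maximize(f, bounds, n_points=6, iters=200, alpha=1.3, seed=1):
    """Minimal Box complex (constrained direct search) maximizing f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_points)]
    for _ in range(iters):
        vals = [f(p) for p in pts]
        worst = vals.index(min(vals))
        others = [p for i, p in enumerate(pts) if i != worst]
        centroid = [sum(p[d] for p in others) / len(others) for d in range(dim)]
        # Reflect the worst point through the centroid, clamped to the bounds.
        new = [min(max(centroid[d] + alpha * (centroid[d] - pts[worst][d]),
                       bounds[d][0]), bounds[d][1]) for d in range(dim)]
        while f(new) <= vals[worst]:
            # Still the worst: retract halfway toward the centroid.
            new = [(new[d] + centroid[d]) / 2.0 for d in range(dim)]
            if max(abs(new[d] - centroid[d]) for d in range(dim)) < 1e-12:
                break
        pts[worst] = new
    best = max(pts, key=f)
    return best, f(best)

# Invented stand-in objective: the smaller of two "controlled time intervals",
# each a function of a single braking-speed variable v in [0, 10].
def smallest_interval(p):
    v = p[0]
    return min(4.0 - 0.1 * (v - 3.0) ** 2, 2.0 + 0.3 * v)

best, val = box_complex_maximize(smallest_interval, [(0.0, 10.0)])
```

Note how the method needs only function values, matching the abstract's point that no derivatives (and no smoothness) are required.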

  18. Global Optimization of Ventricular Myocyte Model to Multi-Variable Objective Improves Predictions of Drug-Induced Torsades de Pointes

    Directory of Open Access Journals (Sweden)

    Trine Krogh-Madsen

    2017-12-01

    Full Text Available In silico cardiac myocyte models present powerful tools for drug safety testing and for predicting phenotypical consequences of ion channel mutations, but their accuracy is sometimes limited. For example, several models describing human ventricular electrophysiology perform poorly when simulating effects of long QT mutations. Model optimization represents one way of obtaining models with stronger predictive power. Using a recent human ventricular myocyte model, we demonstrate that model optimization to clinical long QT data, in conjunction with physiologically-based bounds on intracellular calcium and sodium concentrations, better constrains model parameters. To determine if the model optimized to congenital long QT data better predicts risk of drug-induced long QT arrhythmogenesis, in particular Torsades de Pointes risk, we tested the optimized model against a database of known arrhythmogenic and non-arrhythmogenic ion channel blockers. When doing so, the optimized model provided an improved risk assessment. In particular, we demonstrate an elimination of false-positive outcomes generated by the baseline model, in which simulations of non-torsadogenic drugs, in particular verapamil, predict action potential prolongation. Our results underscore the importance of currents beyond those directly impacted by a drug block in determining torsadogenic risk. Our study also highlights the need for rich data in cardiac myocyte model optimization and substantiates such optimization as a method to generate models with higher accuracy of predictions of drug-induced cardiotoxicity.

  19. Discounted cost model for condition-based maintenance optimization

    International Nuclear Information System (INIS)

    Weide, J.A.M. van der; Pandey, M.D.; Noortwijk, J.M. van

    2010-01-01

    This paper presents methods to evaluate the reliability and optimize the maintenance of engineering systems damaged by shocks or transients arriving randomly in time, where the overall degradation is modeled as a cumulative stochastic point process. The paper presents a conceptually clear and comprehensive derivation of formulas for computing the discounted cost associated with a maintenance policy combining both condition-based and age-based criteria for preventive maintenance. The proposed discounted cost model provides a more realistic basis for optimizing maintenance policies than those based on the asymptotic, non-discounted cost rate criterion.
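
As a rough illustration of the discounted-cost idea (not the paper's closed-form derivation), the sketch below estimates by Monte Carlo the expected discounted cost of a combined condition-based/age-based replacement rule for a compound-Poisson shock process; every rate, cost, and threshold here is an invented stand-in.

```python
import math
import random

def expected_discounted_cost(threshold, age_limit, horizon=200.0, rate=0.05,
                             lam=1.0, mean_jump=1.0, fail_level=10.0,
                             cp=1.0, cf=5.0, n_runs=2000, seed=7):
    """Monte Carlo estimate of the expected discounted maintenance cost.
    Shocks arrive as a Poisson process (rate lam); each adds an exponential
    damage increment. Renew preventively (cost cp) when damage >= threshold
    or age >= age_limit; renew correctively (cost cf) on failure."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t, cost, last_renewal, damage = 0.0, 0.0, 0.0, 0.0
        while True:
            t += rng.expovariate(lam)              # next shock arrival time
            if t > horizon:
                break
            while t - last_renewal > age_limit:    # age-based renewal(s) first
                renew_t = last_renewal + age_limit
                cost += cp * math.exp(-rate * renew_t)
                last_renewal, damage = renew_t, 0.0
            damage += rng.expovariate(1.0 / mean_jump)
            if damage >= fail_level:               # failure: corrective renewal
                cost += cf * math.exp(-rate * t)
                last_renewal, damage = t, 0.0
            elif damage >= threshold:              # condition-based preventive
                cost += cp * math.exp(-rate * t)
                last_renewal, damage = t, 0.0
        total += cost
    return total / n_runs

cost_condition = expected_discounted_cost(threshold=7.0, age_limit=float("inf"))
cost_corrective = expected_discounted_cost(threshold=10.0, age_limit=float("inf"))
```

With these invented numbers, cheap preventive renewals at a condition threshold come out well below the run-to-failure policy, which is the kind of trade-off the discounted model is meant to quantify.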

  20. Optimization of morphing flaps based on fluid structure interaction modeling

    DEFF Research Database (Denmark)

    Barlas, Athanasios; Akay, Busra

    2018-01-01

    This article describes the design optimization of morphing trailing edge flaps for wind turbines with ‘smart blades’. A high fidelity Fluid Structure Interaction (FSI) simulation framework is utilized, comprised of 2D Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD) models....... A coupled aero-structural simulation of a 10% chordwise length morphing trailing edge flap for a 4 MW wind turbine rotor is carried out and response surfaces are produced with respect to the flap internal geometry design parameters for the design conditions. Surrogate model based optimization is applied...

  1. Innovative supply chain optimization models with multiple uncertainty factors

    DEFF Research Database (Denmark)

    Choi, Tsan Ming; Govindan, Kannan; Li, Xiang

    2017-01-01

    Uncertainty is an inherent factor that affects all dimensions of supply chain activities. In today’s business environment, initiatives to deal with one specific type of uncertainty might not be effective since other types of uncertainty factors and disruptions may be present. These factors relate...... to supply chain competition and coordination. Thus, to achieve a more efficient and effective supply chain requires the deployment of innovative optimization models and novel methods. This preface provides a concise review of critical research issues regarding innovative supply chain optimization models...

  2. Optimization of DNA Sensor Model Based Nanostructured Graphene Using Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2013-01-01

    Full Text Available It has been predicted that graphene nanomaterials will be among the candidate materials for post-silicon electronics due to their astonishing properties such as high carrier mobility, thermal conductivity, and biocompatibility. Graphene is a zero-gap semimetal nanomaterial with a demonstrated ability to serve as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect DNA adsorption and thereby examine the DNA concentration in an analyte solution. In particular, there is an essential need for developing cost-effective DNA sensors, given their suitability for the diagnosis of genetic or pathogenic diseases. In this paper, the particle swarm optimization technique is employed to optimize the analytical model of a graphene-based DNA sensor used for electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. The comparison of the optimized model with the experimental data shows an accuracy of more than 95%, which verifies that the optimized model is reliable for use in any application of the graphene-based DNA sensor.
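
A minimal particle swarm optimizer is a few dozen lines. The sketch below fits two parameters of a toy linear "sensor response" to synthetic data; the inertia/acceleration constants and the stand-in model are assumptions for illustration, not the paper's analytical DNA-sensor model.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal particle swarm optimizer over box bounds (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Hypothetical calibration task: fit slope/intercept of a linear "sensor
# response" to synthetic measurements (stand-ins for real concentration data).
data = [(c, 2.0 * c + 1.0) for c in (0.01, 0.1, 1.0, 10.0, 100.0)]
sse = lambda p: sum((p[0] * c + p[1] - v) ** 2 for c, v in data)
params, err = pso_minimize(sse, [(0.0, 5.0), (0.0, 5.0)])
```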

  3. Multi-model Simulation for Optimal Control of Aeroacoustics.

    Energy Technology Data Exchange (ETDEWEB)

    Collis, Samuel Scott; Chen, Guoquan

    2005-05-01

    Flow-generated noise, especially rotorcraft noise, has been a serious concern for both commercial and military applications. A particularly important noise source for rotorcraft is Blade-Vortex-Interaction (BVI) noise, a high amplitude, impulsive sound that often dominates other rotorcraft noise sources. Usually BVI noise is caused by the unsteady flow changes around various rotor blades due to interactions with vortices previously shed by the blades. A promising approach for reducing BVI noise is to use on-blade controls, such as suction/blowing, micro-flaps/jets, and smart structures. Because the design and implementation of experiments to evaluate such systems are very expensive, efficient computational tools coupled with optimal control systems are required to explore the relevant physics and evaluate the feasibility of using various micro-fluidic devices before committing to hardware. In this thesis the research is to formulate and implement efficient computational tools for the development and study of optimal control and design strategies for complex flow and acoustic systems, with emphasis on rotorcraft applications, especially the BVI noise control problem. The main purpose of aeroacoustic computations is to determine the sound intensity and directivity far away from the noise source. However, the computational cost of using a high-fidelity flow-physics model across the full domain is usually prohibitive, and it might also be less accurate because of numerical diffusion and other problems. Taking advantage of the multi-physics and multi-scale structure of this aeroacoustic problem, we develop a multi-model, multi-domain (near-field/far-field) method based on a discontinuous Galerkin discretization. In this approach the coupling of multi-domains and multi-models is achieved by weakly enforcing continuity of normal fluxes across a coupling surface. For our aeroacoustics control problem of interest, the adjoint equations that determine the sensitivity of the cost

  4. A kriging metamodel-assisted robust optimization method based on a reverse model

    Science.gov (United States)

    Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao

    2018-02-01

    The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.

  5. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
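
Hyperopt's actual API (fmin, search spaces built from hp.choice and hp.uniform, the tpe algorithm) is described in the paper; the stdlib-only sketch below merely mimics the central idea of a nested search space, with plain random search standing in for sequential model-based optimization. The space, the toy loss, and its minimum are invented.

```python
import random

# Toy nested search space in the spirit of Hyperopt's hp.choice/hp.uniform:
# first pick a "model family", then draw its own continuous hyperparameters.
SPACE = {
    "svm": {"log_C": (-3.0, 3.0)},
    "knn": {"k": (1.0, 30.0)},
}

def toy_loss(kind, params):
    # Invented validation loss with its global minimum at svm, log_C = 1.
    if kind == "svm":
        return 0.10 + 0.05 * (params["log_C"] - 1.0) ** 2
    return 0.20 + 0.01 * abs(params["k"] - 7.0)

def fmin_random(space, loss, n_trials=300, seed=5):
    """Random-search stand-in for fmin over a nested search space."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        kind = rng.choice(sorted(space))
        params = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in space[kind].items()}
        val = loss(kind, params)
        if best is None or val < best[0]:
            best = (val, kind, params)
    return best

best_val, best_kind, best_params = fmin_random(SPACE, toy_loss)
```

The point mirrors the Hyperopt-Sklearn view quoted in the abstract: the choice of model family and its hyperparameters form one joint search problem.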

  6. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    Science.gov (United States)

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  7. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  8. Determination of hydraulic properties of unsaturated soil via inverse modeling

    International Nuclear Information System (INIS)

    Kodesova, R.

    2004-01-01

    A method for determining the hydraulic properties of unsaturated soil via inverse modeling is presented. A modified cone penetrometer has been designed to inject water into the soil through a screen and measure the progress of the wetting front with two tensiometer rings positioned above the screen. Cumulative inflow and pressure head readings are analyzed to obtain estimates of the hydraulic parameters describing K(h) and θ(h). Optimization results for tests at one site are used to demonstrate the possibility of evaluating either the wetting branches of the soil hydraulic properties alone, or the wetting and drying curves simultaneously, via analysis of different parts of the experiment. The optimization results are compared to the results of standard laboratory and field methods. (author)
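
A toy version of the inverse step can be written as a least-squares fit of a retention curve to synthetic observations. The sketch below uses the standard van Genuchten form for θ(h) and a coarse grid search as a stand-in for the optimizer; the parameter values and observation heads are invented, and the "data" are generated from the model itself so the fit recovers them exactly.

```python
def theta_vg(h, alpha, n, theta_r=0.05, theta_s=0.45):
    """van Genuchten water retention curve (Mualem restriction m = 1 - 1/n)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)
    return theta_r + (theta_s - theta_r) * se

# Synthetic "tensiometer" observations generated with alpha=0.08, n=1.6.
obs = [(h, theta_vg(h, 0.08, 1.6)) for h in (-10, -30, -100, -300, -1000)]

def sse(alpha, n):
    """Sum of squared residuals between model and observations."""
    return sum((theta_vg(h, alpha, n) - th) ** 2 for h, th in obs)

# Coarse grid search as a stand-in for the inverse-optimization step.
best = min(((sse(a / 1000.0, nn / 100.0), a / 1000.0, nn / 100.0)
            for a in range(10, 201, 2) for nn in range(110, 301, 2)),
           key=lambda t: t[0])
err, alpha_hat, n_hat = best
```

In a real inverse problem the observations come from the instrument and a gradient-based or global optimizer replaces the grid, but the residual-minimization structure is the same.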

  9. A model for optimization of process integration investments under uncertainty

    International Nuclear Information System (INIS)

    Svensson, Elin; Stroemberg, Ann-Brith; Patriksson, Michael

    2011-01-01

    The long-term economic outcome of energy-related industrial investment projects is difficult to evaluate because of uncertain energy market conditions. In this article, a general, multistage, stochastic programming model for the optimization of investments in process integration and industrial energy technologies is proposed. The problem is formulated as a mixed-binary linear programming model where uncertainties are modelled using a scenario-based approach. The objective is to maximize the expected net present value of the investments, which enable heat savings and decreased energy imports or increased energy exports at an industrial plant. The proposed modelling approach enables long-term planning of industrial, energy-related investments through the simultaneous optimization of immediate and later decisions. The stochastic programming approach is also suitable for modelling possibly complex process integration constraints. The general model formulation presented here is a suitable basis for more specialized case studies dealing with optimization of investments in energy efficiency. -- Highlights: → Stochastic programming approach to long-term planning of process integration investments. → Extensive mathematical model formulation. → Multi-stage investment decisions and scenario-based modelling of uncertain energy prices. → Results illustrate how investments made now affect later investment and operation opportunities. → Approach for evaluation of robustness with respect to variations in probability distribution.
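
A drastically simplified, single-stage version of the scenario idea can be sketched in a few lines: choose a set of binary investments to maximize expected net present value over discrete energy-price scenarios. The investment options, prices, and probabilities below are invented, and the sketch omits the multistage recourse decisions that are the point of the full model.

```python
from itertools import chain, combinations

# Hypothetical data: (investment cost, annual energy savings) per option,
# both in arbitrary consistent units.
INVEST = {"heat_exchanger": (3.0, 1.2), "new_dryer": (5.0, 2.0), "turbine": (8.0, 2.5)}
# Energy-price scenarios (discounted value of one unit of annual savings
# over the planning horizon) with their probabilities.
SCENARIOS = [(2.0, 0.3), (4.0, 0.5), (6.0, 0.2)]

def expected_npv(chosen):
    """Expected net present value of a set of investments over all scenarios."""
    cost = sum(INVEST[i][0] for i in chosen)
    savings = sum(INVEST[i][1] for i in chosen)
    return sum(p * price * savings for price, p in SCENARIOS) - cost

options = list(INVEST)
subsets = chain.from_iterable(combinations(options, r)
                              for r in range(len(options) + 1))
best = max(subsets, key=expected_npv)   # brute force instead of a MILP solver
```

With these numbers every option has positive expected NPV, so the optimizer selects all three; in the full model, scenario-dependent later decisions would make the stochastic formulation genuinely different from an expected-value one.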

  10. Combining spatial modeling and choice experiments for the optimal spatial allocation of wind turbines

    International Nuclear Information System (INIS)

    Drechsler, Martin; Ohl, Cornelia; Meyerhoff, Juergen; Eichhorn, Marcus; Monsees, Jan

    2011-01-01

    Although wind power is currently the most efficient source of renewable energy, the installation of wind turbines (WT) in landscapes often leads to conflicts in the affected communities. We propose that such conflicts can be mitigated by a welfare-optimal spatial allocation of WT in the landscape so that a given energy target is reached at minimum social costs. The energy target is motivated by the fact that wind power production is associated with relatively low CO2 emissions. Social costs comprise energy production costs as well as external costs caused by harmful impacts on humans and biodiversity. We present a modeling approach that combines spatially explicit ecological-economic modeling and choice experiments to determine the welfare-optimal spatial allocation of WT in West Saxony, Germany. The welfare-optimal sites balance production and external costs. Results indicate that in the welfare-optimal allocation the external costs represent about 14% of the total costs (production costs plus external costs). Optimizing wind power production without consideration of the external costs would lead to a very different allocation of WT that would marginally reduce the production costs but strongly increase the external costs and thus lead to substantial welfare losses. - Highlights: → We combine modeling and economic valuation to optimally allocate wind turbines. → Welfare-optimal allocation balances energy production costs and external costs. → External costs (impacts on the environment) can be substantial. → Ignoring external costs leads to suboptimal allocations and welfare losses.
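
The welfare argument can be illustrated with a toy allocation: choose candidate sites to meet an energy target at minimum social cost (production plus external), and compare against a production-cost-only ranking. All site data below are invented; a greedy per-unit-cost ranking stands in for the paper's spatially explicit optimization.

```python
# Hypothetical candidate sites: (name, energy per year, production cost, external cost)
SITES = [
    ("A", 30.0, 20.0, 2.0),
    ("B", 25.0, 18.0, 9.0),
    ("C", 20.0, 12.0, 1.0),
    ("D", 15.0, 8.0, 6.0),
    ("E", 10.0, 5.0, 0.5),
]
TARGET = 60.0   # energy target to be met

def greedy_allocation(sites, target, include_external=True):
    """Pick sites by social (or production-only) cost per unit of energy until
    the target is met; return (chosen site names, total social cost)."""
    def unit_cost(s):
        _, energy, prod, ext = s
        return (prod + ext if include_external else prod) / energy
    chosen, energy, social = [], 0.0, 0.0
    for s in sorted(sites, key=unit_cost):
        if energy >= target:
            break
        chosen.append(s[0])
        energy += s[1]
        social += s[2] + s[3]   # social cost always counts both components
    return chosen, social

welfare_sites, welfare_cost = greedy_allocation(SITES, TARGET, True)
prodonly_sites, prodonly_cost = greedy_allocation(SITES, TARGET, False)
```

With these invented numbers the welfare-aware ranking meets the target at social cost 40.5 versus 54.5 for the production-only ranking, echoing the abstract's point that ignoring external costs causes welfare losses.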

  11. DETERMINATION OF DRESS ROLL OPTIMAL RADIUS WHILE PRODUCING PARTS WITH TROCHOIDAL PROFILE

    Directory of Open Access Journals (Sweden)

    E. N. Yankevich

    2008-01-01

    Full Text Available The paper considers determination of the optimal dress roll radius for producing parts having a trochoidal profile by grinding with a profiled grinding disk, the disk profile being cut in by diamond dressing. Two methods for calculating the optimal dress roll radius are proposed in the paper. Using the satellite gear of a planetary pin reducer, whose profile is a trochoid, it is shown that the results obtained by the two proposed methods agree with each other.

  12. OPTIMAL TRAINING POLICY FOR PROMOTION - STOCHASTIC MODELS OF MANPOWER SYSTEMS

    Directory of Open Access Journals (Sweden)

    V.S.S. Yadavalli

    2012-01-01

    Full Text Available In this paper, the optimal planning of manpower training programmes in a manpower system with two grades is discussed. The planning of manpower training within a given organization involves a trade-off between training costs and expected return. These planning problems are examined through models that reflect the random nature of manpower movement between two grades. To be specific, the system consists of two grades, grade 1 and grade 2. Any number of persons in grade 2 can be sent for training; after completion of the training, they remain in grade 2 and are promoted as and when vacancies arise in grade 1. Vacancies arise in grade 1 only by wastage: a person in grade 1 can leave the system with probability p. Vacancies are filled by persons in grade 2 who have completed the training. It is assumed that there is a perfect passing rate and that the sizes of both grades are fixed. Assuming a finite planning horizon T, the underlying stochastic process is identified as a finite-state Markov chain, and using dynamic programming, a policy is evolved to determine how many persons should be sent for training at any time k so as to minimize the total expected cost over the entire planning period T.
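
A finite-horizon dynamic program of this flavor fits in a page. The sketch below is not the paper's exact model: it invents a per-period decision of how many staff to send to (instantaneous) training, binomial grade-1 wastage, and training/shortage costs, then minimizes total expected cost by backward recursion.

```python
from functools import lru_cache
from math import comb

N1, P_LEAVE = 5, 0.2          # grade-1 size; per-person probability of leaving
T = 6                          # planning horizon (periods)
C_TRAIN, C_SHORT = 1.0, 10.0   # cost per trainee / per unfilled grade-1 vacancy
MAX_POOL = 8                   # cap on trained staff waiting in grade 2

def vac_prob(v):
    """P(v vacancies in a period): binomial wastage from grade 1."""
    return comb(N1, v) * P_LEAVE ** v * (1 - P_LEAVE) ** (N1 - v)

@lru_cache(maxsize=None)
def value(k, pool):
    """Minimum expected cost from period k on, with `pool` trained staff waiting."""
    if k == T:
        return 0.0
    best = float("inf")
    for send in range(MAX_POOL - pool + 1):    # decision: how many to train now
        cost = C_TRAIN * send
        for v in range(N1 + 1):                # random number of vacancies
            filled = min(pool + send, v)
            cost += vac_prob(v) * (C_SHORT * (v - filled)
                                   + value(k + 1, pool + send - filled))
        best = min(best, cost)
    return best

opt_cost = value(0, 0)
# Reference policy: never train, pay every vacancy as a shortage.
never_train = T * sum(vac_prob(v) * C_SHORT * v for v in range(N1 + 1))
```

Backward recursion over the (period, pool) state space is exactly the structure the abstract describes, here with invented numbers in place of the paper's cost data.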

  13. Optimal Model-Based Control in HVAC Systems

    DEFF Research Database (Denmark)

    Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik

    2008-01-01

    This paper presents optimal model-based control of a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system is made of two heat exchangers: an air-to-air heat exchanger (a rotary wheel heat recovery) and a water-to-air heat exchanger. First, a dynamic model of the HVAC system is developed. Then the optimal control structure is designed and implemented. The HVAC system is split into two subsystems. By selecting the right set-points and appropriate cost functions for each subsystem controller, the optimal control strategy is designed to guarantee minimum thermal and electrical energy consumption. Finally, the controller is applied to control the mentioned HVAC system and the results show that the expected goals are fulfilled.

  14. Optimization model for rotor blades of horizontal axis wind turbines

    Institute of Scientific and Technical Information of China (English)

    LIU Xiong; CHEN Yan; YE Zhiquan

    2007-01-01

    This paper presents an optimization model for the rotor blades of horizontal axis wind turbines. The model refers to the wind speed distribution function of the specific wind site, with the objective of maximizing annual energy output. To speed up the search process and guarantee a globally optimal result, the extended compact genetic algorithm (ECGA) is used to carry out the search. Compared with the simple genetic algorithm, ECGA runs much faster and obtains more accurate results with a much smaller population size and fewer function evaluations. Using the developed optimization program, the blades of a 1.3 MW stall-regulated wind turbine are designed. Compared with the existing blades, the designed blades have noticeably better aerodynamic performance.
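
The abstract's search idea can be illustrated with a plain genetic algorithm (elitism, tournament selection, blend crossover, Gaussian mutation) rather than the ECGA actually used in the paper. The two design variables and the concave stand-in for annual energy production below are invented.

```python
import random

def ga_maximize(f, bounds, pop=40, gens=80, mut=0.15, seed=11):
    """Plain genetic algorithm maximizing f over box bounds (stand-in for ECGA)."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=f, reverse=True)
        nxt = scored[:2]                                   # elitism: keep top two
        while len(nxt) < pop:
            a, b = (max(rng.sample(scored, 3), key=f)      # two size-3 tournaments
                    for _ in range(2))
            child = [(x + y) / 2.0 for x, y in zip(a, b)]  # blend crossover
            for d in range(dim):
                if rng.random() < mut:                     # Gaussian mutation
                    lo, hi = bounds[d]
                    child[d] = min(max(child[d] + rng.gauss(0.0, 0.1 * (hi - lo)),
                                       lo), hi)
            nxt.append(child)
        P = nxt
    best = max(P, key=f)
    return best, f(best)

# Invented stand-in for annual energy production as a function of two blade
# design variables (e.g. tip chord and twist), peaking at (0.8, 5.0).
aep = lambda x: 1000.0 - 40.0 * (x[0] - 0.8) ** 2 - 2.0 * (x[1] - 5.0) ** 2
best, val = ga_maximize(aep, [(0.2, 2.0), (0.0, 15.0)])
```

In the real design problem the fitness evaluation would be a blade-element aerodynamic simulation weighted by the site's wind speed distribution, not a closed-form function.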

  15. Modeling of biological intelligence for SCM system optimization.

    Science.gov (United States)

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms.

  16. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in the FDTD formulation has a significant impact on method performance and required simulation time. Since abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposing demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively while optimizing device parameters. The optimized source models proposed here are realized and tested within an in-house developed FDTD simulation environment.

  17. Modeling of Biological Intelligence for SCM System Optimization

    Directory of Open Access Journals (Sweden)

    Shengyong Chen

    2012-01-01

    Full Text Available This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms.

  18. Modeling of Biological Intelligence for SCM System Optimization

    Science.gov (United States)

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724

  19. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that compromise among more than one predefined objective simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, based on a sequential probability ratio test with an indifference zone. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.

  20. Comparisons of criteria in the assessment model parameter optimizations

    International Nuclear Information System (INIS)

    Liu Xinhe; Zhang Yongxing

    1993-01-01

    Three criteria (chi-square, relative chi-square and correlation coefficient) used in the model parameter optimization (MPO) process, which aims at a significant reduction of prediction uncertainties, were discussed and compared to each other with the aid of a well-controlled tracer experiment

  1. The Optimal Portfolio Selection Model under g-Expectation

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    complicated and sophisticated, the optimal solution turns out to be surprisingly simple: the payoff of a portfolio of two binary claims. I also give the economic meaning of my model and a comparison with the model in the work of Jin and Zhou (2008).

  2. Real-Time Optimization for Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Edlund, Kristian; Frison, Gianluca

    2012-01-01

    In this paper, we develop an efficient homogeneous and self-dual interior-point method for the linear programs arising in economic model predictive control. To exploit structure in the optimization problems, the algorithm employs a highly specialized Riccati iteration procedure. Simulations show...

  3. Multiscale modeling and topology optimization of poroelastic actuators

    DEFF Research Database (Denmark)

    Andreasen, Casper Schousboe; Sigmund, Ole

    2012-01-01

    This paper presents a method for design of optimized poroelastic materials which under internal pressurization turn into actuators for application in, for example, linear motors. The actuators are modeled in a two-scale fluid–structure interaction approach. The fluid saturated material microstruc...

  4. A study on a new algorithm to optimize ball mill system based on modeling and GA

    International Nuclear Information System (INIS)

    Wang Heng; Jia Minping; Huang Peng; Chen Zuoliang

    2010-01-01

    Aiming at the disadvantages of conventional optimization methods for ball mill pulverizing systems, a novel approach based on an RBF neural network and a genetic algorithm is proposed in the present paper. Firstly, the experiments and the measurement of fill level based on vibration signals of the mill shell were introduced. Then, the main factors which affected the power consumption of the ball mill pulverizing system were analyzed, and the input variables of the RBF neural network were determined. The RBF neural network was used to map the complex non-linear relationship between electric consumption and process parameters, and a non-linear model of power consumption was built. Finally, the model was optimized by a genetic algorithm, and the optimal working conditions of the ball mill pulverizing system were determined. The results demonstrate that the method is reliable and practical, and can reduce electric consumption markedly and effectively.

  5. A Multiobjective Optimization Model in Automotive Supply Chain Networks

    Directory of Open Access Journals (Sweden)

    Abdolhossein Sadrnia

    2013-01-01

    In the new decade, green investment decisions are attracting more interest in supply chain design due to hidden economic benefits and environmental legislative barriers. In this paper, a supply chain network design problem with both economic and environmental concerns is presented. A multiobjective optimization model that captures the trade-off between total logistics cost and CO2 emissions is therefore proposed. With regard to the complexity of logistic networks, a new multiobjective swarm intelligence algorithm, known as the multiobjective gravitational search algorithm (MOGSA), has been implemented for solving the proposed mathematical model. To evaluate the effectiveness of the model, a comprehensive set of numerical experiments is presented. The results obtained show that the proposed model can be applied as an effective tool in strategic planning for optimizing cost and CO2 emissions in an environmentally friendly automotive supply chain.

  6. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

    Science.gov (United States)

    Wang, Zhihao; Yi, Jing

    2016-01-01

    To address the shortcoming of the fuzzy c-means algorithm (FCM) of needing to know the number of clusters in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determined the possible maximum number of clusters instead of using the empirical rule √n, and obtained the optimal initial cluster centroids, mitigating the limitation of FCM that randomly selected cluster centroids lead the convergence result to a local minimum. Secondly, by introducing a penalty function, this paper proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensured that when the number of clusters approached the number of objects in the dataset, the value of the clustering validity index did not monotonically decrease toward zero, so that the estimate of the optimal number of clusters retained its robustness and decision power. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by an iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determined the optimal number of clusters, but also reduced the iterations of FCM with a stable clustering result. PMID:28042291
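    For readers unfamiliar with the base algorithm the paper modifies, a minimal one-dimensional fuzzy c-means sketch follows (plain FCM with a fixed cluster count and a simple deterministic initialization, not the paper's density-based, self-adaptive variant; the fuzzifier `m = 2` and the iteration count are illustrative choices):

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=100):
    """Minimal 1-D fuzzy c-means with a fixed number of clusters c >= 2.

    Centers are initialized by spreading evenly over the data range
    (a simple deterministic choice). Returns the sorted centroids.
    """
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * j / (c - 1) for j in range(c)]
    for _ in range(iters):
        # u[i][j]: membership of point i in cluster j (fuzzifier m)
        u = []
        for x in xs:
            d = [max(abs(x - v), 1e-12) for v in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c)) for j in range(c)])
        # membership-weighted mean update of each centroid
        centers = [sum(u[i][j] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][j] ** m for i in range(len(xs)))
                   for j in range(c)]
    return sorted(centers)
```

    On two well-separated groups the centroids settle near the group means; the paper's contribution is choosing `c` itself, which this sketch takes as given.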

  7. On the role of modeling parameters in IMRT plan optimization

    International Nuclear Information System (INIS)

    Krause, Michael; Scherrer, Alexander; Thieke, Christian

    2008-01-01

    The formulation of optimization problems in intensity-modulated radiotherapy (IMRT) planning comprises the choice of various values such as function-specific parameters or constraint bounds. In current inverse planning programs that yield a single treatment plan for each optimization, it is often unclear how strongly these modeling parameters affect the resulting plan. This work investigates the mathematical concepts of elasticity and sensitivity to deal with this problem. An artificial planning case with a horseshoe-shaped target with different opening angles surrounding a circular risk structure is studied. As evaluation functions, the generalized equivalent uniform dose (EUD) and the average underdosage below and average overdosage beyond certain dose thresholds are used. A single IMRT plan is calculated for an exemplary parameter configuration. The elasticity and sensitivity of each parameter are then calculated without re-optimization, and the results are numerically verified. The results show the following. (1) Elasticity can quantify the influence of a modeling parameter on the optimization result in terms of how strongly the objective function value varies under modifications of the parameter value. It can also describe how strongly the geometry of the involved planning structures affects the optimization result. (2) Based on the current parameter settings and corresponding treatment plan, sensitivity analysis can predict the optimization result for modified parameter values without re-optimization, and it can estimate the value intervals in which such predictions are valid. In conclusion, elasticity and sensitivity can provide helpful tools in inverse IMRT planning to identify the most critical parameters of an individual planning problem and to modify their values in an appropriate way.
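    Elasticity in this sense is the standard relative-sensitivity measure e(p) = (p / F(p)) · dF/dp. A generic numerical sketch (a central finite-difference estimate, not the authors' IMRT-specific implementation) could be:

```python
def elasticity(f, p, h=1e-6):
    """Elasticity of objective f at parameter value p:
    (p / f(p)) * df/dp, with a central finite difference for the derivative.
    """
    dfdp = (f(p + h) - f(p - h)) / (2 * h)
    return p * dfdp / f(p)
```

    For F(p) = p³ this returns 3 at any positive p: a 1% change in the parameter moves the objective by about 3%, which is the kind of diagnostic the abstract uses to rank critical planning parameters.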

  8. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    Science.gov (United States)

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  9. Galerkin v. discrete-optimal projection in nonlinear model reduction

    Energy Technology Data Exchange (ETDEWEB)

    Carlberg, Kevin Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Barone, Matthew Franklin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Antil, Harbir [George Mason Univ., Fairfax, VA (United States)

    2015-04-01

    Discrete-optimal model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.

  10. [Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].

    Science.gov (United States)

    Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang

    2011-12-01

    To investigate the method of "multi-activity-index evaluation and combination optimization of multiple components" for Chinese herbal formulas. Following the scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, an evolutionary optimization algorithm, and a validation experiment, we optimized the combination of Jiangzhi granules based on the activity indexes of blood serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissues, and the ratio of liver tissue to body weight. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) for multi-activity-index evaluation was more reasonable and objective, as it reflected both the ordering information of the activity indexes and the objective sample data. LASSO modeling could accurately reflect the relationship between different combinations of Jiangzhi granules and the comprehensive activity indexes. The optimized combination of Jiangzhi granules showed better comprehensive activity values than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.

  11. Optimization in Fuzzy Economic Order Quantity (FEOQ Model with Deteriorating Inventory and Units Lost

    Directory of Open Access Journals (Sweden)

    Monalisha Pattnaik

    2014-09-01

    Background: This model presents the effect of deteriorating items on fuzzy optimal instantaneous replenishment for a finite planning horizon. Accounting for a holding cost per unit per unit time and an ordering cost per order has traditionally been the approach to modeling inventory systems in a fuzzy environment. These imprecise parameters are defined on a bounded interval of the real axis, and the physical characteristics of stocked items dictate the nature of the inventory policies implemented to manage and control the production system. Methods: The modified fuzzy EOQ (FEOQ) model is introduced; it assumes that a percentage of the on-hand inventory is wasted due to deterioration, and it is considered an enhancement of the EOQ model for determining the optimal replenishment quantity so that the net profit is maximized. In the theoretical analysis, the necessary and sufficient conditions for the existence and uniqueness of the optimal solutions are proved, and the concavity of the fuzzy net profit function is established. A computational algorithm using the software LINGO 13.0 is developed to find the optimal solution. Results and conclusions: The results of the numerical analysis enable decision-makers to quantify the effect of units lost due to deterioration on the optimal fuzzy net profit for the retailer. Finally, sensitivity analyses of the optimal solution with respect to the major parameters are carried out. Furthermore, fuzzy decision making is shown to be superior to crisp decision making in terms of profit maximization.
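    The FEOQ model builds on the classic crisp EOQ formula Q* = √(2DK/h). As a baseline reference only (the crisp case, with no deterioration, lost units, or fuzzy parameters, which are the paper's extensions):

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Classic crisp EOQ: the order quantity minimizing ordering plus
    holding cost per unit time, Q* = sqrt(2*D*K/h)."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

def total_cost(q, demand, order_cost, holding_cost):
    """Ordering cost D*K/q plus average holding cost h*q/2 at quantity q."""
    return demand * order_cost / q + holding_cost * q / 2
```

    With D = 1000 units/year, K = 50 per order, and h = 2 per unit per year, Q* ≈ 223.6, and the cost at Q* is lower than at nearby order quantities.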

  12. Optimal Designs for the Generalized Partial Credit Model

    OpenAIRE

    Bürkner, Paul-Christian; Schwabe, Rainer; Holling, Heinz

    2018-01-01

    Analyzing ordinal data becomes increasingly important in psychology, especially in the context of item response theory. The generalized partial credit model (GPCM) is probably the most widely used ordinal model and finds application in many large scale educational assessment studies such as PISA. In the present paper, optimal test designs are investigated for estimating persons' abilities with the GPCM for calibrated tests when item parameters are known from previous studies. We will derive t...

  13. Experimental Modeling of Monolithic Resistors for Silicon ICS with a Robust Optimizer-Driving Scheme

    Directory of Open Access Journals (Sweden)

    Philippe Leduc

    2002-06-01

    Today, an exhaustive library of models describing the electrical behavior of integrated passive components in the radio-frequency range is essential for the simulation and optimization of complex circuits. In this work, a preliminary study has been done on tantalum nitride (TaN) resistors integrated on silicon, which leads to a single p-type lumped-element circuit. An efficient extraction technique is presented to provide a computer-driven optimizer with relevant initial model parameter values (the "guess-timate"). The results show the uniqueness, in most cases, of the lumped-element determination, which leads to a precise simulation of self-resonant frequencies.

  14. Electromagnetic Vibration Energy Harvesting Devices Architectures, Design, Modeling and Optimization

    CERN Document Server

    Spreemann, Dirk

    2012-01-01

    Electromagnetic vibration transducers are seen as an effective way of harvesting ambient energy for the supply of sensor monitoring systems. Different electromagnetic coupling architectures have been employed but no comprehensive comparison with respect to their output performance has been carried out up to now. Electromagnetic Vibration Energy Harvesting Devices introduces an optimization approach which is applied to determine optimal dimensions of the components (magnet, coil and back iron). Eight different commonly applied coupling architectures are investigated. The results show that correct dimensions are of great significance for maximizing the efficiency of the energy conversion. A comparison yields the architectures with the best output performance capability which should be preferably employed in applications. A prototype development is used to demonstrate how the optimization calculations can be integrated into the design–flow. Electromagnetic Vibration Energy Harvesting Devices targets the design...

  15. A multidimensional model of optimal participation of children with physical disabilities.

    Science.gov (United States)

    Kang, Lin-Ju; Palisano, Robert J; King, Gillian A; Chiarello, Lisa A

    2014-01-01

    To present a conceptual model of optimal participation in recreational and leisure activities for children with physical disabilities. The conceptualization of the model was based on a review of contemporary theories and frameworks, empirical research and the authors' practice knowledge. A case scenario is used to illustrate application to practice. The model proposes that optimal participation in recreational and leisure activities involves the dynamic interaction of multiple dimensions and determinants of participation. The three dimensions of participation are physical, social and self-engagement. Determinants of participation encompass attributes of the child, family and environment. Experiences of optimal participation are hypothesized to result in long-term benefits including better quality of life, a healthier lifestyle and emotional and psychosocial well-being. Consideration of relevant child, family and environment determinants of dimensions of optimal participation should assist children, families and health care professionals to identify meaningful goals and outcomes and guide the selection and implementation of innovative therapy approaches and methods of service delivery. Implications for Rehabilitation: Optimal participation is proposed to involve the dynamic interaction of physical, social and self-engagement and attributes of the child, family and environment. The model emphasizes the importance of self-perceptions and participation experiences of children with physical disabilities. Optimal participation may have a positive influence on quality of life, a healthy lifestyle and emotional and psychosocial well-being. Knowledge of child, family, and environment determinants of physical, social and self-engagement should assist children, families and professionals in identifying meaningful goals and guiding innovative therapy approaches.

  16. Hybrid Modeling and Optimization of Yogurt Starter Culture Continuous Fermentation

    Directory of Open Access Journals (Sweden)

    Silviya Popova

    2009-10-01

    The present paper presents a hybrid model of yogurt starter mixed-culture fermentation. The main nonlinearities within a classical structure of the continuous process model are replaced by neural networks. The new hybrid model accounts for the dependence of the two microorganisms' kinetics on the on-line measured characteristic of the culture medium, pH. The model was then used to calculate the optimal time profile of pH. The obtained results are in agreement with the experimental ones.

  17. Modeling, estimation and optimal filtration in signal processing

    CERN Document Server

    Najim, Mohamed

    2010-01-01

    The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing.Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed.Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented.Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the

  18. Cost optimization model and its heuristic genetic algorithms

    International Nuclear Information System (INIS)

    Liu Wei; Wang Yongqing; Guo Jilin

    1999-01-01

    Interest and escalation account for a large proportion of the cost of nuclear power plant construction. In order to optimize the cost, a mathematical model of cost optimization for nuclear power plant construction was proposed, which takes the maximum net present value as the optimization goal. The model is based on the activity networks of the project and is an NP-hard problem. A heuristic genetic algorithm (HGA) for the model was introduced. In the algorithm, a solution is represented by a string of numbers, each of which denotes the priority of an activity for assigned resources. An HGA with this encoding method can overcome the difficulty that feasible solutions are hard to obtain when using traditional GAs to solve the model. The critical path of the activity networks is determined using the concept of the predecessor matrix. An example was computed with the HGA programmed in the C language. The results indicate that the model suits its objective and the algorithm is effective in solving the model
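    The priority-string encoding is specific to the authors' activity networks, but the surrounding GA machinery (selection, crossover, mutation) is generic. A minimal hedged sketch on the standard OneMax toy problem, with illustrative population size, generation count, and mutation rate:

```python
import random

def onemax_ga(n_bits=20, pop_size=30, gens=60, seed=1):
    """Minimal genetic algorithm: size-2 tournament selection, one-point
    crossover, and bit-flip mutation, maximizing the count of 1-bits.
    Returns the best fitness found in the final population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)          # size-2 tournament
            return a if sum(a) >= sum(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:             # bit-flip mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(sum(ind) for ind in pop)
```

    In the HGA described above, the bitstring would be replaced by a priority vector decoded into a resource-feasible schedule; the selection/crossover/mutation loop is the same.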

  19. A hydroeconomic modeling framework for optimal integrated management of forest and water

    Science.gov (United States)

    Garcia-Prats, Alberto; del Campo, Antonio D.; Pulido-Velazquez, Manuel

    2016-10-01

    Forests play a determinant role in the hydrologic cycle, with water being the most important ecosystem service they provide in semiarid regions. However, this contribution is usually neither quantified nor explicitly valued. The aim of this study is to develop a novel hydroeconomic modeling framework for assessing and designing the optimal integrated forest and water management for forested catchments. The optimization model explicitly integrates changes in water yield in the stands (increase in groundwater recharge) induced by forest management and the value of the additional water provided to the system. The model determines the optimal schedule of silvicultural interventions in the stands of the catchment in order to maximize the total net benefit in the system. Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific for the species in Mediterranean conditions. Silvicultural operation costs according to stand density and canopy cover were modeled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. In order to illustrate the presented modeling framework, a case study was carried out in a planted pine forest (Pinus halepensis Mill.) located in south-western Valencia province (Spain). The optimized scenario increased groundwater recharge. This novel modeling framework can be used in the design of a "payment for environmental services" scheme in which water beneficiaries could contribute to fund and promote efficient forest management operations.

  20. Replica Analysis for Portfolio Optimization with Single-Factor Model

    Science.gov (United States)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
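    Replica analysis itself is beyond a short snippet, but the effect of correlation on optimal portfolios can be seen in the simplest closed-form case, the two-asset minimum-variance portfolio (a textbook identity, not the authors' method):

```python
def min_variance_weights(var1, var2, cov):
    """Weights (w1, w2) of the two-asset minimum-variance portfolio:
    w1 = (var2 - cov) / (var1 + var2 - 2*cov), w2 = 1 - w1."""
    w1 = (var2 - cov) / (var1 + var2 - 2 * cov)
    return w1, 1 - w1
```

    With equal variances and zero covariance the weights split evenly; introducing covariance (or unequal variances) shifts weight toward the less risky asset, the single-asset analogue of the correlation effect the abstract quantifies for a single-factor model.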

  1. Models and Algorithms for Container Vessel Stowage Optimization

    DEFF Research Database (Denmark)

    Delgado-Ortegon, Alberto

    .g., selection of vessels to buy that satisfy specific demands), through to operational decisions (e.g., selection of containers that optimize revenue, and stowing those containers into a vessel). This thesis addresses the question of whether it is possible to formulate stowage optimization models...... container of those to be loaded in a port should be placed in a vessel, i.e., to generate stowage plans. This thesis explores two different approaches to solve this problem, both follow a 2-phase decomposition that assigns containers to vessel sections in the first phase, i.e., master planning...

  2. Modelling of Rabies Transmission Dynamics Using Optimal Control Analysis

    Directory of Open Access Journals (Sweden)

    Joshua Kiddy K. Asamoah

    2017-01-01

    We examine an optimal way of eradicating rabies transmission from dogs into the human population, using preexposure prophylaxis (vaccination) and postexposure prophylaxis (treatment) due to public education. We obtain the disease-free equilibrium, the endemic equilibrium, the stability, and the sensitivity analysis of the optimal control model. Using Latin hypercube sampling (LHS), the forward-backward sweep scheme and the fourth-order Runge-Kutta numerical method predict that the Global Alliance for Rabies Control's aim of working to eliminate deaths from canine rabies by 2030 is attainable through mass vaccination of susceptible dogs and continuous use of pre- and postexposure prophylaxis in humans.
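    The fourth-order Runge-Kutta scheme named in the abstract is a standard ODE integrator; the forward-backward sweep applies it repeatedly to the state and adjoint equations. A minimal single-step sketch (generic RK4, not the authors' rabies-model implementation):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

    Integrating y' = y from y(0) = 1 to t = 1 in ten steps reproduces e to about six digits, which is why RK4 is the default workhorse in optimal-control sweeps like this one.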

  3. Characterization, Modeling, and Optimization of Light-Emitting Diode Systems

    DEFF Research Database (Denmark)

    Thorseth, Anders

    are simulated SPDs similar to traditional light sources, and with high light quality. As part of this work the techniques have been applied in practical illumination applications. The presented examples are historical artifacts and illumination of plants to increase photosynthesis....... comparing the chromaticity of the measured SPD with fitted models, the deviation is found to be larger than the lower limit of human color perception. A method has been developed to optimize multicolored cluster LED systems with respect to light quality, using multi-objective optimization. The results...

  4. Multi-objective analytical model for optimal sizing of stand-alone photovoltaic water pumping systems

    International Nuclear Information System (INIS)

    Olcan, Ceyda

    2015-01-01

    Highlights: • An analytical optimal sizing model is proposed for PV water pumping systems. • The objectives are chosen as deficiency of power supply and life-cycle costs. • The crop water requirements are estimated for a citrus tree yard in Antalya. • The optimal tilt angles are calculated for fixed, seasonal and monthly changes. • The sizing results showed the validity of the proposed analytical model. - Abstract: Stand-alone photovoltaic (PV) water pumping systems effectively use solar energy for irrigation purposes in remote areas. However, the random variability and unpredictability of solar energy hinder the penetration of PV implementations and complicate the system design. An optimal sizing of these systems therefore proves essential. This paper recommends a techno-economic optimization model to optimally determine the capacity of the components of a PV water pumping system using a water storage tank. The proposed model is developed with regard to reliability and cost indicators, which are the deficiency of power supply probability and life-cycle costs, respectively. The novelty is that the proposed optimization model is analytically defined for two objectives and is able to find a compromise solution. The sizing of a stand-alone PV water pumping system comprises a detailed analysis of crop water requirements and optimal tilt angles. Besides the necessity of long solar radiation and temperature time series, accurate forecasts of water supply needs have to be determined. The calculation of the optimal tilt angle for yearly, seasonal and monthly frequencies results in higher system efficiency. It is, therefore, suggested to change the tilt angle regularly in order to maximize solar energy output. The proposed optimal sizing model incorporates all these improvements and can accomplish a comprehensive optimization of PV water pumping systems. A case study is conducted considering the irrigation of a citrus tree yard located in Antalya, Turkey.

  5. Determination and optimization of the ζ potential in boron electrophoretic deposition on aluminium substrates

    International Nuclear Information System (INIS)

    Oliveira Sampa, M.H. de; Vinhas, L.A.; Pino, E.S.

    1991-05-01

    In this work we present an introduction to the electrophoretic process, followed by a detailed experimental treatment of the technique used in the determination and optimization of the ζ-potential, mainly as a function of the electrolyte concentration, in high-purity boron electrophoretic deposition on aluminium substrates used as electrodes in neutron detectors. (author)

  6. Optimization in Activation Analysis by Means of Epithermal Neutrons. Determination of Molybdenum in Steel

    Energy Technology Data Exchange (ETDEWEB)

    Brune, D; Jirlow, J

    1963-12-15

    Optimization in activation analysis by means of selective activation with epithermal neutrons is discussed. This method was applied to the determination of molybdenum in a steel alloy without recourse to radiochemical separations. The sensitivity for this determination is estimated to be 10 ppm. With the common form of activation by means of thermal neutrons, the sensitivity would be about one-tenth of this. The sensitivity estimations are based on evaluation of the photo peak ratios of Mo-99/Fe-59.

  7. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
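    The model-free baseline the abstract starts from, nearest-neighbor prediction, can be sketched minimally as follows (a generic delay-embedding forecaster with illustrative `k` and embedding dimension, without the paper's causal-preselection step):

```python
def knn_forecast(series, k=3, dim=2):
    """Model-free nearest-neighbor forecast: average the successors of the
    k past delay vectors closest to the most recent one."""
    target = series[-dim:]

    def sqdist(i):
        # squared distance between the delay vector at i and the target
        return sum((series[i + j] - target[j]) ** 2 for j in range(dim))

    # candidate delay vectors with their one-step successors; the final
    # vector (which has no successor) is excluded by the range bound
    candidates = sorted((sqdist(i), series[i + dim])
                        for i in range(len(series) - dim))
    return sum(succ for _, succ in candidates[:k]) / k
```

    On a strictly periodic series the matched history pins the forecast exactly; the curse of dimensionality discussed above appears as soon as `dim` grows or multiple predictor series are stacked into the delay vector.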

  8. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities including industrial and agricultural activities are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for the two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration

  9. The case for repeatable analysis with energy economy optimization models

    International Nuclear Information System (INIS)

    DeCarolis, Joseph F.; Hunter, Kevin; Sreepathi, Sarat

    2012-01-01

    Energy economy optimization (EEO) models employ formal search techniques to explore the future decision space over several decades in order to deliver policy-relevant insights. EEO models are a critical tool for decision-makers who must make near-term decisions with long-term effects in the face of large future uncertainties. While the number of model-based analyses proliferates, insufficient attention is paid to transparency in model development and application. Given the complex, data-intensive nature of EEO models and the general lack of access to source code and data, many of the assumptions underlying model-based analysis are hidden from external observers. This paper discusses the simplifications and subjective judgments involved in the model building process, which cannot be fully articulated in journal papers, reports, or model documentation. In addition, we argue that for all practical purposes, EEO model-based insights cannot be validated through comparison to real world outcomes. As a result, modelers are left without credible metrics to assess a model's ability to deliver reliable insight. We assert that EEO models should be discoverable through interrogation of publicly available source code and data. In addition, third parties should be able to run a specific model instance in order to independently verify published results. Yet a review of twelve EEO models suggests that in most cases, replication of model results is currently impossible. We provide several recommendations to help develop and sustain a software framework for repeatable model analysis.

  10. On the complexity of determining tolerances for ε-optimal solutions to min-max combinatorial optimization problems

    NARCIS (Netherlands)

    Ghosh, D.; Sierksma, G.

    2000-01-01

    Sensitivity analysis of ε-optimal solutions is the problem of calculating the range within which a problem parameter may lie so that the given solution remains ε-optimal. In this paper we study the sensitivity analysis problem for ε-optimal solutions to combinatorial optimization problems with

  11. Application of Particle Swarm Optimization Algorithm for Optimizing ANN Model in Recognizing Ripeness of Citrus

    Science.gov (United States)

    Diyana Rosli, Anis; Adenan, Nur Sabrina; Hashim, Hadzli; Ezan Abdullah, Noor; Sulaiman, Suhaimi; Baharudin, Rohaiza

    2018-03-01

    This paper shows findings of the application of the Particle Swarm Optimization (PSO) algorithm in optimizing an Artificial Neural Network that can categorize between the ripeness and unripeness stages of citrus suhuensis. The algorithm adjusts the network connection weights, adapting their values during training for the best results at the output. Initially, the skin of the citrus suhuensis fruit is measured using an optically non-destructive method via spectrometer. The spectrometer transmits VIS (visible spectrum) photonic light radiation to the surface (skin of citrus) of the sample. The reflected light from the sample's surface is received and measured by the same spectrometer in terms of reflectance percentage over the VIS range. These measured data are used to train and test the best optimized ANN model. The accuracy is based on receiver operating characteristic (ROC) performance. The outcomes of this investigation show that the achieved accuracy for the optimized model is 70.5%, with a sensitivity and specificity of 60.1% and 80.0% respectively.
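
    As a hedged illustration of the PSO update loop described above, the sketch below applies the standard velocity/position rule to a one-dimensional quadratic stand-in objective rather than an actual ANN error surface; the parameter values (inertia w, attraction coefficients c1, c2) are conventional defaults, not those of the study:

```python
import random

def pso(f, n_particles=20, iters=100, lo=-10.0, hi=10.0):
    """Minimal particle swarm optimization for a 1-D objective.
    Standard velocity update with inertia w and attraction terms c1, c2."""
    random.seed(0)  # deterministic for illustration
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]           # each particle's personal best position
    gbest = min(xs, key=f)  # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2)  # minimizer is x = 3
```

    In the paper's setting, each particle position would instead encode a full vector of ANN connection weights and f would be the network's training error.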

  12. Advanced Nuclear Fuel Cycle Transitions: Optimization, Modeling Choices, and Disruptions

    Science.gov (United States)

    Carlsen, Robert W.

    Many nuclear fuel cycle simulators have evolved over time to help understand the nuclear industry/ecosystem at a macroscopic level. Cyclus is one of the first fuel cycle simulators to accommodate larger-scale analysis with its liberal open-source licensing and first-class Linux support. Cyclus also has features that uniquely enable investigating the effects of modeling choices on fuel cycle simulators and scenarios. This work is divided into three experiments focusing on optimization, effects of modeling choices, and fuel cycle uncertainty. Effective optimization techniques are developed for automatically determining desirable facility deployment schedules with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This allows relationships between reactor types and scenario constraints to be represented implicitly in the variable definitions, enabling the usage of optimizers lacking constraint support. It also prevents wasting computational resources evaluating infeasible deployment schedules. Deployed power capacity over time and deployment of non-reactor facilities are also included as optimization variables. There are many fuel cycle simulators built with different combinations of modeling choices. Comparing results between them is often difficult. Cyclus' flexibility allows comparing effects of many such modeling choices. Reactor refueling cycle synchronization and inter-facility competition, among other effects, are compared in four cases, each using combinations of fleet or individually modeled reactors with 1-month or 3-month time steps. There are noticeable differences in results for the different cases. The largest differences occur during periods of constrained reactor fuel availability. This and similar work can help improve the quality of fuel cycle analysis generally. There is significant uncertainty associated with deploying new nuclear technologies, such as time-frames for technology availability and the cost of building advanced reactors.

  13. Nuclear-fuel-cycle optimization: methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book presents methods applicable to analyzing fuel-cycle logistics and optimization as well as to evaluating the economics of different reactor strategies. After an introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective. Subsequent chapters deal with the fuel-cycle problems faced by a power utility. The fuel-cycle models cover the entire cycle from the supply of uranium to the disposition of spent fuel. The chapter headings are: Nuclear Fuel Cycle, Uranium Supply and Demand, Basic Model of the LWR (light water reactor) Fuel Cycle, Resolution of Uncertainties, Assessment of Proliferation Risks, Multigoal Optimization, Generalized Fuel-Cycle Models, Reactor Strategy Calculations, and Interface with Energy Strategies. 47 references, 34 figures, 25 tables

  14. A Convex Optimization Model and Algorithm for Retinex

    Directory of Open Access Journals (Sweden)

    Qing-Nan Zhao

    2017-01-01

    Full Text Available Retinex is a theory on simulating and explaining how human visual system perceives colors under different illumination conditions. The main contribution of this paper is to put forward a new convex optimization model for Retinex. Different from existing methods, the main idea is to rewrite a multiplicative form such that the illumination variable and the reflection variable are decoupled in spatial domain. The resulting objective function involves three terms including the Tikhonov regularization of the illumination component, the total variation regularization of the reciprocal of the reflection component, and the data-fitting term among the input image, the illumination component, and the reciprocal of the reflection component. We develop an alternating direction method of multipliers (ADMM to solve the convex optimization model. Numerical experiments demonstrate the advantages of the proposed model which can decompose an image into the illumination and the reflection components.
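
    The ADMM splitting described above can be illustrated on a scalar stand-in problem that pairs a quadratic data-fitting term with a non-smooth regularizer, mirroring the structure (though not the actual operators) of the Retinex model:

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of t*|.|"""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def admm(a=2.0, lam=1.0, rho=1.0, iters=100):
    """ADMM for min_x (x - a)^2 + lam*|x|, split as f(x) + g(z) with x = z."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)  # quadratic x-update
        z = soft(x + u, lam / rho)               # prox of lam*|.|
        u += x - z                               # scaled dual update
    return z

sol = admm()  # analytic minimizer: x = a - lam/2 = 1.5
```

    The paper's solver follows the same alternation, with the quadratic step replaced by the Tikhonov/data-fitting subproblem and the shrinkage step by the total-variation proximal subproblem.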

  15. A model for HIV/AIDS pandemic with optimal control

    Science.gov (United States)

    Sule, Amiru; Abdullah, Farah Aini

    2015-05-01

    Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic. It has affected nearly 60 million people since the detection of the disease in 1981. In this paper, a basic deterministic HIV/AIDS model with a mass action incidence function is developed. Stability analysis is carried out, and the disease-free equilibrium of the basic model is found to be locally asymptotically stable whenever the threshold parameter (R0) is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies, namely CD4 counts and treatment for the infective, using optimal control theory. Numerical simulation was carried out in order to illustrate the analytic results.

  16. An Optimal Electric Dipole Antenna Model and Its Field Propagation

    Directory of Open Access Journals (Sweden)

    Yidong Xu

    2016-01-01

    Full Text Available An optimal electric dipole antenna model is presented and analyzed, based on the hemispherical grounding equivalent model and the superposition principle. The paper also presents a full-wave electromagnetic simulation of the electromagnetic field propagation in a layered conducting medium excited by horizontal electric dipole antennas. The optimum frequency for field transmission at different depths is determined and verified by the experimental results, in comparison with a previously reported simulation of a digital wireless Through-The-Earth communication system. The experimental results demonstrate that the dipole antenna grounding impedance and the output power can be efficiently reduced by using the optimal electric dipole antenna model and operating at the optimum frequency, at a vertical transmission depth of up to 300 m beneath the surface of the earth.

  17. Using genetic algorithm to solve a new multi-period stochastic optimization model

    Science.gov (United States)

    Zhang, Xin-Li; Zhang, Ke-Cun

    2009-09-01

    This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi, (Ed.) The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections such as short sale constraints and proportional transaction costs are considered simultaneously; (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
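
    The CVaR measure used in the model can be estimated from sample losses as the average of the worst (1 − α) fraction of outcomes. A minimal sketch with toy losses, not the paper's scenario data:

```python
def cvar(losses, alpha=0.95):
    """Sample CVaR: the mean of the worst (1 - alpha) fraction of losses."""
    n_tail = max(1, int(round(len(losses) * (1 - alpha))))
    worst = sorted(losses, reverse=True)[:n_tail]
    return sum(worst) / len(worst)

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
tail_mean = cvar(losses, alpha=0.8)  # worst 20% -> mean of {9, 10} = 9.5
```

    In the asset allocation model, the losses would come from the simulated wealth paths, and the CVaR value enters the objective or constraints to cap tail risk while expected utility is maximized.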

  18. Mathematical model of the metal mould surface temperature optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mlynek, Jaroslav, E-mail: jaroslav.mlynek@tul.cz; Knobloch, Roman, E-mail: roman.knobloch@tul.cz [Department of Mathematics, FP Technical University of Liberec, Studentska 2, 461 17 Liberec, The Czech Republic (Czech Republic); Srb, Radek, E-mail: radek.srb@tul.cz [Institute of Mechatronics and Computer Engineering Technical University of Liberec, Studentska 2, 461 17 Liberec, The Czech Republic (Czech Republic)

    2015-11-30

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of infrared heaters over the mould, so that approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. For temperature calculations the software system ANSYS was used. A practical example of optimization of heater locations and calculation of the temperature of the mould is included at the end of the article.

  19. Mathematical model of the metal mould surface temperature optimization

    International Nuclear Information System (INIS)

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-01-01

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of infrared heaters over the mould, so that approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. For temperature calculations the software system ANSYS was used. A practical example of optimization of heater locations and calculation of the temperature of the mould is included at the end of the article.

  20. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to prediction of discrete time...... series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical...... of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal......

  1. Modeling, simulation and optimization for science and technology

    CERN Document Server

    Kuznetsov, Yuri; Neittaanmäki, Pekka; Pironneau, Olivier

    2014-01-01

    This volume contains thirteen articles on advances in applied mathematics and computing methods for engineering problems. Six papers are on optimization methods and algorithms with emphasis on problems with multiple criteria; four articles are on numerical methods for applied problems modeled with nonlinear PDEs; two contributions are on abstract estimates for error analysis; finally one paper deals with rare events in the context of uncertainty quantification. Applications include aerospace, glaciology and nonlinear elasticity. Herein is a selection of contributions from speakers at two conferences on applied mathematics held in June 2012 at the University of Jyväskylä, Finland. The first conference, “Optimization and PDEs with Industrial Applications” celebrated the seventieth birthday of Professor Jacques Périaux of the University of Jyväskylä and Polytechnic University of Catalonia (Barcelona Tech), and the second conference, “Optimization and PDEs with Applications” celebrated the seventy-fi...

  2. Combustion optimization and HCCI modeling for ultra low emission

    Energy Technology Data Exchange (ETDEWEB)

    Koten, Hasan; Yilmaz, Mustafa; Zafer Gul, M. [Marmara University Mechanical Engineering Department (Turkey)], E-mail: hasan.koten@marmara.edu.tr

    2011-07-01

    With the coming shortage of fossil fuels and rising concerns over the environment, it is important to develop new technologies that both reduce energy consumption and pollution at the same time. In the transportation sector, new combustion processes are under development to provide clean diesel combustion with no particulate or NOx emissions. However, these processes have issues such as limited power output, high levels of unburned hydrocarbons, and carbon monoxide emissions. The aim of this paper is to present a methodology for optimizing combustion performance. The methodology consists of the use of a multi-objective genetic algorithm optimization tool; homogeneous charge compression ignition engine cases were studied with the ECFM-3Z combustion model. Results showed that the injected fuel mass led to a decrease in power output, a finding which is in keeping with previous research. This paper presented an optimization tool which can be useful in improving the combustion process.

  3. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    Science.gov (United States)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work presents Ant Colony Optimization (ACO), which was initially developed to be a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now. Using heuristic algorithms in this case is imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March, 1992 to September, 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in UK, S&P 100 in USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.

  4. Two-phase strategy of controlling motor coordination determined by task performance optimality.

    Science.gov (United States)

    Shimansky, Yury P; Rand, Miya K

    2013-02-01

    A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.

  5. DETERMINATION OF OPTIMAL CONTOURS OF OPEN PIT MINE DURING OIL SHALE EXPLOITATION, BY MINEX 5.2.3. PROGRAM

    Directory of Open Access Journals (Sweden)

    Miroslav Ignjatović

    2013-04-01

    Full Text Available By examining and determining the optimal solution for the technological processes of exploitation and processing of oil shale from the Aleksinac site, and with the adopted technical solution for oil shale exploitation, a technical solution was derived that optimizes the contour of the newly defined open pit mine. In the world, this problem is solved by using computer programs that have become the established standard for a quick and efficient solution to this problem. One of the computer programs that can be used for determination of the optimal contours of open pit mines is the Minex 5.2.3. program, developed in Australia by the Surpac Minex Group Pty Ltd Company, which is applied at the Mining and Metallurgy Institute Bor (license nos. SSI-24765 and SSI-24766). In this study, the authors performed 11 optimizations of deposit geo-models in Minex 5.2.3. based on the test results obtained in a laboratory for soil mechanics of the Mining and Metallurgy Institute, Bor, on samples from the Aleksinac deposit.

  6. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    Science.gov (United States)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company will increase its profit. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. A numerical example is given to show the implementation of the model, solved using OptQuest of the Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters to minimize both the production cost and the environmental impact.
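
    A hedged sketch of a weighted-sum treatment of such a bi-objective problem is given below; the cost, impact, and roughness functions are hypothetical stand-ins rather than the paper's models, and a plain grid search replaces OptQuest:

```python
def optimize_cutting(w_cost=0.5, w_env=0.5):
    """Weighted-sum scalarization of production cost and environmental
    impact over a grid of cutting parameters, subject to a roughness
    constraint. All functions below are illustrative stand-ins."""
    best = None
    for v in range(50, 301, 10):                    # cutting speed, m/min
        for f in [0.05 * k for k in range(1, 11)]:  # feed rate, mm/rev
            time = 1000.0 / (v * f)                 # machining time ~ 1/(v*f)
            cost = 2.0 * time + 0.01 * v            # time plus tool-wear term
            env = 0.5 * time + 0.002 * v            # energy-driven impact
            roughness = 120.0 * f ** 2 / v          # empirical-style Ra
            if roughness > 0.5:                     # surface roughness limit, um
                continue                            # infeasible combination
            score = w_cost * cost + w_env * env
            if best is None or score < best[0]:
                best = (score, v, f)
    return best

score, speed, feed = optimize_cutting()
```

    The weights w_cost and w_env trade the two objectives off against each other; sweeping them traces out an approximation of the Pareto front between cost and impact.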

  7. Optimization Model for Web Based Multimodal Interactive Simulations.

    Science.gov (United States)

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.

  8. In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data

    Science.gov (United States)

    Yi, Yeon-Sook

    2017-01-01

    This study compares five cognitive diagnostic models in search of optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…

  9. Electrodialytic desalination of brackish water: determination of optimal experimental parameters using full factorial design

    Science.gov (United States)

    Gmar, Soumaya; Helali, Nawel; Boubakri, Ali; Sayadi, Ilhem Ben Salah; Tlili, Mohamed; Amor, Mohamed Ben

    2017-12-01

    The aim of this work is to study the desalination of brackish water by electrodialysis (ED). A two-level, three-factor (2³) full factorial design methodology was used to investigate the influence of different physicochemical parameters on the demineralization rate (DR) and the specific power consumption (SPC). The statistical design identifies the factors that have important effects on ED performance and examines all interactions between the considered parameters. Three significant factors were used: applied potential, salt concentration and flow rate. The experimental results and statistical analysis show that applied potential and salt concentration are the main effects for DR as well as for SPC. An interaction effect between applied potential and salt concentration was observed for SPC. A maximum value of 82.24% was obtained for DR under optimum conditions, and the best value of SPC obtained was 5.64 Wh L-1. Empirical regression models were also obtained and used to predict the DR and SPC profiles with satisfactory results. The process was applied to the treatment of real brackish water using the optimal parameters.
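
    A 2³ design enumerates all eight coded runs and estimates each main effect as the difference between the mean responses at the high and low levels of that factor. A minimal sketch with a hypothetical linear response, not the ED data:

```python
from itertools import product

def full_factorial(n_factors):
    """All 2^n coded runs (-1/+1) of a two-level full factorial design."""
    return list(product((-1, 1), repeat=n_factors))

def main_effect(design, responses, j):
    """Main effect of factor j: mean response at +1 minus mean at -1."""
    hi = [y for run, y in zip(design, responses) if run[j] == 1]
    lo = [y for run, y in zip(design, responses) if run[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

design = full_factorial(3)  # 8 runs, as in the 2^3 design of the study
# Hypothetical linear response: y = 5 + 2*A + 0.5*B (factor C inactive)
responses = [5 + 2 * a + 0.5 * b for a, b, c in design]
effects = [main_effect(design, responses, j) for j in range(3)]
print(effects)  # [4.0, 1.0, 0.0]
```

    With real data, factors whose effects are small relative to the noise (like C here) would be screened out, and the remaining effects feed the empirical regression models mentioned in the abstract.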

  10. PEM fuel cell model suitable for energy optimization purposes

    International Nuclear Information System (INIS)

    Caux, S.; Hankache, W.; Fadel, M.; Hissel, D.

    2010-01-01

    Many fuel cell stack models and fuel cell system models exist. A model must be built with a main objective: sometimes accurate electro-chemical behavior description, sometimes an optimization procedure at the system level. In this paper, based on the fundamental reactions present in a fuel cell stack, an accurate model and identification procedure are presented for future energy management in a Hybrid Electrical Vehicle (HEV). The proposed approach extracts all important state variables in such a system and, based on the control of the fuel cell's gas flows and temperature, simplifies the description to a simple electrical model. Assumptions verified owing to the control of the stack allow simplifying the relationships while keeping accuracy in the description of the global fuel cell stack behavior from current demand to voltage. Modeled voltage and current dynamic behaviors are compared with actual measurements. The obtained accuracy is sufficient and less time-consuming (versus other previously published system-oriented models), leading to a model suitable for iterative off-line optimization algorithms.

  11. PEM fuel cell model suitable for energy optimization purposes

    Energy Technology Data Exchange (ETDEWEB)

    Caux, S.; Hankache, W.; Fadel, M. [LAPLACE/CODIASE: UMR CNRS 5213, Universite de Toulouse - INPT, UPS, - ENSEEIHT: 2 rue Camichel BP7122, 31071 Toulouse (France); CNRS, LAPLACE, F-31071 Toulouse (France); Hissel, D. [FEMTO-ST ENISYS/FCLAB, UMR CNRS 6174, University of Franche-Comte, Rue Thierry Mieg, 90010 Belfort (France)

    2010-02-15

    Many fuel cell stack models and fuel cell system models exist. A model must be built with a main objective: sometimes accurate electro-chemical behavior description, sometimes an optimization procedure at the system level. In this paper, based on the fundamental reactions present in a fuel cell stack, an accurate model and identification procedure are presented for future energy management in a Hybrid Electrical Vehicle (HEV). The proposed approach extracts all important state variables in such a system and, based on the control of the fuel cell's gas flows and temperature, simplifies the description to a simple electrical model. Assumptions verified owing to the control of the stack allow simplifying the relationships while keeping accuracy in the description of the global fuel cell stack behavior from current demand to voltage. Modeled voltage and current dynamic behaviors are compared with actual measurements. The obtained accuracy is sufficient and less time-consuming (versus other previously published system-oriented models), leading to a model suitable for iterative off-line optimization algorithms. (author)

  12. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    Science.gov (United States)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input

  13. An internet graph model based on trade-off optimization

    Science.gov (United States)

    Alvarez-Hamelin, J. I.; Schabanel, N.

    2004-03-01

    This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.

  14. Error propagation of partial least squares for parameters optimization in NIR modeling.

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) during parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55%, and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS during parameter optimization in NIR modeling; the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this work can provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.

  16. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    Full Text Available The Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source's strength is used as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is thus converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and with a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method determines the regularization parameter correctly and effectively for reconstruction in NAH.
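
    A minimal numerical sketch of the selection criterion (not the ESM implementation itself): augment the transfer matrix with a column for a fictitious source of true strength zero, solve the Tikhonov-regularized normal equations over a grid of candidate parameters, and keep the parameter that drives the reconstructed fictitious strength closest to zero. The matrix sizes, entries, and parameter grid below are illustrative assumptions.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def tikhonov(A, b, lam):
    """Solve the regularized normal equations (A^T A + lam I) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[i][r] * A[i][c] for i in range(m)) + (lam if r == c else 0.0)
            for c in range(n)] for r in range(n)]
    Atb = [sum(A[i][r] * b[i] for i in range(m)) for r in range(n)]
    return solve(AtA, Atb)

def best_lambda(A_aug, b, grid):
    """Pick the regularization parameter whose solution drives the
    reconstructed strength of the fictitious source (last column,
    true strength zero) closest to zero."""
    return min(grid, key=lambda lam: abs(tikhonov(A_aug, b, lam)[-1]))

# Illustrative problem: 6 measurements, 3 real sources + 1 fictitious column.
A_aug = [[math.sin(3.0 * i + j + 1.0) for j in range(4)] for i in range(6)]
x_true = [1.0, -0.5, 2.0, 0.0]          # the fictitious strength is zero
b = [sum(A_aug[i][j] * x_true[j] for j in range(4)) + 0.01 * math.sin(17.0 * i)
     for i in range(6)]
grid = [10.0 ** e for e in range(-8, 1)]
```

    The paper replaces the grid scan with a proper one-dimensional search, but the criterion being minimized is the same.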

  17. Collaborative Emission Reduction Model Based on Multi-Objective Optimization for Greenhouse Gases and Air Pollutants.

    Science.gov (United States)

    Meng, Qing-chun; Rong, Xiao-xia; Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi

    2016-01-01

    CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world with regard to preserving the ecological environment. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and of air pollutants such as SO2 and NOX, the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative reduction of emissions of these three kinds of gases, on the basis of their common constraints in different modes of energy consumption, to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, collaborative emission reduction for the three kinds of gases, multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study: a Granger causality test to analyze the causality between air quality and pollutant emission; a functional analysis to determine the quantitative relation between energy consumption and pollutant emission; a multi-objective optimization to set up the collaborative optimization model that considers energy consumption; and an optimality-condition analysis of the multi-objective optimization model to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, data on pollutant emissions and final energy consumption in Tianjin from 1996 to 2012 were employed to verify the effectiveness of the model and to analyze the efficient solutions and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and conclusions are drawn.
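
    The efficient (non-dominated) solutions that a multi-objective model of this kind searches for can be filtered with a simple dominance test. The objective tuples below, e.g. (cost, combined emissions), both to be minimized, are placeholders, not Tianjin data:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (cost, emissions) pairs for five energy-allocation schemes.
schemes = [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]
```

    The model in the paper additionally characterizes this efficient set analytically via optimality conditions rather than by enumeration.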

  18. Experimental determination of optimal clamping torque for AB-PEM Fuel cell

    Directory of Open Access Journals (Sweden)

    Noor Ul Hassan

    2016-04-01

    Full Text Available A polymer electrolyte membrane (PEM) fuel cell is an electrochemical device producing electricity by the reaction of hydrogen and oxygen without combustion. A PEM fuel cell stack is provided with an appropriate clamping torque to prevent leakage of reactant gases and to minimize the contact resistance between the gas diffusion media (GDL) and the bipolar plates. The GDL porous structure and gas permeability are directly affected by the compaction pressure, which consequently changes the fuel cell performance drastically. Various efforts have been made to determine the optimal compaction pressure and pressure distribution through simulation and experimentation. Lower compaction pressure increases the contact resistance as well as the chances of leakage. On the other hand, higher compaction pressure decreases the contact resistance but also narrows the diffusion path for mass transfer from the gas channels to the catalyst layers, consequently lowering cell performance. The optimal cell performance is related to the gasket thickness and the compression pressure on the GDL. Every stack has a unique assembly pressure due to differences in fuel cell component materials and stack design; therefore, the optimal torque value for obtaining optimal cell performance still needs to be determined for each stack. This study was carried out in continuation of the development of an air-breathing PEM fuel cell for small Unmanned Aerial Vehicle (UAV) applications. The compaction pressure at minimum contact resistance was determined and the clamping torque value was calculated accordingly. Single-cell performance tests were performed at five different clamping torque values, i.e., 0.5, 1.0, 1.5, 2.0 and 2.5 N m, to find the optimal cell performance. Clamping pressure distribution tests were also performed at these torque values to verify uniform pressure distribution at the optimal torque value. Experimental and theoretical results were compared to draw inferences about optimal cell performance.
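
    The link between applied bolt torque and GDL compaction pressure can be approximated with the short-form torque equation F = T/(K·d). The nut factor K, bolt diameter, bolt count, and active area below are illustrative assumptions, not values from this stack:

```python
def clamping_pressure(torque_nm, n_bolts=4, k=0.2, d_m=0.006, area_m2=0.0025):
    """Approximate compaction pressure (Pa) on the active area from the
    per-bolt tightening torque, using F = T / (K d) for the axial preload."""
    force_per_bolt = torque_nm / (k * d_m)     # axial preload per bolt [N]
    return n_bolts * force_per_bolt / area_m2  # pressure over active area [Pa]
```

    With these assumed numbers the five tested torques (0.5 to 2.5 N m) span roughly 0.7 to 3.3 MPa, which is why the experiment sweeps torque and checks pressure uniformity at each step.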

  19. Fast optimization of statistical potentials for structurally constrained phylogenetic models

    Directory of Open Access Journals (Sweden)

    Rodrigue Nicolas

    2009-09-01

    Full Text Available Abstract Background Statistical approaches for protein design are relevant in the field of molecular evolutionary studies. In recent years, new, so-called structurally constrained (SC) models of protein-coding sequence evolution have been proposed, which use statistical potentials to assess sequence-structure compatibility. In a previous work, we defined a statistical framework for optimizing knowledge-based potentials especially suited to SC models. Our method used the maximum likelihood principle and provided what we call the joint potentials. However, the method required numerical estimation by computationally heavy Markov chain Monte Carlo sampling algorithms. Results Here, we develop an alternative optimization procedure, based on a leave-one-out argument coupled to fast gradient descent algorithms. We show that the leave-one-out potential yields very similar results to the joint approach developed previously, both in terms of the resulting potential parameters and by Bayes factor evaluation in a phylogenetic context. On the other hand, the leave-one-out approach results in a considerable computational benefit (up to a 1,000-fold decrease in computational time for the optimization procedure). Conclusion Due to its computational speed, the optimization method we propose offers an attractive alternative for the design and empirical evaluation of alternative forms of potentials, using large data sets and high-dimensional parameterizations.

  20. Optimal Fluorescence Waveband Determination for Detecting Defective Cherry Tomatoes Using a Fluorescence Excitation-Emission Matrix

    Directory of Open Access Journals (Sweden)

    In-Suck Baek

    2014-11-01

    Full Text Available A multi-spectral fluorescence imaging technique was used to detect defective cherry tomatoes. A fluorescence excitation-emission matrix was measured for defect, sound-surface, and stem areas to determine the optimal fluorescence excitation and emission wavelengths for discrimination. Two-way ANOVA revealed that the optimal excitation wavelength for detecting defect areas was 410 nm. Principal component analysis (PCA) was applied to the fluorescence emission spectra of all regions at 410 nm excitation to determine the emission wavelengths for defect detection; the major emission wavelengths were 688 nm and 506 nm. Fluorescence images combined with the determined emission wavebands demonstrated the feasibility of detecting defective cherry tomatoes with >98% accuracy. Multi-spectral fluorescence imaging has potential utility in the non-destructive quality sorting of cherry tomatoes.

  1. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial-intelligence-based models are most often used for this purpose, trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known; however, in most practical situations these values are uncertain, which limits the application of such approximation surrogates. In our study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem with two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of the pumping rates and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets belonging to different regions of the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of

  2. Optimal Parameters to Determine the Apparent Diffusion Coefficient in Diffusion Weighted Imaging via Simulation

    Science.gov (United States)

    Perera, Dimuthu

    Diffusion weighted (DW) imaging is a non-invasive MR technique that provides information about tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC; to provide optimal parameter combinations depending on the percentage accuracy and precision for prostate peripheral-zone cancer applications; and to suggest parameter choices for any type of tissue, together with the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values for known true ADC values. Rician noise of different levels was added to the DWI images to adjust the image SNR. Using the two DWI images, the ADC was calculated with a mono-exponential model for each set of b-values, SNR, and true ADC. 40,000 ADC estimates were collected for each parameter setting to determine the mean and standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated from the difference between the known and calculated ADC; the precision was calculated from the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision errors were minimized. In our study, we simulated two true ADC values (0.00102 mm2/s for tumor and 0.00180 mm2/s for normal prostate peripheral-zone tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000 s/mm2. The results show that the percentage accuracy and precision errors decreased with increasing image SNR. To increase SNR, 10 signal averages (NEX) were used, considering the limitation on total scan time. The optimal NEX combination for tumor and normal tissue for prostate
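
    The two-point ADC estimate that such a simulation evaluates follows directly from the mono-exponential model S(b) = S0·exp(-b·ADC). A noise-free sketch (the study itself adds Rician noise before estimating):

```python
import math

def adc_two_point(s1, s2, b1, b2):
    """Two-point mono-exponential ADC estimate from signals at two b-values:
    ADC = ln(S(b1) / S(b2)) / (b2 - b1)."""
    return math.log(s1 / s2) / (b2 - b1)

# Noise-free signals for an assumed tumor-like true ADC of 0.00102 mm^2/s.
true_adc = 0.00102
s0 = 1000.0
b1, b2 = 0.0, 1000.0
s_b1 = s0 * math.exp(-b1 * true_adc)
s_b2 = s0 * math.exp(-b2 * true_adc)
```

    With Rician noise added, repeating the estimate many times (40,000 per setting in the study) yields the mean and standard deviation behind the percentage accuracy and precision figures.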

  3. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    International Nuclear Information System (INIS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-01-01

    This paper presents the development of a new electromagnetic hybrid damper which provides regenerative, adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization of the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. Good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force, and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 and 0–238 N s m−1 through the viscous and electromagnetic components, respectively. (paper)

  4. RISK LOAN PORTFOLIO OPTIMIZATION MODEL BASED ON CVAR RISK MEASURE

    Directory of Open Access Journals (Sweden)

    Ming-Chang LEE

    2015-07-01

    Full Text Available In order to meet commercial banks' liquidity, safety, and profitability requirements, loan portfolio risk analysis and optimization-based decisions support the rational allocation of assets. Risk analysis and asset allocation are key technologies of banking and risk management. The aim of this paper is to build a loan portfolio optimization model based on risk analysis. An optimization decision model for the loan portfolio rate of return with Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) constraints reflects the bank's risk tolerance and gives direct control over the bank's potential loss. The paper analyzes a general risk management model applied to portfolio problems with VaR and CVaR risk measures using the Lagrangian algorithm, and solves this highly difficult problem by matrix operations. The resulting formulation makes it easy to see that the portfolio problem with VaR and CVaR risk measures traces a hyperbola in mean-standard deviation space, and the proposed method is easy to calculate.
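
    On a finite loss sample, VaR and CVaR can be computed directly from order statistics (the empirical counterpart of the Rockafellar-Uryasev characterization). This is a sketch of the risk measures themselves, not of the paper's Lagrangian portfolio formulation:

```python
import math

def var_cvar(losses, alpha=0.95):
    """Empirical VaR (the alpha-quantile of loss) and CVaR (the mean loss
    in the tail at or beyond VaR) of a sample of portfolio losses."""
    xs = sorted(losses)
    k = math.ceil(alpha * len(xs))   # order statistic marking the tail
    tail = xs[k - 1:]
    return xs[k - 1], sum(tail) / len(tail)
```

    Because CVaR averages the tail beyond VaR, it is always at least as large as VaR, which is why a CVaR constraint bounds the potential loss more conservatively.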

  5. Optimization Model for Machinery Selection of Multi-Crop Farms in Elsuki Agricultural Scheme

    Directory of Open Access Journals (Sweden)

    Mysara Ahmed Mohamed

    2017-07-01

    Full Text Available A machinery optimization model was developed to aid decision-makers and farm machinery managers in determining the optimal number of tractors, scheduling agricultural operations, and minimizing total machinery costs. For model verification, validation, and application, input data were collected from primary and secondary sources at the Elsuki agricultural scheme for two seasons, 2011-2012 and 2013-2014. Model verification was performed by comparing the number of tractors at the Elsuki agricultural scheme for season 2011-2012 with the number estimated by the model. The model succeeded in reducing the number of tractors and the total operation cost by 23%. The effect of the optimization model on the elements of direct cost saving indicated that the highest cost saving is achieved for depreciation and for repair and maintenance (23%) and the minimum cost saving for fuel (22%). Sensitivity analysis of changes in the model inputs (cultivated area and total operation costs) showed that increasing the total operation cost by 10% decreased the optimized number of tractors by 23% and the total cost of operations by 23%, while increasing the cultivated area by 10% decreased the optimized number of tractors by 12% and the total cost of operations by 12%, from 16,669,206 SDG (1,111,280 $) to 14,636,376 SDG (975,758 $). For the combined effect of area and total operation cost, the maximum number of tractors decreased by 12% and the total cost of operations also decreased by 12%. It is recommended to apply the optimization model as a prerequisite for improving machinery management when implementing machinery scheduling.
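
    The core sizing calculation in such a model is the number of tractors needed to cover an area within an operation's time window. The field-capacity formula below is the standard one; all numeric values in the example are hypothetical, not Elsuki data:

```python
import math

def tractors_needed(area_ha, speed_kmh, width_m, field_eff, hours_available):
    """Minimum whole number of tractors to finish an operation on time.

    Effective field capacity (ha/h) = speed [km/h] * width [m] * efficiency / 10.
    """
    efc = speed_kmh * width_m * field_eff / 10.0
    return math.ceil(area_ha / (efc * hours_available))
```

    An optimization model of the kind described repeats this calculation per operation and season, then schedules operations so that tractors are shared rather than duplicated.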

  6. Research on the decision-making model of land-use spatial optimization

    Science.gov (United States)

    He, Jianhua; Yu, Yan; Liu, Yanfang; Liang, Fei; Cai, Yuqiu

    2009-10-01

    Using the optimized landscape pattern and land-use structure as constraints on the cellular automata (CA) simulation, a decision-making model for land-use spatial optimization is established that couples the landscape pattern model with cellular automata, realizing quantitative and spatial land-use optimization simultaneously. Huangpi district is taken as a case study to verify the rationality of the model.

  7. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    Directory of Open Access Journals (Sweden)

    Yong Xia

    2015-01-01

    Full Text Available Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge for traditional CPU-based computing environments, which either cannot meet the full computational demand or are not easily available due to expensive costs. GPUs as a parallel computing environment therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single-cell model (a system of ordinary differential equations) and the diffusion term of the monodomain model (a partial differential equation). This decoupling enabled the realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economical and powerful platform for 3D whole-heart simulations.
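
    The ODE/PDE decoupling described above is an operator-splitting step: advance each node with its local cell model, then apply a finite-difference diffusion update. Below is a serial 1-D toy version with an invented trivial reaction term; the paper's model is 3-D, uses a detailed atrial cell model, and runs one GPU thread per node. Stability of the explicit diffusion step requires dt·D/dx² ≤ 1/2.

```python
def step(v, dt, dx, D, reaction):
    """One explicit operator-splitting step on a 1-D fibre with no-flux ends:
    pointwise cell update (ODE part) followed by diffusion (PDE part)."""
    v = [u + dt * reaction(u) for u in v]            # single-cell model part
    n = len(v)
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else v[i]           # mirror at the boundaries
        right = v[i + 1] if i < n - 1 else v[i]
        out.append(v[i] + dt * D * (left - 2.0 * v[i] + right) / dx ** 2)
    return out
```

    On the GPU, each iteration of the per-node loops becomes an independent kernel thread, which is what makes the decoupling pay off.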

  8. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  9. Pumping Optimization Model for Pump and Treat Systems - 15091

    Energy Technology Data Exchange (ETDEWEB)

    Baker, S.; Ivarson, Kristine A.; Karanovic, M.; Miller, Charles W.; Tonkin, M.

    2015-01-15

    Pump and Treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in Eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provides sophisticated simulation capabilities but requires many hours to calculate the results of each simulation considered. Because many simulations are required to optimize system performance, a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to those of the 3D model, allowing it to be used for comparative remedy analyses; any potential system modification identified with the 2D version is then verified by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system, the Pumping Optimization Model (POM), to simplify the analysis of multiple simulations. It allows rapid turnaround through a graphical user interface that (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results to evaluate performance improvement. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours, and multiple simulations can be compared side by side. The POM utilizes standard office computing equipment and established groundwater modeling software.

  10. A Gas Scheduling Optimization Model for Steel Enterprises

    Directory of Open Access Journals (Sweden)

    Niu Honghai

    2017-01-01

    Full Text Available To address the scheduling problems of steel enterprises, this research designs a gas scheduling optimization model based on rules and priorities. Considering the different features and process changes of the gas units in actual production, a calculation model of process state and soft-sensed gas consumption, together with the scheduling optimization rules, is proposed to provide dispatchers with the real-time gas-usage status of each process, helping them schedule in a timely manner and reduce gas volume fluctuations. In addition, operation forewarning and alarm functions are provided to avoid abnormal situations in scheduling. The model has produced very good results in actual scheduling and ensures the safety of the gas pipe-network system and the stability of production.

  11. Linear Model for Optimal Distributed Generation Size Predication

    Directory of Open Access Journals (Sweden)

    Ahmed Al Ameri

    2017-01-01

    Full Text Available This article presents a linear model for predicting the optimal size of Distributed Generation (DG) that minimizes power loss. The method rests on the strong coupling between active power and voltage angle, and between reactive power and voltage magnitude. The paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. Linearizing the complex model gave good results while substantially reducing the required processing time. The acceptable accuracy, with less time and memory required, can help the grid operator assess power systems integrating large-scale distributed generation.
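
    The kind of search such a predictive model accelerates can be sketched with an assumed quadratic loss-versus-injection curve: losses fall as local DG offsets imported power, then rise again once the DG exports through the feeder. The coefficients below are invented for illustration and do not come from any IEEE test system:

```python
def total_loss(p_dg, a=2e-5, b=-0.04, c=30.0):
    """Illustrative grid losses (kW) as a quadratic function of DG size (kW)."""
    return a * p_dg ** 2 + b * p_dg + c

def optimal_dg_size(candidate_sizes):
    """Candidate DG size with the smallest predicted total loss."""
    return min(candidate_sizes, key=total_loss)
```

    For this curve the vertex -b/(2a) = 1000 kW is the loss-minimizing size, which a coarse scan recovers; the paper's contribution is predicting this optimum from a linearized model instead of re-running full load-flow calculations.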

  12. Learning with Admixture: Modeling, Optimization, and Applications in Population Genetics

    DEFF Research Database (Denmark)

    Cheng, Jade Yu

    2016-01-01

    the foundation for both CoalHMM and Ohana. Optimization modeling has been the main theme throughout my PhD, and it will continue to shape my work for the years to come. The algorithms and software I developed to study historical admixture and population evolution fall into a larger family of machine learning...... geneticists strive to establish working solutions to extract information from massive volumes of biological data. The steep increase in the quantity and quality of genomic data during the past decades provides a unique opportunity but also calls for new and improved algorithms and software to cope...... including population splits, effective population sizes, gene flow, etc. Since joining the CoalHMM development team in 2014, I have mainly contributed in two directions: 1) improving optimizations through heuristic-based evolutionary algorithms and 2) modeling of historical admixture events. Ohana, meaning...

  13. Sustainable logistics and transportation optimization models and algorithms

    CERN Document Server

    Gakis, Konstantinos; Pardalos, Panos

    2017-01-01

    Focused on the logistics and transportation operations within a supply chain, this book brings together the latest models, algorithms, and optimization possibilities. Logistics and transportation problems are examined within a sustainability perspective to offer a comprehensive assessment of environmental, social, ethical, and economic performance measures. Featured models, techniques, and algorithms may be used to construct policies on alternative transportation modes and technologies, green logistics, and incentives by the incorporation of environmental, economic, and social measures. Researchers, professionals, and graduate students in urban regional planning, logistics, transport systems, optimization, supply chain management, business administration, information science, mathematics, and industrial and systems engineering will find the real life and interdisciplinary issues presented in this book informative and useful.

  14. Autonomous guided vehicles methods and models for optimal path planning

    CERN Document Server

    Fazlollahtabar, Hamed

    2015-01-01

      This book provides readers with extensive information on path planning optimization for both single and multiple Autonomous Guided Vehicles (AGVs), and discusses practical issues involved in advanced industrial applications of AGVs. After discussing previously published research in the field and highlighting the current gaps, it introduces new models developed by the authors with the goal of reducing costs and increasing productivity and effectiveness in the manufacturing industry. The new models address the increasing complexity of manufacturing networks, due for example to the adoption of flexible manufacturing systems that involve automated material handling systems, robots, numerically controlled machine tools, and automated inspection stations, while also considering the uncertainty and stochastic nature of automated equipment such as AGVs. The book discusses and provides solutions to important issues concerning the use of AGVs in the manufacturing industry, including material flow optimization with A...

  15. The PDB_REDO server for macromolecular structure model optimization

    Directory of Open Access Journals (Sweden)

    Robbie P. Joosten

    2014-07-01

    Full Text Available The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395–1412]. The PDB_REDO procedure aims for `constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.

  16. Modeling marine surface microplastic transport to assess optimal removal locations

    OpenAIRE

    Sherman, Peter; Van Sebille, Erik

    2016-01-01

    Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations and scaled to a large data set of observations on microplastic from surface trawls was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal of assessing the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic and reducing the ...

  17. Geometry Based Design Automation : Applied to Aircraft Modelling and Optimization

    OpenAIRE

    Amadori, Kristian

    2012-01-01

    Product development processes are continuously challenged by demands for increased efficiency. As engineering products become more and more complex, efficient tools and methods for integrated and automated design are needed throughout the development process. Multidisciplinary Design Optimization (MDO) is one promising technique that has the potential to drastically improve concurrent design. MDO frameworks combine several disciplinary models with the aim of gaining a holistic perspective of ...

  18. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    As the correct selection of partners in the supply chain of agricultural enterprises becomes more and more important, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to address the problem of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of agricultural supply chain. Secondly, a heuristic met...

  19. Layout optimization of DRAM cells using rigorous simulation model for NTD

    Science.gov (United States)

    Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe

    2014-03-01

    DRAM chip space is mainly determined by the size of the memory cell array patterns, which consist of periodic memory cell features and the edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational lithography, such as source mask optimization (SMO) to find the optimal off-axis illumination, and optical proximity correction (OPC) combined with model-based SRAF placement, are applied to print patterns on target. For 20nm memory cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, which allows only two spectral beams to interfere. We will show how to analytically derive the only valid geometrically limited source. Another consequence of the two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes an array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to the non-periodic periphery, so it combines the most critical pitch and the highest susceptibility to defocus. The above challenges make layout correction a complex optimization task, demanding a layout optimization that finds a solution with optimal process stability, taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF) and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features. The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally due to negative resist profile taper angles that perturb CD at bottom characterization by

  20. Optimizing multi-pinhole SPECT geometries using an analytical model

    International Nuclear Information System (INIS)

    Rentmeester, M C M; Have, F van der; Beekman, F J

    2007-01-01

    State-of-the-art multi-pinhole SPECT devices allow for sub-mm resolution imaging of radio-molecule distributions in small laboratory animals. The optimization of multi-pinhole and detector geometries using simulations based on ray-tracing or Monte Carlo algorithms is time-consuming, particularly because many system parameters need to be varied. As an efficient alternative we develop a continuous analytical model of a pinhole SPECT system with a stationary detector set-up, which we apply to focused imaging of a mouse. The model assumes that the multi-pinhole collimator and the detector both have the shape of a spherical layer, and uses analytical expressions for effective pinhole diameters, sensitivity and spatial resolution. For fixed fields-of-view, a pinhole-diameter adapting feedback loop allows for the comparison of the system resolution of different systems at equal system sensitivity, and vice versa. The model predicts that (i) for optimal resolution or sensitivity the collimator layer with pinholes should be placed as closely as possible around the animal given a fixed detector layer, (ii) with high-resolution detectors a resolution improvement up to 31% can be achieved compared to optimized systems, (iii) high-resolution detectors can be placed close to the collimator without significant resolution losses, (iv) interestingly, systems with a physical pinhole diameter of 0 mm can have an excellent resolution when high-resolution detectors are used
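The analytical approach rests on standard pinhole-imaging relations; the expressions below are the common textbook forms (the paper's effective-diameter and sensitivity expressions may differ in detail), with b the source-to-pinhole distance, l the pinhole-to-detector distance, M = l/b the magnification, d_e the effective pinhole diameter, R_int the intrinsic detector resolution and θ the photon incidence angle:

```latex
R_{\mathrm{coll}} = d_e\,\frac{b + l}{l}, \qquad
R_{\mathrm{sys}} = \sqrt{R_{\mathrm{coll}}^{2} + \left(\frac{R_{\mathrm{int}}}{M}\right)^{2}}, \qquad
g \approx \frac{d_e^{2}\,\sin^{3}\theta}{16\,b^{2}}.
```

The R_int/M term is consistent with predictions (ii) and (iii): a high-resolution detector shrinks that term, and moving the detector closer reduces M and thus amplifies R_int/M, which is tolerable only when R_int is small. Prediction (iv) is plausible because the effective diameter d_e stays positive through edge penetration even when the physical aperture is 0 mm.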

  1. Robust Optimization Model for Production Planning Problem under Uncertainty

    Directory of Open Access Journals (Sweden)

    Pembe GÜÇLÜ

    2017-01-01

    Full Text Available Business conditions change very quickly. Taking into account the uncertainty engendered by these changes has become almost a rule in planning. Robust optimization techniques, which are methods of handling uncertainty, produce results that are less sensitive to changing conditions. Production planning, in its most basic definition, is deciding which product to produce, when, and in what quantity. The modeling and solution of production planning problems change depending on the structure of the production processes, parameters and variables. In this paper, the aim is to generate and apply a scenario-based robust optimization model for a capacitated two-stage multi-product production planning problem under parameter and demand uncertainty. For this purpose, the production planning problem of a textile company that operates in İzmir has been modeled and solved, and the results of the deterministic scenarios and the robust method have been compared. The robust method provided a production plan that has a higher cost but will remain close to feasible and optimal for most of the potential future scenarios.
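A minimal scenario-based robust counterpart can be written as a linear program that minimizes the worst-case cost over demand scenarios. The single-product instance below uses invented data, not the textile company's:

```python
import numpy as np
from scipy.optimize import linprog

# One product, one period, three demand scenarios (all numbers invented).
# Choose production x to minimize the worst-case cost, where unmet demand
# in scenario s incurs a shortage penalty.
demands = [80.0, 100.0, 130.0]
c_prod, c_short, capacity = 2.0, 5.0, 120.0

# Decision vector: [x, t, u_1, u_2, u_3]; t = worst-case cost, u_s = shortage.
n = 2 + len(demands)
c = np.zeros(n); c[1] = 1.0                      # minimize t
A_ub, b_ub = [], []
for s, d in enumerate(demands):
    row = np.zeros(n); row[0] = -1.0; row[2 + s] = -1.0
    A_ub.append(row); b_ub.append(-d)            # x + u_s >= d_s
    row = np.zeros(n); row[0] = c_prod; row[1] = -1.0; row[2 + s] = c_short
    A_ub.append(row); b_ub.append(0.0)           # c_prod*x + c_short*u_s <= t
bounds = [(0.0, capacity), (None, None)] + [(0.0, None)] * len(demands)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_opt, worst_cost = res.x[0], res.fun
```

The solution produces at full capacity here because the shortage penalty in the worst scenario dominates the production cost; the multi-stage multi-product model in the paper has the same min-max structure with more variables.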

  2. Influence of model errors in optimal sensor placement

    Science.gov (United States)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placements for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error, as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows higher modes to be better estimated when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in sensor position when uncertainties occur.

  3. Linear versus quadratic portfolio optimization model with transaction cost

    Science.gov (United States)

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    Optimization models have become one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the best model that can fulfill their investment goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has been proven to be significant and popular. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction cost when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that the results of this paper will effectively justify the advantage of one model over the other and shed some light in the quest to find the best decision-making tool in investment for individual investors.
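The linear Maximin model can be written as a maximin linear program. The sketch below adds a simple proportional transaction-cost term and uses invented returns rather than the Bursa Malaysia data:

```python
import numpy as np
from scipy.optimize import linprog

# Toy returns for 3 assets over 4 periods (invented) and a proportional
# transaction cost on turnover relative to current holdings w0.
returns = np.array([[ 0.020, -0.010,  0.030],
                    [ 0.010,  0.020, -0.020],
                    [ 0.030,  0.010,  0.000],
                    [-0.020,  0.030,  0.010]])
w0 = np.full(3, 1.0 / 3.0)          # current holdings
c_tc = 0.001                        # cost per unit of turnover

T, n = returns.shape
# Decision vector: [w_1..w_n, y_1..y_n, z]; y_i >= |w_i - w0_i| and z is the
# worst-period net return. Maximize z s.t. z <= r_t . w - c_tc * sum(y).
nv = 2 * n + 1
c = np.zeros(nv); c[-1] = -1.0                       # maximize z
A_ub, b_ub = [], []
for t in range(T):
    row = np.zeros(nv)
    row[:n] = -returns[t]; row[n:2 * n] = c_tc; row[-1] = 1.0
    A_ub.append(row); b_ub.append(0.0)
for i in range(n):                                   # linearize |w_i - w0_i|
    row = np.zeros(nv); row[i] = 1.0; row[n + i] = -1.0
    A_ub.append(row); b_ub.append(w0[i])
    row = np.zeros(nv); row[i] = -1.0; row[n + i] = -1.0
    A_ub.append(row); b_ub.append(-w0[i])
A_eq = [np.zeros(nv)]; A_eq[0][:n] = 1.0             # fully invested
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * n + [(0, None)] * n + [(None, None)])
weights, z_opt = res.x[:n], -res.fun
```

The quadratic Markowitz counterpart replaces the maximin objective with a mean-variance trade-off and needs a quadratic solver instead of `linprog`.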

  4. Determining the optimal number of Kanban in multi-products supply chain system

    Science.gov (United States)

    Widyadana, G. A.; Wee, H. M.; Chang, Jer-Yuan

    2010-02-01

    Kanban, a key element of the just-in-time system, is a re-order card or signboard giving an instruction or triggering the pull system to manufacture or supply a component based on the actual usage of material. There are two types of Kanban: production Kanban and withdrawal Kanban. This study uses optimal and meta-heuristic methods to determine the Kanban quantity and withdrawal lot sizes in a supply chain system. Although the mixed integer programming (MIP) method gives an optimal solution, it is not time efficient. For this reason, meta-heuristic methods are suggested. In this study, a genetic algorithm (GA) and a hybrid of genetic algorithm and simulated annealing (GASA) are used. The study compares the performance of GA and GASA with that of the optimal MIP method. The given problems show that both GA and GASA result in near-optimal solutions, and they outdo the optimal method in terms of run time. In addition, the GASA heuristic method gives a better performance than the GA heuristic method.
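A minimal genetic algorithm for choosing Kanban quantities might look as follows. The two-product cost function and all parameters are invented, and a brute-force search over the small grid plays the role of the optimal benchmark that the study obtains with MIP:

```python
import random

random.seed(7)

# Toy cost for two products: holding cost per Kanban container versus a
# shortage penalty when container capacity falls short of demand.
# All parameters are invented for illustration.
DEMAND   = (140.0, 90.0)    # units per period
CAPACITY = (10.0, 8.0)      # units per container
HOLD     = (2.0, 3.0)       # holding cost per container
SHORT    = (12.0, 15.0)     # shortage cost per unmet unit
K_MAX = 25                  # search range: 1..K_MAX Kanbans per product

def cost(k):
    return sum(HOLD[i] * k[i] + SHORT[i] * max(0.0, DEMAND[i] - CAPACITY[i] * k[i])
               for i in range(2))

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if cost(a) < cost(b) else b

pop = [(random.randint(1, K_MAX), random.randint(1, K_MAX)) for _ in range(30)]
for _ in range(60):
    pop.sort(key=cost)
    nxt = pop[:2]                               # elitism: keep the two best
    while len(nxt) < len(pop):
        p1, p2 = tournament(pop), tournament(pop)
        child = (p1[0], p2[1])                  # one-point crossover
        if random.random() < 0.3:               # +/-1 mutation on one gene
            i = random.randrange(2)
            child = tuple(min(K_MAX, max(1, g + random.choice((-1, 1))))
                          if j == i else g for j, g in enumerate(child))
        nxt.append(child)
    pop = nxt
ga_best = min(pop, key=cost)

# Brute force over the small grid as the optimality check.
brute_best = min(((a, b) for a in range(1, K_MAX + 1) for b in range(1, K_MAX + 1)),
                 key=cost)
```

A GASA hybrid would additionally accept worse children with a temperature-controlled probability; on this toy landscape the plain GA already lands near the brute-force optimum.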

  5. A feasibility investigation for modeling and optimization of temperature in bone drilling using fuzzy logic and Taguchi optimization methodology.

    Science.gov (United States)

    Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar

    2014-11-01

    Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure increases the chances of thermal invasion of bone, which can cause thermal osteonecrosis, resulting in an increase in healing time or a reduction in the stability and strength of the fixation. Therefore, drilling of bone with minimum temperature is a major challenge for orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments have been conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate and cutting speed). Optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method. The effect of individual cutting parameters on the temperature produced is evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership functions predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determining the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for online condition monitoring of the process. © IMechE 2014.
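The Taguchi step can be sketched as computing the smaller-the-better signal-to-noise ratio per run and picking, for each factor, the level with the highest mean S/N. The factorial design and temperatures below are invented, not the paper's L25 data:

```python
import numpy as np

# Toy 3x3 full-factorial drilling experiment (invented temperatures, deg C):
# factor A (e.g. cutting-speed level) and factor B (e.g. feed-rate level),
# one measurement per run.
levels = 3
runs = [(a, b) for a in range(levels) for b in range(levels)]
a_eff = [8.0, 4.0, 0.0]          # invented main effects on temperature
b_eff = [0.0, 2.0, 6.0]
temps = np.array([40.0 + a_eff[a] + b_eff[b] for a, b in runs])

# Taguchi smaller-the-better S/N ratio (single replicate per run)
sn = -10.0 * np.log10(temps ** 2)

# Mean S/N per factor level; the best level maximizes S/N.
sn_a = [sn[[i for i, (a, _) in enumerate(runs) if a == lv]].mean() for lv in range(levels)]
sn_b = [sn[[i for i, (_, b) in enumerate(runs) if b == lv]].mean() for lv in range(levels)]
best_a = int(np.argmax(sn_a))
best_b = int(np.argmax(sn_b))
```

This level-selection step is what yields a compact recommendation such as "A3B5C1" in the paper; ANOVA on the same per-level means then apportions each factor's contribution.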

  6. Oyster Creek cycle 10 nodal model parameter optimization study using PSMS

    International Nuclear Information System (INIS)

    Dougher, J.D.

    1987-01-01

    The power shape monitoring system (PSMS) is an on-line core monitoring system that uses a three-dimensional nodal code (NODE-B) to perform nodal power calculations and compute thermal margins. The PSMS contains a parameter optimization function that improves the ability of NODE-B to accurately monitor core power distributions. This function iterates on the model normalization parameters (albedos and mixing factors) to obtain the best agreement between predicted and measured traversing in-core probe (TIP) readings on a statepoint-by-statepoint basis. Following several statepoint optimization runs, an average set of optimized normalization parameters can be determined and implemented into the current or a subsequent cycle core model for on-line core monitoring. A statistical analysis of 19 high-power steady-state statepoints throughout Oyster Creek cycle 10 operation has shown consistently poor virgin model performance. The normalization parameters used in the cycle 10 NODE-B model were based on a cycle 8 study, which evaluated only Exxon fuel types. The introduction of General Electric (GE) fuel into cycle 10 (172 assemblies) was a significant fuel/core design change that could have altered the optimum set of normalization parameters. Based on the need to evaluate a potential change in the model normalization parameters for cycle 11, and in an attempt to account for the poor cycle 10 model performance, a parameter optimization study was performed

  7. Application of numerical optimization techniques to control system design for nonlinear dynamic models of aircraft

    Science.gov (United States)

    Lan, C. Edward; Ge, Fuying

    1989-01-01

    Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated, and the dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to the desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers that satisfy specifications on flying qualities and control systems that prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.

  8. STOCHASTIC MODELING OF OPTIMIZED CREDIT STRATEGY OF A DISTRIBUTING COMPANY ON THE PHARMACEUTICAL MARKET

    Directory of Open Access Journals (Sweden)

    M. Boychuk

    2015-10-01

    Full Text Available The activity of distribution companies is multifaceted. They establish contacts with producers and consumers, determine the range of prices of medicines, run promotions, hold stocks of pharmaceuticals and take risks in their further selling. Their internal problems are complicated by the political crisis in the country, the decreased purchasing power of the national currency, and the rise in interest rates on loans. Therefore, the use of stochastic models of dynamic systems for research into optimizing the management of pharmaceutical distribution companies, taking into account credit payments, is of great current interest. A stochastic model of the optimal credit strategy of a pharmaceutical distributor in the market of pharmaceutical products has been constructed in the article, considering credit payments and income limitations. From the mathematical point of view, the obtained problem is one of stochastic optimal control, where the amount of the monetary credit is the control and the amount of pharmaceutical product is the solution curve. The model allows the identification of the optimal cash loan and the corresponding optimal quantity of pharmaceutical product that comply with the differential model of the existing quantity of pharmaceutical products in Itô form; the condition of the existing initial stock of pharmaceutical products; the limitation on the amount of credit and the profit received from product selling; and maximize the average integral income. The research of the stochastic optimal control problem involves the construction of the left process of crediting with determination of the shift point of that control, the choice of the right crediting process and the formation of the optimal credit process. It was found that the optimal control of the credit amount and the shift point of that control are determined values and do not depend on the coefficient in the Wiener process, and the optimal trajectory of the amount of

  9. Determining optimal interconnection capacity on the basis of hourly demand and supply functions of electricity

    International Nuclear Information System (INIS)

    Keppler, Jan Horst; Meunier, William; Coquentin, Alexandre

    2017-01-01

    Interconnections for cross-border electricity flows are at the heart of the project to create a common European electricity market. At the same time, the increase in production from variable renewables, clustered during a limited number of hours, reduces the availability of existing transport infrastructures. This calls for higher levels of optimal interconnection capacity than in the past. In complement to existing scenario-building exercises such as the TYNDP, which respond to the challenge of determining optimal levels of infrastructure provision, the present paper proposes a new empirically based methodology for performing cost-benefit analysis to determine optimal interconnection capacity, using the French-German cross-border trade as an example. Using a very fine dataset of hourly supply and demand curves (aggregated auction curves) for the year 2014 from the EPEX Spot market, it constructs linearized net export curves (NEC) and net import demand curves (NIDC) for both countries. This allows assessing, hour by hour, the welfare impacts of incremental increases in interconnection capacity. Summing these welfare increases over the 8 760 hours of the year provides the annual total for each step increase of interconnection capacity. Confronting the welfare benefits with the annual cost of augmenting interconnection capacity indicates the socially optimal increase in interconnection capacity between France and Germany on the basis of empirical market micro-data. (authors)
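With linearized hourly curves, the welfare gain of a capacity increment is the area between the importer's demand curve and the exporter's supply curve, up to the capacity limit or the point where the curves cross. A toy single-hour version with invented coefficients (in the paper this quantity is summed over all 8 760 hours) might look like:

```python
# Linearized hourly curves (invented, eur/MWh): the exporter's net export
# curve price rises with quantity, the importer's net import demand curve
# falls with quantity.
a_ex, b_ex = 30.0, 0.010   # exporter: p = a_ex + b_ex * q
a_im, b_im = 50.0, 0.020   # importer: p = a_im - b_im * q

def hourly_welfare(K):
    """Welfare gain (eur/h) of interconnection capacity K (MW): the area
    between the two price curves up to min(K, crossing quantity)."""
    q_star = (a_im - a_ex) / (b_ex + b_im)      # quantity where curves cross
    q = min(K, q_star)
    return (a_im - a_ex) * q - 0.5 * (b_ex + b_im) * q * q

w500, w1000 = hourly_welfare(500.0), hourly_welfare(1000.0)
```

The welfare gain rises with capacity but flattens once the capacity exceeds the crossing quantity, which is why the marginal value of interconnection eventually falls below its annualized cost.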

  10. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

    Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent cost factors and build a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and then a questionnaire survey was employed to collect professionals’ views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified, including “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material Increment Index” and “Depreciation Amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of the observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardization design to reduce capital cost. Hence, collaborative management is necessary for the cost management of prefabrication. Innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimized the capital cost and improved the cost performance through providing an evaluation and optimization model, which helps managers to

  11. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

    In areas with large shares of Combined Heat and Power (CHP) production, significant introduction of intermittent renewable power production may lead to an increased number of operational constraints. As the operation pattern of each utility plant is determined by optimization of economics......, possibilities for decoupling production constraints may be valuable. The introduction of heat pumps in the district heating network may provide this ability. In order to evaluate whether the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the used...... energy technologies. In the paper, three frequently used operation optimization methods are examined with respect to their impact on the operation management of the combined technologies. One of the investigated approaches utilises linear programming for optimisation, one uses linear programming with binary...

  12. Optimizing Markovian modeling of chaotic systems with recurrent neural networks

    International Nuclear Information System (INIS)

    Cechin, Adelmo L.; Pechmann, Denise R.; Oliveira, Luiz P.L. de

    2008-01-01

    In this paper, we propose a methodology for optimizing the modeling of a one-dimensional chaotic time series with a Markov chain. The model is extracted from a recurrent neural network trained on the attractor reconstructed from the data set. Each state of the obtained Markov chain is a region of the reconstructed state space where the dynamics is approximated by a specific piecewise linear map, obtained from the network. The Markov chain represents the dynamics of the time series in its statistical essence. An application to a time series resulting from the Lorenz system is included
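The state-partitioning idea can be sketched directly on a chaotic series: discretize the state space into regions and count transitions between them to obtain a Markov chain. The sketch below partitions a logistic-map series by simple binning, rather than extracting the regions from a trained RNN as the paper does:

```python
import numpy as np

# Chaotic source: the logistic map at r = 4 (invented stand-in for the
# paper's Lorenz-derived series).
x = np.empty(20000)
x[0] = 0.3
for t in range(len(x) - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Partition [0, 1) into bins and count transitions between bins.
nbins = 10
states = np.minimum((x * nbins).astype(int), nbins - 1)
P = np.zeros((nbins, nbins))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1.0
P /= P.sum(axis=1, keepdims=True)      # row-stochastic transition matrix

# Stationary distribution by power iteration: the chain's "statistical
# essence" of the chaotic dynamics.
pi = np.full(nbins, 1.0 / nbins)
for _ in range(2000):
    pi = pi @ P
```

Each row of `P` plays the role of one state of the chain; in the paper's method, each state additionally carries a piecewise linear map extracted from the network.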

  13. Utilization-Based Modeling and Optimization for Cognitive Radio Networks

    Science.gov (United States)

    Liu, Yanbing; Huang, Jun; Liu, Zhangxiong

    The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues for the primary (licensed) users and cognitive (unlicensed) users. According to the Markov process, the system state equations are derived and an optimization model for the system is proposed. Next, the system performance is evaluated by calculations which show the rationality of our system model. Furthermore, discussions of different parameters for the system are presented based on the experimental results.

  14. Optimization model of energy mix taking into account the environmental impact

    International Nuclear Information System (INIS)

    Gruenwald, O.; Oprea, D.

    2012-01-01

    At present, the energy system in the Czech Republic needs to resolve some important issues regarding limited fossil resources, greater efficiency in the production of electrical energy and reducing emission levels of pollutants. These problems can be resolved only by formulating and implementing an energy mix that meets these conditions: rational, reliable, sustainable and competitive. The aim of this article is to find a new way of determining an optimal mix for the energy system in the Czech Republic. To achieve this aim, a linear optimization model comprising several economic, environmental and technical aspects will be applied. (Authors)

  15. Transfer prices assignment with integrated production and marketing optimization models

    Directory of Open Access Journals (Sweden)

    Enrique Parra

    2018-04-01

    Full Text Available Purpose: In decentralized organizations (today a great majority of the large multinational groups), much of the decision-making power lies in their individual business units (BUs). In these cases, the management control system (MCS) uses transfer prices to coordinate the actions of the BUs and to evaluate their performance, with the goal of guaranteeing the optimum for the whole corporation. The purpose of the investigation is to design transfer prices that suit this goal. Design/methodology/approach: Considering the results of whole-company supply chain optimization models (in the presence of seasonality of demand), the question is how to design a mechanism that creates optimal incentives for the managers of each business unit to drive the corporation to optimal performance. Mathematical programming models are used as a starting point. Findings: Different transfer price computation methods are introduced in this paper for decentralised organizations with two divisions (production and marketing). The methods take into account the results of the solution of the whole-company supply chain optimization model, if one exists, and can be adapted to the type of information available in the company. The focus is mainly on transport cost assignment. Practical implications: Using the methods proposed in this paper, a decentralized corporation can implement more accurate transfer prices to drive the whole organization to globally optimal performance. Originality/value: The methods proposed are a new contribution to the literature on transfer prices, with special emphasis on practical and easy implementation in a modern corporation with several business units and high seasonality of demand. The methods are also very flexible and can be tuned depending on the type of information available in the company.

  16. Optimal Retail Price Model for Partial Consignment to Multiple Retailers

    Directory of Open Access Journals (Sweden)

    Po-Yu Chen

    2017-01-01

    Full Text Available This paper investigates the product pricing decision-making problem under a consignment stock policy in a two-level supply chain composed of one supplier and multiple retailers. The effects of the supplier’s wholesale prices and its partial absorption of inventory costs on the retail prices of retailers with different market shares are investigated. In the partial product consignment model this paper proposes, the supplier and the retailers each absorb part of the inventory costs. The model also provides general solutions for complete product consignment and for the traditional policy that adopts no product consignment. In other words, both the complete consignment and non-consignment models are extensions (i.e., special cases) of the proposed model. Research results indicate that the optimal retail price must be between 1/2 (50%) and 2/3 (66.67%) times the upper limit of the gross profit. This study also explored the results and the influence of parameter variations on the optimal retail price in the model.

  17. Web malware spread modelling and optimal control strategies

    Science.gov (United States)

    Liu, Wanping; Zhong, Shouming

    2017-02-01

    The popularity of the Web fuels the growth of web threats. Formulating mathematical models for accurate prediction of malicious propagation over networks is of great importance. The aim of this paper is to understand the propagation mechanisms of web malware and the impact of human intervention on the spread of malicious hyperlinks. Considering the characteristics of web malware, a new differential epidemic model which extends the traditional SIR model by adding a delitescent compartment is proposed to address the spreading behavior of malicious links over networks. The spreading threshold of the model system is calculated, and the dynamics of the model are theoretically analyzed. Moreover, optimal control theory is employed to study malware immunization strategies, aiming to keep the total economic loss of security investment and infection loss as low as possible. The existence and uniqueness of the results concerning the optimality system are confirmed. Finally, numerical simulations show that the spread of malicious links can be controlled effectively with a proper control strategy and specific parameter choices.
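    The compartmental structure described above can be sketched numerically. Below is a minimal Euler integration of an SIR model extended with a delitescent (latent) compartment; the parameter values and initial state are illustrative assumptions, not values from the paper.

    ```python
    def simulate(beta=0.3, sigma=0.2, gamma=0.1, days=200, dt=0.1):
        """Euler integration of S -> D (delitescent) -> I -> R dynamics.

        beta: contact rate with malicious links, sigma: activation rate of
        latent malware, gamma: cleaning rate -- all illustrative values.
        """
        s, d, i, r = 0.99, 0.0, 0.01, 0.0
        for _ in range(int(days / dt)):
            new_latent = beta * s * i * dt   # susceptible nodes pick up malicious links
            activation = sigma * d * dt      # latent malware becomes active
            recovery = gamma * i * dt        # infected hosts are cleaned
            s -= new_latent
            d += new_latent - activation
            i += activation - recovery
            r += recovery
        return s, d, i, r
    ```

    With beta/gamma above the spreading threshold, the susceptible fraction drops noticeably over the run, mirroring the outbreak regime the paper analyzes.
    
    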

  18. Double-Bottom Chaotic Map Particle Swarm Optimization Based on Chi-Square Test to Determine Gene-Gene Interactions

    Science.gov (United States)

    Yang, Cheng-Hong; Chang, Hsueh-Wei

    2014-01-01

    Gene-gene interaction studies focus on the investigation of the association between the single nucleotide polymorphisms (SNPs) of genes for disease susceptibility. Statistical methods are widely used to search for a good model of gene-gene interaction for disease analysis, and previously determined models have successfully explained the associations between SNPs and diseases. However, the huge number of potential combinations of SNP genotypes limits the use of statistical methods for analysing high-order interactions, and finding a feasible high-order model of gene-gene interaction remains a challenge. In this study, an improved particle swarm optimization with double-bottom chaotic maps (DBM-PSO) was applied to assist statistical methods in the analysis of variations associated with disease susceptibility. A large data set was simulated using the published genotype frequencies of 26 SNPs amongst eight genes for breast cancer. Results showed that the proposed DBM-PSO successfully determined two- to six-order models of gene-gene interaction for the risk association with breast cancer (odds ratio > 1.0; P value <0.05). Analysis results supported that the proposed DBM-PSO can identify good models and provide higher chi-square values than conventional PSO. This study indicates that DBM-PSO is a robust and precise algorithm for the determination of gene-gene interaction models for breast cancer. PMID:24895547
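    The chi-square statistic that DBM-PSO uses to score candidate models can be illustrated on a single 2×2 case/control table. The helper below is a generic Pearson chi-square computation, not the authors' code, and the example counts are invented.

    ```python
    def chi_square_2x2(table):
        """Pearson chi-square statistic for a 2x2 case/control contingency table.

        table = [[a, b], [c, d]]: rows are genotype present/absent,
        columns are case/control.  Uses the closed form for 2x2 tables:
        n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
        """
        (a, b), (c, d) = table
        n = a + b + c + d
        return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    ```

    A higher statistic indicates a stronger case/control imbalance for the genotype combination, which is what the swarm's fitness function rewards.
    
    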

  19. Determination of gallic acid with rhodanine by reverse flow injection analysis using simplex optimization.

    Science.gov (United States)

    Phakthong, Wilaiwan; Liawruangrath, Boonsom; Liawruangrath, Saisunee

    2014-12-01

    A reversed flow injection (rFI) system was designed and constructed for gallic acid determination. Gallic acid was determined based on the formation of a chromogen between gallic acid and rhodanine, resulting in a colored product with a λmax at 520 nm. The optimum conditions for determining gallic acid were also investigated. Optimization of the experimental conditions was first carried out based on the so-called univariate method. The conditions obtained were 0.6% (w/v) rhodanine, 70% (v/v) ethanol, 0.9 mol L(-1) NaOH, 2.0 mL min(-1) flow rate, 75 μL injection loop and 600 cm mixing tubing length, respectively. Comparative optimization of the experimental conditions was also carried out by the multivariate (simplex) optimization method. The conditions obtained were 1.2% (w/v) rhodanine, 70% (v/v) ethanol, 1.2 mol L(-1) NaOH, flow rate 2.5 mL min(-1), 75 μL injection loop and 600 cm mixing tubing length, respectively. The optimum conditions obtained by the former method were mostly similar to those obtained by the latter. A linear relationship between peak height and gallic acid concentration was obtained over the range of 0.1-35.0 mg L(-1), with a detection limit of 0.081 mg L(-1). The relative standard deviations were found to be in the range 0.46-1.96% for 1, 10, and 30 mg L(-1) of gallic acid (n=11). The method has the advantages of simplicity, extremely high selectivity and high precision. The proposed method was successfully applied to the determination of gallic acid in longan samples collected in northern Thailand, without interference effects from other common phenolic compounds that might be present in the samples. Copyright © 2014 Elsevier B.V. All rights reserved.
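    The reported linear range and detection limit follow the usual calibration-curve route: fit peak height against concentration by least squares, then take three times the blank standard deviation divided by the slope. A sketch with assumed synthetic readings (the data below are not the paper's):

    ```python
    def fit_line(xs, ys):
        """Ordinary least squares y = m*x + b for a calibration curve."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        return m, my - m * mx

    # Illustrative calibration: peak height vs gallic acid concentration (mg/L).
    conc = [0.1, 1, 5, 10, 20, 35]
    height = [0.6, 5.1, 24.8, 50.3, 99.7, 175.2]   # assumed instrument readings
    slope, intercept = fit_line(conc, height)

    s_blank = 0.12                 # assumed standard deviation of blank readings
    lod = 3 * s_blank / slope      # 3-sigma detection limit, in mg/L
    ```

    With these invented numbers the slope is close to 5 response units per mg/L, giving a detection limit in the same sub-0.1 mg/L regime the paper reports.
    
    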

  20. Optimization of experimental conditions in uranium trace determination using laser time-resolved fluorimetry

    International Nuclear Information System (INIS)

    Baly, L.; Garcia, M.A.

    1996-01-01

    In the present paper, a new sample excitation geometry is presented for uranium trace determination in aqueous solutions by time-resolved laser-induced fluorescence. This new design introduces the laser radiation through the top side of the cell, allowing the use of cells with two quartz sides, which are less expensive than those commonly used in this experimental setup. Optimization of the excitation conditions, temporal discrimination and spectral selection is presented

  1. Using real options to determine optimal funding strategies for CO2 capture, transport and storage projects in the European Union

    International Nuclear Information System (INIS)

    Eckhause, Jeremy; Herold, Johannes

    2014-01-01

    Several projects in the European Union (EU) are currently under development to implement carbon capture, transport and storage (CCS) technology on a large scale and may be subject to public funding under EU support initiatives. These CCS projects may develop any combination of three types of operating levels: pilot, demonstration and full-scale, representing progressively greater electric power generation capability. Several projects have commenced at the demonstration level, with full-scale commercial levels planned for approximately 2020. Taking the perspective of a funding agency, we employ a real options framework for determining an optimal project selection and funding strategy for the development of full-scale CCS plants. Specifically, we formulate and solve a stochastic dynamic program (SDP) for obtaining optimal funding solutions in order to achieve at least one successfully operating full-scale CCS plant by a target year. The model demonstrates the improved risk reduction achieved by employing such a multi-stage competition. We then extend the model to consider two sensitivities: (1) the flexibility to allocate a fixed budget among the time periods and (2) optimizing the budget while specifying each time period's allocation a priori. State size and runtimes of the SDP model are provided. - Highlights: • Projects implementing three different CCS technology types are described. • We obtain projects’ transition probabilities and costs from expert interviews. • We use a multi-stage real options model to obtain optimal funding strategies. • Using this approach, actual decision-makers could reduce risks in CCS development
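    A stochastic dynamic program of this kind can be sketched by backward induction. The toy model below maximizes the probability that at least one funded project succeeds by the target year; the per-period success probability, project cost and budget are illustrative assumptions, and the paper's real SDP tracks much richer project states (pilot/demonstration/full-scale transitions).

    ```python
    from functools import lru_cache

    def best_success_prob(periods=3, budget=6, cost=2, p=0.4):
        """Backward induction: maximize the probability that at least one
        funded CCS project succeeds by the deadline.  A stylized stand-in
        for the paper's SDP; all parameter values are assumptions."""
        @lru_cache(maxsize=None)
        def value(t, b):
            if t == periods:
                return 0.0                    # deadline reached without success
            best = 0.0
            for k in range(b // cost + 1):    # number of projects funded now
                fail_all = (1 - p) ** k
                best = max(best,
                           (1 - fail_all) + fail_all * value(t + 1, b - k * cost))
            return best
        return value(0, budget)
    ```

    In this simplified setting success depends only on the total number of projects funded, so staging does not change the optimum, but the same recursion supports state-dependent costs and probabilities, which is where the multi-stage structure pays off.
    
    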

  2. Optimal replenishment policy for fuzzy inventory model with deteriorating items and allowable shortages under inflationary conditions

    Directory of Open Access Journals (Sweden)

    Jaggi Chandra K.

    2016-01-01

    Full Text Available This study develops an inventory model to determine the ordering policy for deteriorating items with a constant demand rate under inflationary conditions over a fixed planning horizon. Shortages are allowed and are partially backlogged. In today’s wobbling economy, especially for long-term investment, the effects of inflation cannot be disregarded, as uncertainty about future inflation may influence the ordering policy. Therefore, in this paper a fuzzy model is developed that fuzzifies the inflation rate, discount rate, deterioration rate, and backlogging parameter by using triangular fuzzy numbers to represent the uncertainty. For defuzzification, the well-known signed distance method is employed to find the total profit over the planning horizon. The objective of the study is to derive the optimal number of cycles and their optimal length so as to maximize the net present value of the total profit over a fixed planning horizon. The necessary and sufficient conditions for an optimal solution are characterized. An algorithm is proposed to find the optimal solution. Finally, the proposed model has been validated with a numerical example. Sensitivity analysis has been performed to study the impact of various parameters on the optimal solution, and some important managerial implications are presented.
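    The signed distance used for defuzzification has a simple closed form for a triangular fuzzy number (a, b, c): (a + 2b + c)/4. A minimal sketch, with an invented fuzzy inflation rate as the input:

    ```python
    def signed_distance(a, b, c):
        """Yao-Wu signed distance of a triangular fuzzy number (a, b, c)
        from the origin: the integral of the alpha-cut midpoints reduces
        to (a + 2b + c) / 4."""
        return (a + 2 * b + c) / 4.0

    # Illustrative use: a fuzzy annual inflation rate "about 5%".
    r_fuzzy = (0.04, 0.05, 0.07)         # assumed triangular spread
    r_crisp = signed_distance(*r_fuzzy)  # crisp rate fed into the profit function
    ```

    Applying this to each fuzzified parameter turns the fuzzy total-profit expression into a crisp function that the paper's algorithm can then maximize over the number and length of cycles.
    
    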

  3. Pitowsky's Kolmogorovian Models and Super-determinism.

    Science.gov (United States)

    Kellner, Jakob

    2017-01-01

    In an attempt to demonstrate that local hidden variables are mathematically possible, Pitowsky constructed "spin-[Formula: see text] functions" and later "Kolmogorovian models", which employ a nonstandard notion of probability. We describe Pitowsky's analysis and argue (with the benefit of hindsight) that his notion of hidden variables is in fact just super-determinism (and accordingly physically not relevant). Pitowsky's first construction uses the Continuum Hypothesis. Farah and Magidor took this as an indication that at some stage physics might give arguments for or against adopting specific new axioms of set theory. We would rather argue that it supports the opposing view, i.e., the widespread intuition "if you need a non-measurable function, it is physically irrelevant".

  4. Distributionally Robust Return-Risk Optimization Models and Their Applications

    Directory of Open Access Journals (Sweden)

    Li Yang

    2014-01-01

    Full Text Available Based on risk control via conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. It is difficult to solve them directly. Using conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the investment strategy is safe.

  5. Optimization of arterial age prediction models based in pulse wave

    Energy Technology Data Exchange (ETDEWEB)

    Scandurra, A G [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Meschino, G J [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Passoni, L I [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Dai Pra, A L [Engineering Aplied Artificial Intelligence Group, Mathematics Department, Mar del Plata University (Argentina); Introzzi, A R [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Clara, F M [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina)

    2007-11-15

    We propose the detection of early arterial ageing through a prediction model of arterial age, based on the assumed coherence between pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection on the main factors of the Principal Components Analysis. The model performance was tested using the bootstrap error type .632E. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module into the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.

  6. Routing and Scheduling Optimization Model of Sea Transportation

    Science.gov (United States)

    barus, Mika debora br; asyrafy, Habib; nababan, Esther; mawengkang, Herman

    2018-01-01

    This paper examines a routing and scheduling optimization model for sea transportation. One of the issues discussed is the transportation of ships carrying crude oil (tankers), which is distributed to many islands. The consideration is the cost of transportation, which consists of travel costs and the cost of layover at the port. The crude oil to be distributed consists of several types. This paper develops a routing and scheduling model taking into consideration several objective functions and constraints. The mathematical model is formulated to minimize costs based on the total distance traveled by the tanker and to minimize port costs. To make the model more realistic and the calculated cost more accurate, a parameter is added that represents the multiplier by which costs increase as the tanker is loaded with crude oil.

  7. Optimization of arterial age prediction models based in pulse wave

    International Nuclear Information System (INIS)

    Scandurra, A G; Meschino, G J; Passoni, L I; Dai Pra, A L; Introzzi, A R; Clara, F M

    2007-01-01

    We propose the detection of early arterial ageing through a prediction model of arterial age, based on the assumed coherence between pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection on the main factors of the Principal Components Analysis. The model performance was tested using the bootstrap error type .632E. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module into the device for pulse analysis that has been developed by the Bioengineering Laboratory staff

  8. Simulation platform to model, optimize and design wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Iov, F.; Hansen, A.D.; Soerensen, P.; Blaabjerg, F.

    2004-03-01

    This report is a general overview of the results obtained in the project 'Electrical Design and Control. Simulation Platform to Model, Optimize and Design Wind Turbines'. The motivation for this research project is the ever-increasing wind energy penetration into the power network. The project therefore has the main goal of creating a model database in different simulation tools for system optimization of wind turbine systems. Using this model database, a simultaneous optimization of the aerodynamic, mechanical, electrical and control systems over the whole range of wind speeds and grid characteristics can be achieved. The report is structured in six chapters. First, the background of this project and the main goals, as well as the structure of the simulation platform, are given. The main topologies for wind turbines, which have been taken into account during the project, are briefly presented. Then, the simulation tools used in this simulation platform, namely HAWC, DIgSILENT, Saber and Matlab/Simulink, are described. The focus here is on the modelling and simulation time scale aspects. The abilities of these tools are complementary and together they can cover all the modelling aspects of wind turbines, e.g. mechanical loads, power quality, switching, control and grid faults. However, other simulation packages, e.g. PSCAD/EMTDC, can easily be added to the simulation platform. New models and new control algorithms for wind turbine systems have been developed and tested in these tools. All these models are collected in dedicated libraries in Matlab/Simulink as well as in Saber. Some simulation results from the considered tools are presented for MW wind turbines. These simulation results focus on fixed-speed and variable-speed/pitch wind turbines. A good agreement with the real behaviour of these systems is obtained for each simulation tool. These models can easily be extended to model different kinds of wind turbines or large wind farms.

  9. Determination of optimal pollution levels through multiple-criteria decision making: an application to the Spanish electricity sector

    International Nuclear Information System (INIS)

    Linares, P.

    1999-01-01

    An efficient pollution management requires the harmonisation of often conflicting economic and environmental aspects. A compromise has to be found, in which social welfare is maximised. The determination of this social optimum has been attempted with different tools, of which the most correct according to neo-classical economics may be the one based on the economic valuation of the externalities of pollution. However, this approach is still controversial, and few decision makers trust the results obtained enough to apply them. But a very powerful alternative exists, which avoids the problem of monetizing physical impacts. Multiple-criteria decision making provides methodologies for dealing with impacts in different units, and for incorporating the preferences of decision makers or society as a whole, thus allowing for the determination of social optima under heterogeneous criteria, which is usually the case of pollution management decisions. In this paper, a compromise programming model is presented for the determination of the optimal pollution levels for the electricity industry in Spain for carbon dioxide, sulphur dioxide, nitrogen oxides, and radioactive waste. The preferences of several sectors of society are incorporated explicitly into the model, so that the solution obtained represents the optimal pollution level from a social point of view. Results show that cost minimisation is still the main objective for society, but the simultaneous consideration of the rest of the criteria achieves large pollution reductions at a low cost increment. (Author)

  10. Application of an Evolutionary Algorithm for Parameter Optimization in a Gully Erosion Model

    Energy Technology Data Exchange (ETDEWEB)

    Rengers, Francis; Lunacek, Monte; Tucker, Gregory

    2016-06-01

    Herein we demonstrate how to use model optimization to determine a set of best-fit parameters for a landform model simulating gully incision and headcut retreat. To achieve this result we employed the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an iterative process in which samples are created based on a distribution of parameter values that evolve over time to better fit an objective function. CMA-ES efficiently finds optimal parameters, even with high-dimensional objective functions that are non-convex, multimodal, and non-separable. We ran model instances in parallel on a high-performance cluster, and from hundreds of model runs we obtained the best parameter choices. This method is far superior to brute-force search algorithms, and has great potential for many applications in earth science modeling. We found that parameters representing boundary conditions tended to converge toward an optimal single value, whereas parameters controlling geomorphic processes are defined by a range of optimal values.
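    CMA-ES itself adapts a full covariance matrix over the sampling distribution; the sketch below uses a much simpler (1+1) evolution strategy with the 1/5th success rule to show the same sample/select/adapt loop on a quadratic misfit. It is a toy stand-in for illustration, not the CMA-ES implementation used in the study.

    ```python
    import random

    def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=42):
        """Simplified (1+1) evolution strategy with the 1/5th success rule.
        CMA-ES additionally adapts a full covariance matrix; this toy
        version only adapts a scalar step size."""
        rng = random.Random(seed)
        x, fx = list(x0), f(x0)
        for _ in range(iters):
            cand = [xi + sigma * rng.gauss(0, 1) for xi in x]
            fc = f(cand)
            if fc < fx:              # offspring better: accept, widen search
                x, fx = cand, fc
                sigma *= 1.1
            else:                    # offspring worse: shrink step size
                sigma *= 0.96
        return x, fx

    # Calibrate two model parameters against a quadratic misfit (illustrative).
    sphere = lambda v: sum(t * t for t in v)
    best, best_f = one_plus_one_es(sphere, [3.0, -2.0])
    ```

    As in the gully-erosion application, the objective need only be evaluable, not differentiable, which is what makes evolution strategies attractive for calibrating landform models against observations.
    
    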

  11. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
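    The classical starting point for difference-based residual variance estimation is the first-order (Rice) estimator: differencing adjacent observations removes a smooth trend, and each squared difference has expectation about twice the noise variance. A sketch on simulated data with a known noise level (the signal and noise values are illustrative, and the paper's regression-based estimator refines this idea):

    ```python
    import math
    import random

    def rice_variance(y):
        """First-order difference-based estimate of the residual variance:
        sum of squared adjacent differences divided by 2(n-1)."""
        n = len(y)
        return sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))

    # Smooth signal plus Gaussian noise with known sigma = 0.5 (illustrative).
    rng = random.Random(0)
    xs = [i / 1000 for i in range(1000)]
    y = [math.sin(2 * math.pi * x) + rng.gauss(0, 0.5) for x in xs]
    sigma2_hat = rice_variance(y)    # should be near the true value 0.25
    ```

    Because the trend contributes only O(1/n²) per squared difference while the noise contributes 2σ², the estimate is nearly unbiased even though the nonparametric component is never fitted.
    
    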

  12. Optimal dividends in the Brownian motion risk model with interest

    Science.gov (United States)

    Fang, Ying; Wu, Rong

    2009-07-01

    In this paper, we consider a Brownian motion risk model, and in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy so as to maximize the expected discounted value of dividend payments. It is well known that optimality is achieved by using a barrier strategy for unrestricted dividend rate. However, ultimate ruin of the company is certain if a barrier strategy is applied. In many circumstances this is not desirable. This consideration leads us to impose a restriction on the dividend stream. We assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is formed by a threshold strategy.

  13. Using Optimization Models for Scheduling in Enterprise Resource Planning Systems

    Directory of Open Access Journals (Sweden)

    Frank Herrmann

    2016-03-01

    Full Text Available Companies often use specially-designed production systems and change them from time to time. They produce small batches in order to satisfy specific demands with the least tardiness. This imposes high demands on high-performance scheduling algorithms which can be rapidly adapted to changes in the production system. As a solution, this paper proposes a generic approach: solutions were obtained using a widely-used, commercially-available tool for solving linear optimization models, which is available in an Enterprise Resource Planning System (in the SAP system, for example) or can be connected to it. In a real-world application of a flow shop with special restrictions this approach was successfully used on a standard personal computer. Thus, the main implication is that optimal scheduling with a commercially-available tool, incorporated in an Enterprise Resource Planning System, may be the best approach.

  14. Vehicle Propulsion Systems Introduction to Modeling and Optimization

    CERN Document Server

    Guzzella, Lino

    2013-01-01

    This text provides an introduction to the mathematical modeling and subsequent optimization of vehicle propulsion systems and their supervisory control algorithms. Automobiles are responsible for a substantial part of the world's consumption of primary energy, mostly fossil liquid hydrocarbons and the reduction of the fuel consumption of these vehicles has become a top priority. Increasing concerns over fossil fuel consumption and the associated environmental impacts have motivated many groups in industry and academia to propose new propulsion systems and to explore new optimization methodologies. This third edition has been prepared to include many of these developments. In the third edition, exercises are included at the end of each chapter and the solutions are available on the web.

  15. Power Consumption in Refrigeration Systems - Modeling for Optimization

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Skovrup, Morten Juel

    2011-01-01

    Refrigeration systems consume a substantial amount of energy. Taking supermarket refrigeration systems as an example, they can account for up to 50−80% of the total energy consumption in the supermarket. Due to the thermal capacity made up by the refrigerated goods in the system, there is a possibility for optimizing the power consumption by utilizing load-shifting strategies. This paper describes the dynamics and the modeling of a vapor compression refrigeration system needed for sufficiently realistic estimation of the power consumption and its minimization. This leads to a non-convex function with possibly multiple extrema. Such a function cannot directly be optimized by standard methods, and a qualitative analysis of the system’s constraints is presented. The description of power consumption contains nonlinear terms which are approximated by linear functions in the control variables, and the error ...

  16. Dynamics of underactuated multibody systems modeling, control and optimal design

    CERN Document Server

    Seifried, Robert

    2014-01-01

    Underactuated multibody systems are intriguing mechatronic systems, as they possess fewer control inputs than degrees of freedom. Some examples are modern light-weight flexible robots and articulated manipulators with passive joints. This book investigates such underactuated multibody systems from an integrated perspective. This includes all major steps from the modeling of rigid and flexible multibody systems, through nonlinear control theory, to optimal system design. The underlying theories and techniques from these different fields are presented using a self-contained and unified approach and notation system. Subsequently, the book focuses on applications to large multibody systems with multiple degrees of freedom, which require a combination of symbolical and numerical procedures. Finally, an integrated, optimization-based design procedure is proposed, whereby both structural and control design are considered concurrently. Each chapter is supplemented by illustrated examples.

  17. Optimal Control of Drug Therapy in a Hepatitis B Model

    Directory of Open Access Journals (Sweden)

    Jonathan E. Forde

    2016-08-01

    Full Text Available Combination antiviral drug therapy improves the survival rates of patients chronically infected with hepatitis B virus by controlling viral replication and enhancing immune responses. Some of these drugs have side effects that make them unsuitable for long-term administration. To address the trade-off between the positive and negative effects of the combination therapy, we investigated an optimal control problem for a delay differential equation model of immune responses to hepatitis B virus infection. Our optimal control problem investigates the interplay between virological and immunomodulatory effects of therapy, the control of viremia and the administration of the minimal dosage over a short period of time. Our numerical results show that the high drug levels that induce immune modulation rather than suppression of virological factors are essential for the clearance of hepatitis B virus.

  18. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  19. A new enhanced index tracking model in portfolio optimization with sum weighted approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng

    2017-04-01

    Index tracking is a portfolio management strategy which aims to construct an optimal portfolio that achieves a return similar to the benchmark index return, at minimum tracking error, without purchasing all the stocks that make up the index. Enhanced index tracking is an improved portfolio management strategy which aims to generate higher portfolio return than the benchmark index return besides minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate higher mean return than the benchmark index at minimum tracking error. Besides that, the proposed model is able to outperform the existing model in tracking the benchmark index. The significance of this study is that the proposed sum weighted approach yields a 67% improvement in portfolio mean return as compared to the existing model.
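    Tracking error minimization can be illustrated in miniature: choose portfolio weights to minimize the standard deviation of active returns against the benchmark. The two-asset grid search below is a toy sketch with invented return series, not the proposed sum weighted model.

    ```python
    def tracking_error(w, r1, r2, bench):
        """Standard deviation of active returns for weights (w, 1-w)
        on two assets against a benchmark return series."""
        active = [w * a + (1 - w) * b - m for a, b, m in zip(r1, r2, bench)]
        mean = sum(active) / len(active)
        return (sum((x - mean) ** 2 for x in active) / len(active)) ** 0.5

    # Illustrative monthly returns; the benchmark here coincides with asset 1.
    r1    = [0.02, -0.01, 0.03, 0.01, -0.02]
    r2    = [0.05, -0.04, 0.06, 0.00, -0.05]
    bench = [0.02, -0.01, 0.03, 0.01, -0.02]

    best_w = min((w / 100 for w in range(101)),
                 key=lambda w: tracking_error(w, r1, r2, bench))
    ```

    A real index tracker solves the same minimization over many assets with budget and cardinality constraints; the enhanced variant adds an excess-return term to the objective.
    
    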

  20. Proficient brain for optimal performance: the MAP model perspective.

    Science.gov (United States)

    Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.
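    The ERD/ERS percentages used as dependent variables follow the standard band-power ratio: the relative change of band power in the task interval with respect to a reference interval. A minimal sketch (the sign convention follows Pfurtscheller's definition; the example power values are invented):

    ```python
    def erd_ers_percent(power_task, power_ref):
        """Pfurtscheller-style ERD/ERS index: percentage band-power change
        of the task interval relative to a reference interval.
        Negative -> ERD (desynchronization), positive -> ERS (synchronization)."""
        return (power_task - power_ref) / power_ref * 100.0
    ```

    Computing this per frequency band and electrode, then averaging over the 120 shots per participant, yields the ERD/ERS percentages entered into the repeated measures ANOVA.
    
    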

  1. Proficient brain for optimal performance: the MAP model perspective

    Directory of Open Access Journals (Sweden)

    Maurizio Bertollo

    2016-05-01

    Full Text Available Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.
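    The ERD/ERS percentage used as the dependent variable is conventionally defined relative to a pre-event reference interval. A minimal sketch of that classical index (illustrative only, not the authors' processing pipeline):

```python
def erd_percent(reference_power, activity_power):
    """Classical ERD/ERS index: (R - A) / R * 100.
    Positive -> desynchronization (ERD, band power decrease);
    negative -> synchronization (ERS, band power increase)."""
    return (reference_power - activity_power) / reference_power * 100.0

# toy band-power values in arbitrary units
print(erd_percent(10.0, 6.0))   # power drops during the task -> ERD
print(erd_percent(10.0, 14.0))  # power rises during the task -> ERS
```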

  2. Maintenance modeling and optimization integrating human and material resources

    International Nuclear Information System (INIS)

    Martorell, S.; Villamizar, M.; Carlos, S.; Sanchez, A.

    2010-01-01

    Maintenance planning is a subject of concern to many industrial sectors, as plant safety and business depend on it. Traditionally, maintenance planning is formulated as a multi-objective optimization (MOP) problem in which reliability, availability, maintainability and cost (RAM+C) act as decision criteria and maintenance strategies (i.e. maintenance task intervals) act as the only decision variables. However, the appropriate development of each maintenance strategy depends not only on the maintenance intervals but also on the resources (human and material) available to implement such strategies. Thus, the effect of the necessary resources on RAM+C needs to be modeled and accounted for in formulating the MOP, affecting the set of objectives and constraints. In this paper, RAM+C models are proposed that explicitly address the effect of human resources and material resources (spare parts) on the RAM+C criteria. This extended model explicitly accounts for how the above decision criteria depend on the basic model parameters representing the type of strategies, maintenance intervals, durations, human resources and material resources. Finally, an application case is performed to optimize the maintenance plan of a motor-driven pump, considering maintenance and test intervals and human and material resources as decision variables.

  3. Multiobjective Optimization Modeling Approach for Multipurpose Single Reservoir Operation

    Directory of Open Access Journals (Sweden)

    Iosvany Recio Villa

    2018-04-01

    Full Text Available The water resources planning and management discipline recognizes the importance of a reservoir’s carryover storage. However, mathematical models for reservoir operation that include carryover storage are scarce. This paper presents a novel multiobjective optimization modeling framework that uses the ε-constraint method and genetic algorithms as optimization techniques for the operation of multipurpose single reservoirs, including carryover storage. The carryover storage was conceived by modifying Kritsky and Menkel’s method for reservoir design at the operational stage. The main objective function minimizes the cost of the total annual water shortage for irrigation areas connected to a reservoir, while the secondary one maximizes its energy production. The model includes operational constraints for the reservoir, Kritsky and Menkel’s method, irrigation areas, and the hydropower plant. The study is applied to the Carlos Manuel de Céspedes reservoir, establishing a 12-month planning horizon and an annual reliability of 75%. The results demonstrate the applicability of the model, yielding monthly releases from the reservoir that include the carryover storage, the degree of reservoir inflow regulation, water shortages in irrigation areas, and the energy generated by the hydroelectric plant. The main product is an operational graph that includes zones as well as rule and guide curves, which are used as triggers for long-term reservoir operation.
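    The ε-constraint scalarization keeps one objective and turns the other into a bound; sweeping the bound traces an approximation of the Pareto front. A toy one-variable sketch (the objective functions are invented for illustration, standing in for shortage cost and energy production):

```python
def eps_constraint_front(eps_values):
    """Minimize f1 subject to f2 >= eps, for a sweep of eps values.
    f1 plays the role of shortage cost, f2 of energy production."""
    f1 = lambda x: (x - 2.0) ** 2        # cost to minimize
    f2 = lambda x: x                     # benefit to bound from below
    xs = [i * 0.01 for i in range(401)]  # decision grid on [0, 4]
    front = []
    for eps in eps_values:
        feasible = [x for x in xs if f2(x) >= eps]
        x_best = min(feasible, key=f1)
        front.append((f1(x_best), f2(x_best)))
    return front

print(eps_constraint_front([0.0, 2.0, 3.0]))
```

Tightening the bound (eps = 3.0) forces the solver off the unconstrained optimum, exposing the trade-off between the two objectives.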

  4. Optimization of atmospheric transport models on HPC platforms

    Science.gov (United States)

    de la Cruz, Raúl; Folch, Arnau; Farré, Pau; Cabezas, Javier; Navarro, Nacho; Cela, José María

    2016-12-01

    The performance and scalability of atmospheric transport models on high performance computing environments is often far from optimal for multiple reasons including, for example, sequential input and output, synchronous communications, work unbalance, memory access latency or lack of task overlapping. We investigate how different software optimizations and porting to non general-purpose hardware architectures improve code scalability and execution times considering, as an example, the FALL3D volcanic ash transport model. To this purpose, we implement the FALL3D model equations in the WARIS framework, a software designed from scratch to solve in a parallel and efficient way different geoscience problems on a wide variety of architectures. In addition, we consider further improvements in WARIS such as hybrid MPI-OMP parallelization, spatial blocking, auto-tuning and thread affinity. Considering all these aspects together, the FALL3D execution times for a realistic test case running on general-purpose cluster architectures (Intel Sandy Bridge) decrease by a factor between 7 and 40 depending on the grid resolution. Finally, we port the application to Intel Xeon Phi (MIC) and NVIDIA GPUs (CUDA) accelerator-based architectures and compare performance, cost and power consumption on all the architectures. Implications on time-constrained operational model configurations are discussed.

  5. Models for optimizing the conveying process; Modelle in der Foerderprozessoptimierung

    Energy Technology Data Exchange (ETDEWEB)

    Koehler, U. [Vattenfall Europe Mining AG, Cottbus (Germany)

    2007-05-15

    Load- and time-controlled use of excavator-conveyor-spreader equipment combinations in overburden operations is essential for achieving economic cost structures in opencast lignite mines. These effects result from optimizations based on realistic models. Vattenfall Europe Mining AG has successfully implemented a continuous linkage of information from the geological model to direct GPS-based operational management. With the help of this large-scale system model it was possible for the first time to operate two modernized bucket wheel excavators simultaneously with a spreader adjusted to its performance limits. At the same time, quality requirements for overburden dumping were fulfilled. Special importance is attached to an uninterrupted, continuous mode of operation at the real, current capacity limit in the system's characteristic field. The article explains the initial situation and the state-of-the-art technology for the model design as a basis for the optimization of linked excavation, conveying and dumping systems. Furthermore, potential considerations from reports presented on the occasion of the Colloquium for Innovative Lignite Mining (KIB) and possible steps for further technological development are outlined. (orig.)

  6. Maintenance modeling and optimization integrating human and material resources

    Energy Technology Data Exchange (ETDEWEB)

    Martorell, S., E-mail: smartore@iqn.upv.e [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Villamizar, M.; Carlos, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Sanchez, A. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia (Spain)

    2010-12-15

    Maintenance planning is a subject of concern to many industrial sectors, as plant safety and business depend on it. Traditionally, maintenance planning is formulated as a multi-objective optimization (MOP) problem in which reliability, availability, maintainability and cost (RAM+C) act as decision criteria and maintenance strategies (i.e. maintenance task intervals) act as the only decision variables. However, the appropriate development of each maintenance strategy depends not only on the maintenance intervals but also on the resources (human and material) available to implement such strategies. Thus, the effect of the necessary resources on RAM+C needs to be modeled and accounted for in formulating the MOP, affecting the set of objectives and constraints. In this paper, RAM+C models are proposed that explicitly address the effect of human resources and material resources (spare parts) on the RAM+C criteria. This extended model explicitly accounts for how the above decision criteria depend on the basic model parameters representing the type of strategies, maintenance intervals, durations, human resources and material resources. Finally, an application case is performed to optimize the maintenance plan of a motor-driven pump, considering maintenance and test intervals and human and material resources as decision variables.

  7. Determining the Optimal Portfolio Using the Conditional Mean Variance Model

    Directory of Open Access Journals (Sweden)

    I GEDE ERY NISCAHYANA

    2016-08-01

    Full Text Available When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean variance model for autocorrelated and heteroscedastic returns was discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) FMII stock, 0.0473 (5%) BNLI stock, 0% SMDM stock, and 1% SMGR stock.
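    The GARCH(1,1) model referenced above drives the conditional variance by a simple recursion. A minimal sketch of that recursion, with made-up parameter and return values rather than the thesis's fitted estimates:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional-variance recursion of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

daily_returns = [0.01, -0.03, 0.02, 0.00]  # illustrative values
print(garch11_variance(daily_returns, omega=1e-5, alpha=0.1, beta=0.85))
```

The resulting variance path is what a conditional mean variance portfolio rule would consume in place of a single historical variance estimate.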

  8. Optimization of Regional Geodynamic Models for Mantle Dynamics

    Science.gov (United States)

    Knepley, M.; Isaac, T.; Jadamec, M. A.

    2016-12-01

    The SubductionGenerator program is used to construct high resolution, 3D regional thermal structures for mantle convection simulations using a variety of data sources, including sea floor ages and geographically referenced 3D slab locations based on seismic observations. The initial bulk temperature field is constructed using a half-space cooling model or plate cooling model, and related smoothing functions based on a diffusion length-scale analysis. In this work, we seek to improve the 3D thermal model and test different model geometries and dynamically driven flow fields using constraints from observed seismic velocities and plate motions. Through a formal adjoint analysis, we construct the primal-dual version of the multi-objective PDE-constrained optimization problem for the plate motions and seismic misfit. We have efficient, scalable preconditioners for both the forward and adjoint problems based upon a block preconditioning strategy, and a simple gradient update is used to improve the control residual. The full optimal control problem is formulated on a nested hierarchy of grids, allowing a nonlinear multigrid method to accelerate the solution.

  9. Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model

    Science.gov (United States)

    Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung

    2017-12-01

    This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect, which states that a voltage is induced when a temperature gradient is applied across the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) on the solar thermoelectric generator reported a high efficiency performance of 4.6%. The system had a vacuum glass enclosure, a flat panel (absorber), a thermoelectric generator and water circulation for the cold side. The theoretical and numerical approach of this current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS software for Fluent and Thermal-Electric Systems. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. This analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory, using two cover glasses and radiation shields, the STEG model can achieve a maximum efficiency of 7%.
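    The efficiencies quoted above can be put in context with the textbook expression for the maximum conversion efficiency of an ideal thermoelectric generator: the Carnot factor times a ZT-dependent material factor. A small sketch with illustrative temperatures and ZT (not the parameters of the validated STEG model):

```python
import math

def teg_max_efficiency(t_hot, t_cold, zt):
    """Classical maximum TEG efficiency:
    eta = (1 - Tc/Th) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th)."""
    carnot = 1.0 - t_cold / t_hot
    m = math.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# illustrative operating point: 500 K hot side, 300 K cold side, ZT = 1
print(teg_max_efficiency(500.0, 300.0, 1.0))
```

Even at a 200 K temperature difference, ZT = 1 materials convert well under the Carnot limit, which is why absorber and enclosure optimizations of the kind studied here matter.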

  10. Model independent spin determination at hadron colliders

    International Nuclear Information System (INIS)

    Edelhaeuser, Lisa

    2012-01-01

    By the end of the year 2011, both the CMS and ATLAS experiments at the Large Hadron Collider have recorded around 5 inverse femtobarns of data at an energy of 7 TeV. There are only vague hints from the already analysed data towards new physics at the TeV scale. However, one knows that around this scale, new physics should show up so that theoretical issues of the standard model of particle physics can be cured. During the last decades, extensions to the standard model that are supposed to solve its problems have been constructed, and the corresponding phenomenology has been worked out. As soon as new physics is discovered, one has to deal with the problem of determining the nature of the underlying model. A first hint is of course given by the mass spectrum and quantum numbers such as electric and colour charges of the new particles. However, there are two popular model classes, supersymmetric models and extradimensional models, which can exhibit almost equal properties at the accessible energy range. Both introduce partners to the standard model particles with the same charges and thus one needs an extended discrimination method. From the origin of these partners arises a relevant difference: The partners constructed in extradimensional models have the same spin as their standard model partners while in Supersymmetry they differ by spin 1/2. These different spins have an impact on the phenomenology of the two models. For example, one can exploit the fact that the total cross sections are affected, but this requires a very good knowledge of the couplings and masses involved. Another approach uses angular distributions depending on the particle spins. A prevailing method based on this idea uses the invariant mass distribution of the visible particles in decay chains. One can relate these distributions to the spin of the particle mediating the decay since it reflects itself in the highest power of the invariant mass s_ff of the adjacent particles. 
In this thesis we

  11. Model independent spin determination at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Edelhaeuser, Lisa

    2012-04-25

    By the end of the year 2011, both the CMS and ATLAS experiments at the Large Hadron Collider have recorded around 5 inverse femtobarns of data at an energy of 7 TeV. There are only vague hints from the already analysed data towards new physics at the TeV scale. However, one knows that around this scale, new physics should show up so that theoretical issues of the standard model of particle physics can be cured. During the last decades, extensions to the standard model that are supposed to solve its problems have been constructed, and the corresponding phenomenology has been worked out. As soon as new physics is discovered, one has to deal with the problem of determining the nature of the underlying model. A first hint is of course given by the mass spectrum and quantum numbers such as electric and colour charges of the new particles. However, there are two popular model classes, supersymmetric models and extradimensional models, which can exhibit almost equal properties at the accessible energy range. Both introduce partners to the standard model particles with the same charges and thus one needs an extended discrimination method. From the origin of these partners arises a relevant difference: The partners constructed in extradimensional models have the same spin as their standard model partners while in Supersymmetry they differ by spin 1/2. These different spins have an impact on the phenomenology of the two models. For example, one can exploit the fact that the total cross sections are affected, but this requires a very good knowledge of the couplings and masses involved. Another approach uses angular distributions depending on the particle spins. A prevailing method based on this idea uses the invariant mass distribution of the visible particles in decay chains. One can relate these distributions to the spin of the particle mediating the decay since it reflects itself in the highest power of the invariant mass s_ff of the adjacent particles. 
In this thesis

  13. Multiple responses optimization in the development of a headspace gas chromatography method for the determination of residual solvents in pharmaceuticals

    Directory of Open Access Journals (Sweden)

    Carla M. Teglia

    2015-10-01

    Full Text Available An efficient generic static headspace gas chromatography (HSGC) method was developed, optimized and validated for the routine determination of several residual solvents (RS) in drug substances, using a strategy with two sets of calibration. Dimethylsulfoxide (DMSO) was selected as the sample diluent, and internal standards were used to minimize signal variations due to the preparative step. A gas chromatograph from Agilent, Model 6890, equipped with a flame ionization detector (FID) and a DB-624 (30 m × 0.53 mm i.d., 3.00 µm film thickness) column was used. The inlet split ratio was 5:1. The influencing factors in the chromatographic separation of the analytes were determined through a fractional factorial experimental design. The significant variables, namely the initial temperature (IT) and final temperature (FT) of the oven and the carrier gas flow rate (F), were optimized using a central composite design. Response transformation and a desirability function were applied to find the optimal combination of the chromatographic variables achieving adequate resolution of the analytes and a short analysis time. These conditions were 30 °C for IT, 158 °C for FT and 1.90 mL/min for F. The method was proven to be accurate, linear over a wide range and very sensitive for the analyzed solvents through a comprehensive validation according to the ICH guidelines. Keywords: Headspace gas chromatography, Residual solvents, Pharmaceuticals, Surface response methodology, Desirability function
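    The desirability-function step combines each transformed response into a single score via a geometric mean of Derringer-type individual desirabilities on [0, 1]. A generic sketch with made-up response values and limits, not the paper's data:

```python
def d_larger_is_better(y, low, high):
    """Linear 'larger is better' desirability: 0 below `low`,
    1 above `high`, linear in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

def overall_desirability(ds):
    """Overall desirability D = geometric mean of the individual d_i."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

d_resolution = d_larger_is_better(1.8, low=1.0, high=2.0)  # peak resolution: higher is better
d_speed = d_larger_is_better(0.5, low=0.0, high=1.0)       # rescaled 'short run time' score
print(overall_desirability([d_resolution, d_speed]))
```

Because the geometric mean is zero whenever any single desirability is zero, this combination rejects operating points that fail even one response outright.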

  14. Numerical modeling and optimization of machining duplex stainless steels

    Directory of Open Access Journals (Sweden)

    Rastee D. Koyee

    2015-01-01

    Full Text Available The shortcomings of analytical and empirical machining models must be overcome if industry demands are to be fulfilled. Three-dimensional finite element modeling (FEM) introduces an attractive alternative that bridges the gap between purely empirical and fundamental scientific quantities and fulfills industry needs. However, the challenging aspects that hinder the successful adoption of FEM in the machining sector of the manufacturing industry have to be solved first. One of the greatest challenges is the identification of the correct set of machining simulation input parameters. This study presents a new methodology to inversely calculate the input parameters when simulating the machining of standard duplex EN 1.4462 and super duplex EN 1.4410 stainless steels. JMatPro software is first used to model the elastic–viscoplastic and physical work material behavior. In order to effectively obtain an optimum set of inversely identified friction coefficients, thermal contact conductance, Cockcroft–Latham critical damage value, percentage reduction in flow stress, and Taylor–Quinney coefficient, Taguchi-VIKOR coupled with a Firefly Algorithm Neural Network System is applied. The optimization procedure effectively minimizes the overall differences between experimentally measured performances, such as cutting forces, tool nose temperature and chip thickness, and the numerically obtained ones at any specified cutting condition. The optimum set of input parameters is verified and used for the next step of 3D-FEM application. In the next stage of the study, design of experiments, numerical simulations, and fuzzy rule modeling approaches are employed to optimize types of chip breaker, insert shapes, process conditions, cutting parameters, and tool orientation angles based on many important performances. Through this study, not only a new methodology in defining the optimal set of controllable parameters for turning simulations is introduced, but also

  15. WE-D-BRE-04: Modeling Optimal Concurrent Chemotherapy Schedules

    International Nuclear Information System (INIS)

    Jeong, J; Deasy, J O

    2014-01-01

    Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option with a better tumor control rate for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added into a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on the two-compartment pharmacokinetic model, the temporal concentration of the chemotherapy agent was estimated. Log cell-kill was assumed, and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two-cycle CCRT regimen, several different starting times and intervals were simulated with a conventional RT regimen (2 Gy/fx, 5 fx/wk). The effectiveness of CCRT was evaluated in terms of the reduction in radiation dose required for 50% control to find the optimal chemotherapy schedule. Results: Assuming the typical slope of the dose response curve (γ50=2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. The best response was obtained when chemotherapy was started about 3 weeks after RT began. As the interval between the two cycles decreases, the efficacy of chemotherapy increases with a broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that the concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation.
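    The chemotherapy component described above couples a two-compartment concentration profile with log cell-kill. A simplified numerical sketch: a bi-exponential concentration curve, a trapezoidal AUC, and a surviving fraction exp(-k · AUC), with all parameter values invented for illustration (only the cell-kill constant 0.35 echoes the abstract):

```python
import math

def concentration(t, a=5.0, alpha=1.2, b=1.0, beta=0.15):
    """Bi-exponential (two-compartment) drug concentration, arbitrary units."""
    return a * math.exp(-alpha * t) + b * math.exp(-beta * t)

def surviving_fraction(k, t_end, dt=0.001):
    """Log cell-kill: ln(SF) = -k * AUC of the concentration curve,
    with the AUC computed by trapezoidal integration."""
    ts = [i * dt for i in range(int(t_end / dt) + 1)]
    auc = sum((concentration(ts[i]) + concentration(ts[i + 1])) / 2 * dt
              for i in range(len(ts) - 1))
    return math.exp(-k * auc)

print(surviving_fraction(k=0.35, t_end=24.0))
```

In the full model this surviving fraction multiplies the radiation cell-kill for each simulated cycle, which is why cycle timing relative to reoxygenation changes the outcome.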

  16. Determination of optimal geometry for cylindrical sources for gamma radiation measurements; Odredjivanje optimalne geometrije za mjerenje gama zracenja cilindrichnih izvora

    Energy Technology Data Exchange (ETDEWEB)

    Sinjeri, Lj; Kulisic, P [Elektra - Zagreb, Zagreb (Yugoslavia)

    1990-07-01

    Low-activity radioactive sources were used for the experimental determination of the optimal dimensions of a cylindrical source measured with a coaxial Ge(Li) detector. A calculational procedure was then used to find the optimal dimensions of the cylindrical source. The results of the calculational procedure agree with the experimental results; in this way the calculational procedure is verified and can be used to determine the optimal geometry of low-activity cylindrical sources. (author)

  17. Determining optimal selling price and lot size with process reliability and partial backlogging considerations

    Science.gov (United States)

    Hsieh, Tsu-Pang; Cheng, Mei-Chuan; Dye, Chung-Yuan; Ouyang, Liang-Yuh

    2011-01-01

    In this article, we extend the classical economic production quantity (EPQ) model by proposing imperfect production processes and quality-dependent unit production cost. The demand rate is described by any convex decreasing function of the selling price. In addition, we allow for shortages and a time-proportional backlogging rate. For any given selling price, we first prove that the optimal production schedule not only exists but also is unique. Next, we show that the total profit per unit time is a concave function of price when the production schedule is given. We then provide a simple algorithm to find the optimal selling price and production schedule for the proposed model. Finally, we use a couple of numerical examples to illustrate the algorithm and conclude this article with suggestions for possible future research.
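    Since the total profit per unit time is shown to be concave in the selling price once the production schedule is fixed, a one-dimensional unimodal search suffices at that step of such an algorithm. A sketch with an invented linear demand function (not the article's model or data):

```python
def ternary_search_max(f, lo, hi, tol=1e-7):
    """Maximize a unimodal (e.g. concave) function on [lo, hi]
    by repeatedly discarding the worse third of the interval."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0

# toy profit: price-sensitive demand d(p) = 100 - 2p, unit cost 10
profit = lambda p: (p - 10.0) * (100.0 - 2.0 * p)
p_star = ternary_search_max(profit, 10.0, 50.0)
print(p_star)
```

For this toy quadratic the analytic optimum is p = 30, which the search recovers to within the tolerance.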

  18. Subsurface water parameters: optimization approach to their determination from remotely sensed water color data.

    Science.gov (United States)

    Jain, S C; Miller, J R

    1976-04-01

    A method using an optimization scheme has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. The method uses a two-flow model of the radiative transfer and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method in the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from the Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied.
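    The retrieval idea can be mimicked by fitting the two free parameters to observed reflectances. This sketch substitutes a brute-force least-squares grid for the paper's quadratic-interpolation optimizer, and the absorption coefficients and band set are invented for illustration:

```python
def modeled_reflectance(chl, b, a_water, a_chl_specific):
    """Simple irradiance-ratio (two-flow style) approximation:
    R ~ b / (a + b), with absorption a = a_water + chl * a_chl_specific."""
    return [b / (aw + chl * ac + b) for aw, ac in zip(a_water, a_chl_specific)]

def fit(observed, a_water, a_chl_specific):
    """Brute-force least-squares fit of chlorophyll and scattering
    (a stand-in for the paper's quadratic-interpolation optimizer)."""
    best = None
    for chl in [i * 0.1 for i in range(101)]:        # 0..10 mg/m^3
        for b in [j * 0.01 for j in range(1, 101)]:  # 0.01..1.0 1/m
            r = modeled_reflectance(chl, b, a_water, a_chl_specific)
            err = sum((ro - rm) ** 2 for ro, rm in zip(observed, r))
            if best is None or err < best[0]:
                best = (err, chl, b)
    return best

a_water = [0.02, 0.06, 0.30]   # three bands, 1/m (illustrative)
a_chl = [0.05, 0.02, 0.01]     # chlorophyll-specific absorption (illustrative)
truth = modeled_reflectance(2.0, 0.25, a_water, a_chl)
print(fit(truth, a_water, a_chl))  # recovers chl = 2.0, b = 0.25
```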

  19. Optimization model using Markowitz model approach for reducing the number of dengue cases in Bandung

    Science.gov (United States)

    Yong, Benny; Chin, Liem

    2017-05-01

    Dengue fever is one of the most serious diseases, and it can cause death. Currently, Indonesia is the country with the highest number of dengue cases in Southeast Asia. Bandung is one of the cities in Indonesia that is vulnerable to dengue. The sub-districts in Bandung have different levels of relative risk of dengue. Dengue is transmitted to people by the bite of an Aedes aegypti mosquito infected with a dengue virus. Dengue is prevented by controlling the vector mosquito, which can be done by various methods, one of which is fogging. The fogging efforts made by the Health Department of Bandung are constrained by limited funds. This problem forces the Health Department to be selective about fogging, which is carried out only at certain locations. As a result, many sub-districts are not handled properly by the Health Department because of the unequal distribution of activities to prevent the spread of dengue. Thus, a proper allocation of funds to each sub-district in Bandung is needed to prevent dengue transmission optimally. In this research, an optimization model using the Markowitz model approach is applied to determine the allocation of funds that should be given to each sub-district in Bandung. Some constraints are added to this model, and the numerical solution is found with the generalized reduced gradient method using Solver software. The expected result of this research is that the proportion of funds given to each sub-district in Bandung corresponds to the level of dengue risk in that sub-district, so that the number of dengue cases in the city can be reduced significantly.
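    A Markowitz-style allocation trades an expected benefit term against a quadratic risk (variance) term under a budget constraint. A two-sub-district sketch with invented numbers, using a grid search in place of the generalized reduced gradient method:

```python
def mean_variance_objective(w, risk, cov, lam):
    """Markowitz-style objective for a fund allocation w summing to 1:
    lam * (expected benefit, weighted by relative risk) - quadratic term."""
    n = len(w)
    var = sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n))
    ret = sum(w[i] * risk[i] for i in range(n))
    return lam * ret - var

def best_split(risk, cov, lam, steps=1000):
    """Grid search over allocations (w, 1 - w) for two sub-districts."""
    return max(
        ((i / steps, 1.0 - i / steps) for i in range(steps + 1)),
        key=lambda w: mean_variance_objective(w, risk, cov, lam),
    )

risk = [0.8, 0.4]                   # relative dengue risk (illustrative)
cov = [[0.04, 0.01], [0.01, 0.02]]  # 'uncertainty' of intervention impact
print(best_split(risk, cov, lam=0.1))
```

The higher-risk sub-district receives the larger share, but the quadratic term keeps the split away from an all-or-nothing allocation.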

  20. Research on NC laser combined cutting optimization model of sheet metal parts

    Science.gov (United States)

    Wu, Z. Y.; Zhang, Y. L.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    The optimization problem for NC laser combined cutting of sheet metal parts was taken as the research object in this paper. The problem included two contents: combined packing optimization and combined cutting path optimization. In the problem of combined packing optimization, the method of “genetic algorithm + gravity center NFP + geometric transformation” was used to optimize the packing of sheet metal parts. In the problem of combined cutting path optimization, the mathematical model of cutting path optimization was established based on the parts cutting constraint rules of internal contour priority and cross cutting. The model played an important role in the optimization calculation of NC laser combined cutting.
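    The internal-contour-priority rule can be embedded in a simple nearest-neighbour sequencer: all inner contours of a part are cut before its outer contour. A toy sketch, in which the coordinates and the greedy part-selection heuristic are invented for illustration:

```python
import math

def plan_cut_order(parts):
    """Nearest-neighbour cut sequencing under the 'internal contour first'
    rule. Each part is (name, [inner contour points...], outer contour point);
    points stand in for contour pierce locations."""
    order, pos = [], (0.0, 0.0)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    remaining = list(parts)
    while remaining:
        # pick the part whose first contour to cut is nearest the torch
        name, inners, outer = min(
            remaining, key=lambda p: dist(pos, (p[1] or [p[2]])[0]))
        remaining.remove((name, inners, outer))
        for c in inners:                 # internal contours first
            order.append((name, "inner", c))
            pos = c
        order.append((name, "outer", outer))
        pos = outer
    return order

parts = [("A", [(1, 1)], (2, 1)), ("B", [(5, 5), (6, 5)], (7, 5))]
for step in plan_cut_order(parts):
    print(step)
```

A full solver would also handle the cross-cutting rule and shared edges, but the constraint structure is the same: path cost is minimized only over sequences that respect contour precedence.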