WorldWideScience

Sample records for previous optimization steps

  1. The availability of the step optimization in Monaco planning system

    International Nuclear Information System (INIS)

    Kim, Dae Sup

    2014-01-01

    In the Monaco treatment planning system, the inverse calculation for volumetric modulated arc therapy or intensity modulated radiation therapy is carried out in two optimization steps. We present a method to close the gap that arises when a completed treatment plan is re-optimized under the same conditions as the initial plan. In this study, initial plans were first completed with the usual sequential optimization, carrying the optimization conditions unchanged from Step 1 through Step 2; a pencil beam algorithm was used in Step 1 and a Monte Carlo algorithm in Step 2. Each plan was then re-optimized under the same conditions, compared with the initial plan, and the planned dose was evaluated by measurement. When the usual optimization was simply repeated under the same conditions as the completed initial plan, the result was not the same: the dose-volume histograms showed similar trends but different values, failing to satisfy the optimized dose conditions, dose homogeneity, and dose limits, and dosimetric comparisons differed by more than 20%. With different dose algorithms, the measurements likewise disagreed. Treatment plan optimization is reached through a process of repeated trial and error, and relying only on the initially completed plan during re-optimization can yield a different plan whose results no longer satisfy the original optimization. When performing re-optimization, the step-optimized conditions should therefore be applied, making sure the dose distribution is confirmed through the optimization.

  2. Optimizing the number of steps in learning tasks for complex skills.

    Science.gov (United States)

    Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G

    2005-06-01

    Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or a large number of steps. Participants were 35 sophomore law students studying at Dutch universities (mean age = 22.8 years, SD = 3.5; 63% female). Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired, and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e., optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved less efficient for carrying out the learning task. An intermediate number of steps is the most effective, showing that the number of steps can be optimized to improve learning.

  3. Optimal Design and Analysis of the Stepped Core for Wireless Power Transfer Systems

    Directory of Open Access Journals (Sweden)

    Xiu Zhang

    2016-01-01

    The key to wireless power transfer technology is finding the most suitable means of improving system efficiency. A wireless power transfer system applied to implantable medical devices can reduce patients' physical and economic burden because charging is achieved from outside the body. In this paper, the transmitter coil for a deep brain stimulator is designed and optimized. According to previous research results, coils with a ferrite core can improve the performance of a wireless power transfer system, and compared with a normal ferrite core, a stepped core produces a more uniform magnetic flux density. The finite element method (FEM) is used to analyze the system. The simulation results indicate that the core loss generated in the optimal stepped ferrite core is reduced by about 10% compared with the normal ferrite core, and the efficiency of the wireless power transfer system is increased significantly.

  4. Direct aperture optimization: A turnkey solution for step-and-shoot IMRT

    International Nuclear Information System (INIS)

    Shepard, D.M.; Earl, M.A.; Li, X.A.; Naqvi, S.; Yu, C.

    2002-01-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach 'direct aperture optimization'. This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine-dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
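    The abstract describes simultaneous optimization of leaf settings and aperture weights with simulated annealing. As a minimal illustration of the annealing part only, the toy sketch below anneals the weights of a few fixed 1-D "apertures" against a flat target dose; the aperture matrix, target, cooling schedule, and move size are all invented for illustration and are not the authors' implementation.

```python
import math
import random

random.seed(0)

# Toy setup: 3 aperture shapes, each depositing dose into 5 voxels
# (values are illustrative, not from the paper).
apertures = [
    [1.0, 0.8, 0.2, 0.0, 0.0],
    [0.0, 0.5, 1.0, 0.5, 0.0],
    [0.0, 0.0, 0.2, 0.8, 1.0],
]
target = [1.0, 1.0, 1.0, 1.0, 1.0]  # desired dose per voxel

def cost(weights):
    """Sum of squared dose errors over all voxels."""
    total = 0.0
    for v in range(len(target)):
        dose = sum(w * ap[v] for w, ap in zip(weights, apertures))
        total += (dose - target[v]) ** 2
    return total

def anneal(steps=20000, t0=1.0, t_end=1e-3):
    weights = [0.5, 0.5, 0.5]
    best_w, best_cost = weights[:], cost(weights)
    for k in range(steps):
        t = t0 * (t_end / t0) ** (k / steps)   # geometric cooling
        cand = weights[:]
        i = random.randrange(len(cand))
        cand[i] = max(0.0, cand[i] + random.gauss(0.0, 0.1))  # weights stay non-negative
        dc = cost(cand) - cost(weights)
        if dc < 0 or random.random() < math.exp(-dc / t):     # Metropolis acceptance
            weights = cand
            if cost(weights) < best_cost:
                best_w, best_cost = weights[:], cost(weights)
    return best_w, best_cost

best_w, best_cost = anneal()
print(best_cost)  # close to the least-squares optimum for this toy problem
```

The same accept/reject loop applies when the moves also perturb leaf positions, which is what lets the delivery constraints be enforced inside the optimization rather than in a separate sequencing step.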

  5. Prediction of Optimal Daily Step Count Achievement from Segmented School Physical Activity

    Directory of Open Access Journals (Sweden)

    Ryan D. Burns

    2015-01-01

    Optimizing physical activity in childhood is needed for prevention of disease and for healthy social and psychological development. There is limited research examining how segmented school physical activity patterns relate to a child achieving optimal physical activity levels. The purpose of this study was to examine the predictive relationship between step counts during specific school segments and achieving optimal school (6,000 steps/day) and daily (12,000 steps/day) step counts in children. Participants included 1,714 school-aged children (mean age = 9.7 ± 1.0 years) recruited across six elementary schools. Physical activity was monitored for one week using pedometers. Generalized linear mixed effects models were used to determine the adjusted odds ratios (ORs) of achieving both school and daily step count standards for every 1,000 steps taken during each school segment. The school segment most strongly related to a student achieving 6,000 steps during school hours was afternoon recess (OR = 40.03; P<0.001), and for achieving 12,000 steps for the entire day it was lunch recess (OR = 5.03; P<0.001). School segments including lunch and afternoon recess play an important role in optimizing daily physical activity in children.
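    The reported odds ratios are per 1,000 steps taken during a school segment, i.e. exp(1000·beta) for a logistic-model coefficient beta expressed per single step. A back-of-the-envelope sketch of how such an OR rescales to a different step increment, reusing the OR of 5.03 quoted above (the per-step coefficient is backed out for illustration, not taken from the paper):

```python
import math

# Back out a hypothetical per-step coefficient from the per-1,000-step OR.
beta_per_step = math.log(5.03) / 1000

def odds_ratio(beta, delta_steps):
    """Multiplicative change in odds for a delta_steps increase."""
    return math.exp(beta * delta_steps)

print(round(odds_ratio(beta_per_step, 1000), 2))  # recovers the OR per 1,000 steps
print(round(odds_ratio(beta_per_step, 500), 2))   # OR per 500 steps
```

Because odds ratios compose multiplicatively, the OR per 500 steps is simply the square root of the OR per 1,000 steps.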

  6. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    NARCIS (Netherlands)

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step, which targets the μ⁺-center of the next pair of perturbed problems.

  7. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Science.gov (United States)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size is proposed. The algorithm is based on the swarm intelligence of a wolf pack, fully simulating the predation behavior and prey-distribution strategy of wolves. It comprises three intelligent behaviors: migration, summons, and siege. The algorithm is further characterized by a "winner-take-all" competition rule and a "survival of the fittest" update mechanism, and it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The investigation results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global search ability.
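    The self-adaptive variable step-size idea can be illustrated with a much simpler relative: a (1+1)-style random local search that enlarges its step after an improvement and contracts it otherwise. This is only a loose sketch of the step-size-adaptation ingredient, not the CWOA itself (no wolf-pack behaviors or chaos maps); the test function and adaptation constants are arbitrary.

```python
import random

random.seed(1)

def sphere(x):
    """Test objective: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def adaptive_step_search(f, x0, step=1.0, grow=1.5, shrink=0.9,
                         min_step=1e-12, iters=2000):
    """Random local search whose step size adapts to success/failure,
    loosely mirroring the self-adaptive variable step-size idea."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            step *= grow                          # reward success with a larger step
        else:
            step = max(step * shrink, min_step)   # otherwise contract
    return x, fx

best_x, best_f = adaptive_step_search(sphere, [3.0, -2.0, 1.5])
print(best_f)  # near 0
```

Growing on success and shrinking on failure lets the step size track the distance to the optimum, which is the same intent as the variable step-size search in the abstract.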

  8. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    OpenAIRE

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step, which targets the μ⁺-center of the next pair of perturbed problems. As for the centering steps, we apply a sharper quadratic convergence result, which leads to a slightly wider neighborhood for th...

  9. SU-F-J-66: Anatomy Deformation Based Comparison Between One-Step and Two-Step Optimization for Online ART

    International Nuclear Information System (INIS)

    Feng, Z; Yu, G; Qin, S; Li, D; Ma, C; Zhu, J; Yin, Y

    2016-01-01

    Purpose: This study investigated how the quality of an adapted plan is affected by inter-fractional anatomy deformation when using one-step and two-step optimization in an online adaptive radiotherapy (ART) procedure. Methods: 10 lung carcinoma patients were chosen randomly, and IMRT plans were produced with one-step and two-step algorithms respectively; the prescribed dose was set to 60 Gy to the planning target volume (PTV) for all patients. To simulate inter-fractional target deformation, four specific cases were created by systematic anatomy variation: a 0.5 cm superior target shift, 0.3 cm contraction, 0.3 cm expansion, and 45-degree rotation. Based on these four anatomy deformations, adapted, regenerated, and non-adapted plans were created to evaluate the quality of adaptation. Adapted plans were generated automatically by using the one-step and two-step algorithms respectively to optimize the original plans; regenerated plans were created manually by experienced physicists; and non-adapted plans were produced by recalculating the dose distribution from the corresponding original plans. The deviations among these three plans were statistically analyzed by paired t-test. Results: In the PTV superior-shift case, adapted plans had significantly better PTV coverage with the two-step algorithm than with the one-step one, and there was a significant difference in V95 between adapted and non-adapted plans (p=0.0025). In the target-contraction deformation, with almost the same PTV coverage, the total lung received a lower dose with the one-step algorithm than with the two-step algorithm (p=0.0143 and 0.0126 for V20 and Dmean, respectively). In the other two deformation cases, no significant differences were observed between the two optimization algorithms. Conclusion: In geometric deformations such as target contraction, with comparable PTV coverage, the one-step algorithm gave better OAR sparing than the two-step algorithm. Conversely, adaptation using the two-step algorithm had higher efficiency.
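    The plan comparisons above rely on paired t-tests. A minimal sketch of the statistic on hypothetical V95 values (the numbers are invented, not the study's data):

```python
import math

# Hypothetical V95 coverage (%) for the same 5 plans scored under two
# optimization schemes; values are illustrative only.
one_step = [95.2, 94.8, 96.1, 93.9, 95.5]
two_step = [96.0, 95.4, 96.3, 95.1, 96.2]

def paired_t(a, b):
    """Paired t statistic for the differences b - a."""
    n = len(a)
    d = [y - x for x, y in zip(a, b)]
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1            # (t, degrees of freedom)

t, dof = paired_t(one_step, two_step)
print(round(t, 3), dof)  # t is about 4.34 with 4 degrees of freedom
```

Pairing by patient removes the between-patient variability, which is why the same plans scored under both schemes are compared with a paired rather than an unpaired test.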

  10. SU-F-J-66: Anatomy Deformation Based Comparison Between One-Step and Two-Step Optimization for Online ART

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Z; Yu, G; Qin, S; Li, D [Shandong Normal University, Jinan, Shandong (China); Ma, C; Zhu, J; Yin, Y [Shandong Cancer Hospital and Institute, Jinan, Shandong (China)

    2016-06-15

    Purpose: This study investigated how the quality of an adapted plan is affected by inter-fractional anatomy deformation when using one-step and two-step optimization in an online adaptive radiotherapy (ART) procedure. Methods: 10 lung carcinoma patients were chosen randomly, and IMRT plans were produced with one-step and two-step algorithms respectively; the prescribed dose was set to 60 Gy to the planning target volume (PTV) for all patients. To simulate inter-fractional target deformation, four specific cases were created by systematic anatomy variation: a 0.5 cm superior target shift, 0.3 cm contraction, 0.3 cm expansion, and 45-degree rotation. Based on these four anatomy deformations, adapted, regenerated, and non-adapted plans were created to evaluate the quality of adaptation. Adapted plans were generated automatically by using the one-step and two-step algorithms respectively to optimize the original plans; regenerated plans were created manually by experienced physicists; and non-adapted plans were produced by recalculating the dose distribution from the corresponding original plans. The deviations among these three plans were statistically analyzed by paired t-test. Results: In the PTV superior-shift case, adapted plans had significantly better PTV coverage with the two-step algorithm than with the one-step one, and there was a significant difference in V95 between adapted and non-adapted plans (p=0.0025). In the target-contraction deformation, with almost the same PTV coverage, the total lung received a lower dose with the one-step algorithm than with the two-step algorithm (p=0.0143 and 0.0126 for V20 and Dmean, respectively). In the other two deformation cases, no significant differences were observed between the two optimization algorithms. Conclusion: In geometric deformations such as target contraction, with comparable PTV coverage, the one-step algorithm gave better OAR sparing than the two-step algorithm. Conversely, adaptation using the two-step algorithm had higher efficiency.

  11. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size is proposed. The algorithm is based on the swarm intelligence of a wolf pack, fully simulating the predation behavior and prey-distribution strategy of wolves. It comprises three intelligent behaviors: migration, summons, and siege. The algorithm is further characterized by a "winner-take-all" competition rule and a "survival of the fittest" update mechanism, and it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The investigation results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global search ability.

  12. Algorithm for axial fuel optimization based on progressive steps of tabu search

    International Nuclear Information System (INIS)

    Martin del Campo, C.; Francois, J.L.

    2003-01-01

    The development of an algorithm for the axial optimization of fuel for boiling water reactors (BWR) is presented. The algorithm is based on a serial optimization process in which the best solution of each stage is the starting point of the following stage. The objective function of each stage is adapted to steer the search toward better values of one or two parameters, leaving the rest as constraints, and as the optimization advances through the stages, the fineness of the evaluation of the investigated designs is increased. The algorithm is based on three stages: genetic algorithms are used in the first, and tabu search in the two following. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to fulfill the energy generation specified for the operation cycle, without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), taking as constraints the average enrichment obtained for the best design of the first stage together with the other restrictions. The third stage, very similar to the previous one, begins with the design of the previous stage but carries out a search of the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of Unit 1 of the Laguna Verde power plant (U1-CLV) is presented. The results obtained show an advance in the handling of optimization methods and in the construction of the objective functions that should be used for the different design stages of the fuel assemblies. (Author)

  13. Step-by-step optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    International Nuclear Information System (INIS)

    Frolov, A.M.

    1986-01-01

    Exact variational calculations are treated for few-particle systems in the exponential basis of relative coordinates using nonlinear parameters. The methods of step-by-step optimization and global chaos of nonlinear parameters are applied to calculate the S and P states of ppμ, ddμ, ttμ homonuclear mesomolecules within the error ≤ ±0.001 eV. The global chaos method turned out to be well applicable to nuclear ³H and ³He systems

  14. Step-by-step optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    Energy Technology Data Exchange (ETDEWEB)

    Frolov, A M

    1986-09-01

    Exact variational calculations are treated for few-particle systems in the exponential basis of relative coordinates using nonlinear parameters. The methods of step-by-step optimization and global chaos of nonlinear parameters are applied to calculate the S and P states of ppμ, ddμ, ttμ homonuclear mesomolecules within the error ≤ ±0.001 eV. The global chaos method turned out to be well applicable to nuclear ³H and ³He systems.

  15. Predicting United States Medical Licensure Examination Step 2 clinical knowledge scores from previous academic indicators

    Directory of Open Access Journals (Sweden)

    Monteiro KA

    2017-06-01

    Kristina A Monteiro, Paul George, Richard Dollase, Luba Dumenco; Office of Medical Education, The Warren Alpert Medical School of Brown University, Providence, RI, USA. Abstract: The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensure Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in residency directors' decision-making because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators, in order to identify at-risk students with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam scores, National Board of Medical Examiners (NBME) subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in the analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK scores within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from additional support services.
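    The prediction models above are ordinary regressions of Step 2 CK score on earlier indicators. A one-predictor sketch on invented scores shows the mechanics (slope and R² via least squares); the actual models used several predictors and real cohort data:

```python
# Hypothetical data: USMLE Step 1 scores and Step 2 CK scores for 6
# students (made up for illustration; not the study's data).
step1 = [210, 225, 230, 240, 245, 255]
step2ck = [220, 232, 238, 246, 250, 262]

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

def r_squared(x, y, a, b):
    """Proportion of variance in y explained by the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

a, b = fit_line(step1, step2ck)
r2 = r_squared(step1, step2ck, a, b)
print(round(b, 3), round(r2, 3))
```

The reported "accounted for 56% of the variance" corresponds to R² = 0.56 for the multi-predictor analogue of this fit.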

  16. Bayesian emulation for optimization in multi-step portfolio decisions

    OpenAIRE

    Irie, Kaoru; West, Mike

    2016-01-01

    We discuss the Bayesian emulation approach to computational solution of multi-step portfolio studies in financial time series. "Bayesian emulation for decisions" involves mapping the technical structure of a decision analysis problem to that of Bayesian inference in a purely synthetic "emulating" statistical model. This provides access to standard posterior analytic, simulation and optimization methods that yield indirect solutions of the decision problem. We develop this in time series portf...

  17. Technical Note: A novel leaf sequencing optimization algorithm which considers previous underdose and overdose events for MLC tracking radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, Eric, E-mail: eric.wisotzky@charite.de, E-mail: eric.wisotzky@ipk.fraunhofer.de; O’Brien, Ricky; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006 (Australia)

    2016-01-15

    Purpose: Multileaf collimator (MLC) tracking radiotherapy is complex because the beam pattern must be modified for the planned intensity modulation as well as for the real-time target motion. The target motion cannot be planned; therefore, the modified beam pattern differs from the original plan and the MLC sequence needs to be recomputed online. Current MLC tracking algorithms use a greedy heuristic in that they optimize for a given time but ignore past errors. To overcome this problem, the authors have developed and improved an algorithm that minimizes large underdose and overdose regions; additionally, previous underdose and overdose events are taken into account to avoid regions with a high number of dose events. Methods: The authors improved the existing MLC motion control algorithm by introducing a cumulative underdose/overdose map. This map represents the actual projection of the planned tumor shape and logs the dose events occurring at each specific region. These events affect the dose-cost calculation and reduce the recurrence of dose events in each region. The authors studied the improvement of the new temporal optimization algorithm in terms of the L1-norm minimization of the sum of overdose and underdose, compared with not accounting for previous dose events. For evaluation, the authors simulated the delivery of 5 conformal and 14 intensity-modulated radiotherapy (IMRT) plans with 7 patient-measured 3D tumor motion traces. Results: Simulations with conformal shapes showed an improvement in the L1-norm of up to 8.5% after 100 MLC modification steps. Experiments showed comparable improvements with the same type of treatment plans. Conclusions: A novel leaf sequencing optimization algorithm which considers previous dose events for MLC tracking radiotherapy has been developed and investigated. Reductions in underdose/overdose are observed for conformal and IMRT delivery.
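    The cumulative underdose/overdose map can be caricatured in one dimension: accumulate signed mismatches between the planned and delivered aperture masks, then let the accumulated error inflate the cost of repeating a mismatch in the same region. All names and the cost form below are invented for illustration; the actual algorithm optimizes real MLC leaf sequences.

```python
# Toy 1-D sketch of the cumulative under/overdose idea: the planned
# aperture is a binary mask, the tracked delivery lags behind it, and
# an error map accumulates where the delivered mask mismatched the plan.
WIDTH = 10

def mask(start, stop):
    """Aperture as a binary mask over WIDTH positions."""
    return [1 if start <= i < stop else 0 for i in range(WIDTH)]

def update_error_map(error_map, planned, delivered):
    """Accumulate +1 for underdose (planned open, delivered closed)
    and -1 for overdose (planned closed, delivered open)."""
    for i in range(WIDTH):
        error_map[i] += planned[i] - delivered[i]
    return error_map

def mismatch_cost(planned, candidate, error_map, history_weight=0.5):
    """Mismatch now, plus a penalty that grows where past errors piled up."""
    cost = 0.0
    for i in range(WIDTH):
        diff = abs(planned[i] - candidate[i])
        cost += diff * (1.0 + history_weight * abs(error_map[i]))
    return cost

error_map = [0] * WIDTH
planned = mask(3, 8)
delivered = mask(2, 7)          # target moved: delivery shifted by one position
error_map = update_error_map(error_map, planned, delivered)

# A candidate that matches the plan is cheapest; repeating the shifted
# shape now pays extra in the two regions with accumulated error.
print(mismatch_cost(planned, mask(3, 8), error_map),
      mismatch_cost(planned, mask(2, 7), error_map))
```

Weighting current mismatches by accumulated history is what breaks the greedy behavior described in the abstract: regions that were already under- or overdosed become more expensive to miss again.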

  18. Sci-Thur PM - Colourful Interactions: Highlights 08: ARC TBI using Single-Step Optimized VMAT Fields

    International Nuclear Information System (INIS)

    Hudson, Alana; Gordon, Deborah; Moore, Roseanne; Balogh, Alex; Pierce, Greg

    2016-01-01

    Purpose: This work outlines a new TBI delivery technique to replace a lateral POP full-bolus technique. The new technique uses VMAT arc delivery, without bolus, treating the patient prone and supine. The benefits of the arc technique include improved patient experience and safety, better dose conformity, better organ-at-risk sparing, decreased therapist time, and a reduction in therapist injuries. Methods: In this work we build on a technique developed by Jahnke et al. We use standard arc fields with gantry speeds corrected for varying distance to the patient, followed by a single-step VMAT optimization on a patient CT to improve dose homogeneity and to reduce dose to the lungs (vs. blocks). To compare the arc TBI technique to our full-bolus technique, we produced plans on patient CTs for both techniques and evaluated several dosimetric parameters using an ANOVA test. Results and Conclusions: The arc technique is able to reduce both the hot areas in the body (D2% reduced from 122.2% to 111.8%, p<0.01) and the lungs (mean lung dose reduced from 107.5% to 99.1%, p<0.01), both statistically significant, while maintaining coverage (D98% = 97.8% vs. 94.6%, p=0.313, not statistically significant). We developed a more patient- and therapist-friendly TBI treatment technique that utilizes single-step optimized VMAT plans. This technique was dosimetrically equivalent to our previous lateral technique in terms of coverage and statistically superior in terms of reduced lung dose.

  19. Sci-Thur PM - Colourful Interactions: Highlights 08: ARC TBI using Single-Step Optimized VMAT Fields

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, Alana; Gordon, Deborah; Moore, Roseanne; Balogh, Alex; Pierce, Greg [Tom Baker Cancer Centre (Canada)

    2016-08-15

    Purpose: This work outlines a new TBI delivery technique to replace a lateral POP full-bolus technique. The new technique uses VMAT arc delivery, without bolus, treating the patient prone and supine. The benefits of the arc technique include improved patient experience and safety, better dose conformity, better organ-at-risk sparing, decreased therapist time, and a reduction in therapist injuries. Methods: In this work we build on a technique developed by Jahnke et al. We use standard arc fields with gantry speeds corrected for varying distance to the patient, followed by a single-step VMAT optimization on a patient CT to improve dose homogeneity and to reduce dose to the lungs (vs. blocks). To compare the arc TBI technique to our full-bolus technique, we produced plans on patient CTs for both techniques and evaluated several dosimetric parameters using an ANOVA test. Results and Conclusions: The arc technique is able to reduce both the hot areas in the body (D2% reduced from 122.2% to 111.8%, p<0.01) and the lungs (mean lung dose reduced from 107.5% to 99.1%, p<0.01), both statistically significant, while maintaining coverage (D98% = 97.8% vs. 94.6%, p=0.313, not statistically significant). We developed a more patient- and therapist-friendly TBI treatment technique that utilizes single-step optimized VMAT plans. This technique was dosimetrically equivalent to our previous lateral technique in terms of coverage and statistically superior in terms of reduced lung dose.

  20. A Two-Step Approach for Analytical Optimal Hedging with Two Triggers

    Directory of Open Access Journals (Sweden)

    Tiesong Hu

    2016-02-01

    Hedging is widely used to mitigate severe water shortages in the operation of reservoirs during droughts. Rationing is usually instituted with one hedging policy based on only one trigger, i.e., the initial storage level or the current water availability, which may perform poorly in balancing the benefits of a release during the current period against those of carryover storage for future droughts. This study proposes a novel hedging rule to improve the efficiency of a reservoir operated to supply water, in which, based on two triggers, hedging is initiated with three different hedging sub-rules through a two-step approach. In the first step, the sub-rule is triggered based on the relationship between the initial reservoir storage level and the level of the target rule curve or the firm rule curve at the end of the current period; this step is mainly concerned with whether to increase the water level in the current period. Hedging is then triggered under the sub-rule based on current water availability in the second step, in which the trigger implicitly considers both the initial and ending reservoir storage levels of the current period. Moreover, the amount of hedging is derived analytically from the Karush–Kuhn–Tucker (KKT) conditions. In addition, the hedging parameters are optimized using the improved particle swarm optimization (IPSO) algorithm coupled with a rule-based simulation. A single water-supply reservoir located in Hubei Province in central China is selected as a case study. The operation results show that the proposed rule is reasonable and significantly improves reservoir operation performance for both long-term and critical periods relative to other operation policies, such as the standard operating policy (SOP) and the most commonly used hedging rules.
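    The two-trigger structure can be sketched as a small decision function: the first trigger (initial storage vs. the rule-curve levels) selects a sub-rule, and the second trigger (current water availability) sets the rationing ratio. The thresholds, ratios, and units below are invented; the paper derives the hedging amounts analytically and optimizes the parameters with IPSO.

```python
# Toy two-trigger hedging decision (all thresholds hypothetical).
TARGET_CURVE = 60.0   # target rule-curve level at end of period
FIRM_CURVE = 30.0     # firm rule-curve level
DEMAND = 20.0         # full demand for the period

def release(initial_storage, inflow):
    """Release for one period under a two-step, two-trigger rule."""
    availability = initial_storage + inflow          # second trigger
    if initial_storage >= TARGET_CURVE:              # sub-rule 1: no hedging
        return DEMAND
    if initial_storage >= FIRM_CURVE:                # sub-rule 2: mild hedging
        ratio = 0.9 if availability < DEMAND * 3 else 1.0
    else:                                            # sub-rule 3: deep hedging
        ratio = 0.7 if availability < DEMAND * 2 else 0.85
    return min(DEMAND * ratio, availability)

print(release(70.0, 10.0), release(40.0, 5.0), release(25.0, 5.0))
```

The point of the structure is that a low initial storage level commits the operator to a hedging sub-rule, while the availability trigger then decides how deeply to ration within that sub-rule.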

  1. Two-step optimization of pressure and recovery of reverse osmosis desalination process.

    Science.gov (United States)

    Liang, Shuang; Liu, Cui; Song, Lianfa

    2009-05-01

    Driving pressure and recovery are two primary design variables of a reverse osmosis process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, which is a lumped parameter to collectively reflect membrane price, resistance, and service time. On the other hand, the optimal recovery is determined by the electricity price, initial osmotic pressure, and costs for pretreatment of raw water and handling of retentate. Concise equations were derived for the optimal net driving pressure and recovery. The dependences of the optimal net driving pressure and recovery on the electricity price, membrane price, and costs for raw water pretreatment and retentate handling were discussed.
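    The structure of the pressure optimization can be illustrated with a toy specific-cost model in which energy cost grows linearly with net driving pressure while membrane cost per unit product falls inversely with it; the optimum of a*P + b/P sits at P* = sqrt(b/a). The coefficients below are arbitrary stand-ins for the electricity price and membrane price index, not the paper's equations.

```python
# Illustrative specific-cost model (arbitrary units and coefficients).
ELEC = 0.08      # electricity-price term: cost per unit net pressure
MEMBRANE = 20.0  # lumped membrane price index: cost falls as flux rises

def specific_cost(net_pressure):
    """Cost per unit product water at a given net driving pressure."""
    return ELEC * net_pressure + MEMBRANE / net_pressure

# Grid search for the minimizing pressure; analytically the optimum of
# a*P + b/P is at P* = sqrt(b/a) = sqrt(250) here.
best_p = min((p / 10 for p in range(1, 1000)), key=specific_cost)
print(round(best_p, 1))
```

The shape mirrors the paper's qualitative conclusion: the optimal net driving pressure depends only on the electricity price and the membrane price index, not on the recovery-side cost terms.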

  2. Optimal order and time-step criterion for Aarseth-type N-body integrators

    International Nuclear Information System (INIS)

    Makino, Junichiro

    1991-01-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
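    The Hermite scheme the abstract compares against Aarseth's can be sketched for a single particle orbiting a unit point mass. This is a minimal illustration of the predictor-corrector with a constant shared time step, not Aarseth's individual-time-step machinery.

```python
import math

def acc_jerk(x, v):
    # acceleration and its time derivative (jerk) for GM = 1
    r = math.hypot(x[0], x[1])
    rv = x[0]*v[0] + x[1]*v[1]
    a = [-xi / r**3 for xi in x]
    j = [-vi / r**3 + 3.0 * rv * xi / r**5 for xi, vi in zip(x, v)]
    return a, j

def hermite_step(x, v, dt):
    a0, j0 = acc_jerk(x, v)
    # predictor: third-order Taylor series using the directly computed jerk
    xp = [x[i] + v[i]*dt + a0[i]*dt**2/2 + j0[i]*dt**3/6 for i in (0, 1)]
    vp = [v[i] + a0[i]*dt + j0[i]*dt**2/2 for i in (0, 1)]
    a1, j1 = acc_jerk(xp, vp)
    # corrector: Hermite interpolation of the force over the step
    vn = [v[i] + (a0[i] + a1[i])*dt/2 + (j0[i] - j1[i])*dt**2/12 for i in (0, 1)]
    xn = [x[i] + (v[i] + vn[i])*dt/2 + (a0[i] - a1[i])*dt**2/12 for i in (0, 1)]
    return xn, vn

# one circular orbit: radius and energy should be conserved to high accuracy
x, v = [1.0, 0.0], [0.0, 1.0]
for _ in range(628):
    x, v = hermite_step(x, v, 0.01)
```

    The direct evaluation of the jerk is what lets the corrector reach fourth order with only the two force-and-jerk evaluations per step shown here.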

  3. Improvement of the temporal resolution of cardiac CT reconstruction algorithms using an optimized filtering step

    International Nuclear Information System (INIS)

    Roux, S.; Desbat, L.; Koenig, A.; Grangeat, P.

    2005-01-01

    In this paper we study a property of the filtering step of multi-cycle reconstruction algorithms used in the field of cardiac CT. We show that the common filtering step procedure is not optimal in the case of divergent geometry and slightly decreases the temporal resolution. We propose to use the filtering procedure related to the work of Noo et al. (F. Noo, M. Defrise, R. Clackdoyle, and H. Kudo. Image reconstruction from fan-beam projections on less than a short scan. Phys. Med. Biol., 47:2525-2546, July 2002) and show that this alternative allows the optimal temporal resolution to be reached with the same computational effort. (N.C.)

  4. Two-Step Optimization for Spatial Accessibility Improvement: A Case Study of Health Care Planning in Rural China

    Directory of Open Access Journals (Sweden)

    Jing Luo

    2017-01-01

    Full Text Available A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity of spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time from the nearest hospital foremost and then considered quality of care including less waiting time as a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed “two-step optimization for spatial accessibility improvement (2SO4SAI).” The first step is to find the best locations to site new facilities by emphasizing accessibility as proximity to the nearest facilities with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China.

  5. Numerical optimization of quasi-optical mode converter for frequency step-tunable gyrotron

    International Nuclear Information System (INIS)

    Drumm, O.

    2002-08-01

    This work concentrates on the design of a quasi-optical mode converter for a frequency step-tunable gyrotron. Special attention is paid to the optimization of the conversion and forming of the excited wave of different frequencies inside the resonator. The investigations were part of the HGF-strategy-fonds project ''Optimization of Tokamak Operation with Controlled ECRH-Deposition''. In the resonator of the gyrotron, modes can be excited at frequencies between 105 and 140 GHz. With the designed converter, the desired field distribution at the output window is approximately obtained for all frequencies. The newly gained knowledge and the developed synthesis methods are applied to this practical example and verified. In this work, the waveguide antenna and the mirror system of the quasi-optical mode converter are presented separately from each other. At the beginning, the synthesis of the aperture antenna for a frequency step-tunable design of the Vlasov type as well as the Denisov type is considered. As a conclusion of the investigation, the important parameters for the design of all antennas are summarized and the frequency behavior is compared. In the second part of this work, new broadband design methods for the synthesis of the mirror surface are presented. These mirrors make optimal wave forming equally possible for all frequencies. Therefore, new quality criteria are introduced for the broadband evaluation of the mirror. Afterwards, the surface is varied until the criteria reach an optimum. For the numerical optimization, the gradient method and the extended Katsenelenbaum-Semenov algorithm are developed and applied in this work. The efficient realization of the described algorithms on a computer is the significant point. The theoretical background of the presented methods for the synthesis of a mirror system is based on the general solution of the Helmholtz equation. Due to this, these methods can be utilized in other fields outside the microwave applications in

  6. Optimization of a Multi-Step Procedure for Isolation of Chicken Bone Collagen

    OpenAIRE

    Cansu, Ümran; Boran, Gökhan

    2015-01-01

    Chicken bone is not adequately utilized despite its high nutritional value and protein content. Although not a common raw material, chicken bone can be used in many different ways besides manufacturing of collagen products. In this study, a multi-step procedure was optimized to isolate chicken bone collagen for higher yield and quality for manufacture of collagen products. The chemical composition of chicken bone was 2.9% nitrogen corresponding to about 15.6% protein, 9.5% fat, 14.7% mineral ...

  7. Estimation of total Effort and Effort Elapsed in Each Step of Software Development Using Optimal Bayesian Belief Network

    Directory of Open Access Journals (Sweden)

    Fatemeh Zare Baghiabad

    2017-09-01

    Full Text Available Accuracy in estimating the effort needed for software development makes software effort estimation a challenging issue. Besides estimation of the total effort, determining the effort elapsed in each software development step is very important, because any mistake in enterprise resource planning can lead to project failure. In this paper, a Bayesian belief network is proposed based on effective components and the software development process. In this model, feedback loops between development steps are considered, with return rates that differ for each project. The different return rates help determine the percentage of effort elapsed in each software development step distinctly. Moreover, the measurement error resulting from the optimized effort estimation is computed, and the optimal coefficients to modify the model are sought. A comparison between the proposed model and other models showed that the model can estimate the total effort with high accuracy (with a margin of error of about 0.114) and can estimate the effort elapsed in each software development step.
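    The effect of feedback (return) rates on the effort split can be illustrated with a simple deterministic sketch: each step's elapsed effort is its base effort inflated by a geometric rework series, so a return rate r contributes a factor 1/(1 - r). This is an illustrative simplification, not the paper's Bayesian belief network; all numbers are hypothetical.

```python
def effort_per_step(base_efforts, return_rates):
    """Inflate each step's base effort by its rework factor 1/(1 - r),
    then express each step as a share of the total elapsed effort."""
    elapsed = [e / (1.0 - r) for e, r in zip(base_efforts, return_rates)]
    total = sum(elapsed)
    shares = [e / total for e in elapsed]   # per-step fraction of total effort
    return total, shares

# hypothetical base efforts (person-days) and per-step return rates
total, shares = effort_per_step([10, 30, 40, 20], [0.0, 0.2, 0.5, 0.1])
```

    A step with a 50% return rate ends up absorbing twice its base effort, which shifts the percentage split toward the steps most prone to rework.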

  8. A step-by-step guide to systematically identify all relevant animal studies

    Science.gov (United States)

    Leenaars, Marlies; Hooijmans, Carlijn R; van Veggel, Nieky; ter Riet, Gerben; Leeflang, Mariska; Hooft, Lotty; van der Wilt, Gert Jan; Tillema, Alice; Ritskes-Hoitinga, Merel

    2012-01-01

    Before starting a new animal experiment, thorough analysis of previously performed experiments is essential from a scientific as well as from an ethical point of view. The method that is most suitable to carry out such a thorough analysis of the literature is a systematic review (SR). An essential first step in an SR is to search and find all potentially relevant studies. It is important to include all available evidence in an SR to minimize bias and reduce hampered interpretation of experimental outcomes. Despite the recent development of search filters to find animal studies in PubMed and EMBASE, searching for all available animal studies remains a challenge. Available guidelines from the clinical field cannot be copied directly to the situation within animal research, and although there are plenty of books and courses on searching the literature, there is no compact guide available to search and find relevant animal studies. Therefore, in order to facilitate a structured, thorough and transparent search for animal studies (in both preclinical and fundamental science), an easy-to-use, step-by-step guide was prepared and optimized using feedback from scientists in the field of animal experimentation. The step-by-step guide will assist scientists in performing a comprehensive literature search and, consequently, improve the scientific quality of the resulting review and prevent unnecessary animal use in the future. PMID:22037056

  9. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Directory of Open Access Journals (Sweden)

    Danping Wang

    2017-01-01

    Full Text Available A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history, memorized in the Binary Space Partitioning fitness tree, can effectively restrain the individuals’ revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to a local optimum. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays a competitive performance compared to the other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.
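    For reference, the plain global-best PSO that multispecies variants such as MCPSO-PSH build on fits in a few lines. The inertia and acceleration parameters below are common textbook choices, not the paper's settings, and the sphere function stands in for the benchmark suite.

```python
import random

def pso_sphere(dim=5, swarm=30, iters=200, seed=1):
    """Global-best PSO minimizing the sphere benchmark sum(x_i^2)."""
    rng = random.Random(seed)
    f = lambda x: sum(xi * xi for xi in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                    # personal best positions
    g = min(P, key=f)[:]                     # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):            # update personal and global bests
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g, f(g)

best, val = pso_sphere()
```

    The revisit problem the paper targets is visible even here: nothing stops particles from re-sampling regions they have already explored, which is what the Binary Space Partitioning fitness tree is designed to prevent.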

  10. Biodiesel production from microalgae Spirulina maxima by two step process: Optimization of process variable

    Directory of Open Access Journals (Sweden)

    M.A. Rahman

    2017-04-01

    Full Text Available Biodiesel from green energy sources is gaining tremendous attention for its ecofriendly and economical aspects. In this investigation, a two-step process was developed for the production of biodiesel from the microalga Spirulina maxima, and the best operating conditions were determined for each step. In the first stage, acid esterification was conducted to lessen the acid value (AV) from 10.66 to 0.51 mg KOH/g of the feedstock, and the optimal conditions for maximum esterified oil yield were found at molar ratio 12:1, temperature 60°C, 1% (wt%) H2SO4, and mixing intensity 400 rpm for a reaction time of 90 min. The second-stage alkali transesterification was carried out for maximum biodiesel yield (86.1%), and the optimal conditions were found at molar ratio 9:1, temperature 65°C, mixing intensity 600 rpm, and catalyst concentration 0.75% (wt%) KOH for a reaction time of 20 min. The biodiesel was analyzed according to ASTM standards, and the results were within the standard limits. These results will be helpful for producing third-generation algal biodiesel from the microalga Spirulina maxima in an efficient manner.

  11. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    Science.gov (United States)

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A

  12. Two-Step Production of Phenylpyruvic Acid from L-Phenylalanine by Growing and Resting Cells of Engineered Escherichia coli: Process Optimization and Kinetics Modeling.

    Directory of Open Access Journals (Sweden)

    Ying Hou

    Full Text Available Phenylpyruvic acid (PPA) is widely used in the pharmaceutical, food, and chemical industries. Here, a two-step bioconversion process, involving growing and resting cells, was established to produce PPA from l-phenylalanine using the engineered Escherichia coli constructed previously. First, the biotransformation conditions for growing cells were optimized (l-phenylalanine concentration 20.0 g·L-1, temperature 35°C) and a two-stage temperature control strategy (keep 20°C for 12 h and increase the temperature to 35°C until the end of biotransformation) was performed. The biotransformation conditions for resting cells were then optimized in a 3-L bioreactor and the optimized conditions were as follows: agitation speed 500 rpm, aeration rate 1.5 vvm, and l-phenylalanine concentration 30 g·L-1. The total maximal production (mass conversion rate) reached 29.8 ± 2.1 g·L-1 (99.3%) and 75.1 ± 2.5 g·L-1 (93.9%) in the flask and 3-L bioreactor, respectively. Finally, a kinetic model was established, and it was revealed that substrate and product inhibition were the main limiting factors for resting cell biotransformation.

  13. Two-Step Production of Phenylpyruvic Acid from L-Phenylalanine by Growing and Resting Cells of Engineered Escherichia coli: Process Optimization and Kinetics Modeling.

    Science.gov (United States)

    Hou, Ying; Hossain, Gazi Sakir; Li, Jianghua; Shin, Hyun-Dong; Liu, Long; Du, Guocheng; Chen, Jian

    2016-01-01

    Phenylpyruvic acid (PPA) is widely used in the pharmaceutical, food, and chemical industries. Here, a two-step bioconversion process, involving growing and resting cells, was established to produce PPA from l-phenylalanine using the engineered Escherichia coli constructed previously. First, the biotransformation conditions for growing cells were optimized (l-phenylalanine concentration 20.0 g·L-1, temperature 35°C) and a two-stage temperature control strategy (keep 20°C for 12 h and increase the temperature to 35°C until the end of biotransformation) was performed. The biotransformation conditions for resting cells were then optimized in 3-L bioreactor and the optimized conditions were as follows: agitation speed 500 rpm, aeration rate 1.5 vvm, and l-phenylalanine concentration 30 g·L-1. The total maximal production (mass conversion rate) reached 29.8 ± 2.1 g·L-1 (99.3%) and 75.1 ± 2.5 g·L-1 (93.9%) in the flask and 3-L bioreactor, respectively. Finally, a kinetic model was established, and it was revealed that the substrate and product inhibition were the main limiting factors for resting cell biotransformation.

  14. Two-Step Production of Phenylpyruvic Acid from L-Phenylalanine by Growing and Resting Cells of Engineered Escherichia coli: Process Optimization and Kinetics Modeling

    Science.gov (United States)

    Hou, Ying; Hossain, Gazi Sakir; Li, Jianghua; Shin, Hyun-dong; Liu, Long; Du, Guocheng; Chen, Jian

    2016-01-01

    Phenylpyruvic acid (PPA) is widely used in the pharmaceutical, food, and chemical industries. Here, a two-step bioconversion process, involving growing and resting cells, was established to produce PPA from l-phenylalanine using the engineered Escherichia coli constructed previously. First, the biotransformation conditions for growing cells were optimized (l-phenylalanine concentration 20.0 g·L−1, temperature 35°C) and a two-stage temperature control strategy (keep 20°C for 12 h and increase the temperature to 35°C until the end of biotransformation) was performed. The biotransformation conditions for resting cells were then optimized in 3-L bioreactor and the optimized conditions were as follows: agitation speed 500 rpm, aeration rate 1.5 vvm, and l-phenylalanine concentration 30 g·L−1. The total maximal production (mass conversion rate) reached 29.8 ± 2.1 g·L−1 (99.3%) and 75.1 ± 2.5 g·L−1 (93.9%) in the flask and 3-L bioreactor, respectively. Finally, a kinetic model was established, and it was revealed that the substrate and product inhibition were the main limiting factors for resting cell biotransformation. PMID:27851793

  15. An Umeclidinium membrane sensor; Two-step optimization strategy for improved responses.

    Science.gov (United States)

    Yehia, Ali M; Monir, Hany H

    2017-09-01

    In the scientific context of membrane sensors and improved experimentation, we devised an experimentally designed protocol for sensor optimization. A two-step strategy was implemented for the analysis of umeclidinium bromide (UMEC), a novel quinuclidine-based muscarinic antagonist used for maintenance treatment of the symptoms accompanying chronic obstructive pulmonary disease. In the first place, membrane components were screened for the ideal ion exchanger, ionophore, and plasticizer using three categorical factors at three levels in a Taguchi design. Secondly, an experimentally designed optimization followed in order to tune the sensor for the finest responses. Twelve experiments were randomly carried out in a continuous factor design. Nernstian response, detection limit, and selectivity were assigned as responses in these designs. The optimized membrane sensor contained tetrakis[3,5-bis(trifluoromethyl)phenyl]borate (0.44 wt%) and calix[6]arene (0.43 wt%) in 50.00% PVC plasticized with 49.13 wt% 2-nitrophenyl octyl ether. This sensor, along with an optimum concentration of inner filling solution (2×10-4 mol L-1 UMEC) and 2 h of soaking time, attained the design objectives. The Nernstian response approached 59.7 mV/decade and the detection limit decreased by about two orders of magnitude (8×10-8 mol L-1) through this optimization protocol. The proposed sensor was validated for UMEC determination in its linear range (3.16×10-7 to 1×10-3 mol L-1) and challenged for selective discrimination of other congeners and inorganic cations. Results of INCRUSE ELLIPTA® inhalation powder analyses obtained from the proposed sensor and the manufacturer's UPLC were statistically compared. Moreover, the proposed sensor was successfully used for the determination of UMEC in plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Two-step reconstruction method using global optimization and conjugate gradient for ultrasound-guided diffuse optical tomography.

    Science.gov (United States)

    Tavakoli, Behnoosh; Zhu, Quing

    2013-01-01

    Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
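    The rationale for the two-step inversion, a global stage to seed a gradient-based local stage, can be shown on a toy one-dimensional multimodal objective. Plain random sampling stands in for the genetic algorithm and gradient descent for CG; the function and every setting are illustrative assumptions, not the paper's forward model.

```python
import random

def f(x):
    # toy multimodal objective with two basins; the left one is global
    return (x * x - 4.0)**2 + x

def grad(x):
    return 4.0 * x * (x * x - 4.0) + 1.0

def two_step_minimize(seed=0):
    rng = random.Random(seed)
    # step 1: global search (random sampling standing in for a GA)
    x0 = min((rng.uniform(-4, 4) for _ in range(200)), key=f)
    # step 2: gradient descent refines the global candidate
    x, lr = x0, 1e-3
    for _ in range(2000):
        x -= lr * grad(x)
    return x

x = two_step_minimize()
```

    Started inside the wrong basin, the local stage alone would settle at the shallower minimum near x = +2; the global first stage is what makes the deeper basin near x = -2 reachable, mirroring why the GA seed helps the CG reconstruction.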

  17. Numerical sensitivity computation for discontinuous gradient-only optimization problems using the complex-step method

    CSIR Research Space (South Africa)

    Wilke, DN

    2012-07-01

    Full Text Available problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest... in shape optimization but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...
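    The complex-step method named in the title evaluates f(x + ih) and takes the imaginary part. There is no subtraction of nearly equal values, so h can be made tiny and the derivative is accurate to machine precision. A minimal sketch:

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-30):
    """First derivative of an analytic f via the complex-step method.

    Unlike finite differences, there is no subtractive cancellation,
    so h can be far below machine epsilon.
    """
    return f(x + 1j * h).imag / h

f = lambda z: cmath.exp(z) * cmath.sin(z)            # analytic test function
x = 1.5
exact = math.exp(x) * (math.sin(x) + math.cos(x))    # analytic derivative
approx = complex_step_derivative(f, x)
```

    The same idea is what makes the sensitivities usable in gradient-only optimization: a forward difference with h = 1e-30 would return exactly zero, while the complex step does not.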

  18. On the Convexity of Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2016-01-01

    The main result of this paper is the convexity of Step out - Step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas, Borm, and Quant (2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an

  19. First step in optimization doses in computed tomography

    International Nuclear Information System (INIS)

    Mecca, Fernando; Nascimeto, Vitor; Dias, K. Simone

    2008-01-01

    Full text: Introduction: The evolution of computed tomography in the last 10 years has made this imaging modality of utmost importance for the analysis and diagnosis of a broad range of pathologies. Thus, a significant increase in the number of examinations using CT can be observed. Hence, the radiation doses in such examinations became a factor of concern, because they increase the collective dose of the population. The use of the 'ALARA' principle in computed tomography became a necessity, and the first step in applying it is to know the doses delivered in each examination, then building a methodology to reduce their values without losing diagnostic information. Methodology: In the dose optimization process with CT scans at INCA (National Institute of Cancer, Rio de Janeiro, Brazil), examinations carried out on two distinct scanners were analyzed. For each room, samples of 10 patients were taken for each examination type, for both adult and child patients: thorax (including high-resolution exams), abdomen, pelvis and skull. The values of C_vol and P_KL were estimated from the tabulated values of nC_w as well as from the values established in the dosimetry carried out with head and abdomen phantoms. Results: In adult thorax examinations, the C_vol values ranged between 14 and 21 mGy and the P_KL values between 230 and 590 mGy·cm. For head examinations the ranges were 8 to 16 mGy and 350 to 600 mGy·cm. For abdomen, they were 6 to 16 mGy and 200 to 440 mGy·cm. For child patients the results were in the same range as for adults in all examinations. Conclusion: This work made evident the necessity of optimizing doses in protocols for children: since their doses are the same as those of adult patients, it is necessary, at the least, to study specific protocols for this kind of patient. (author)

  20. [Optimization of one-step pelletization technology of Biqiu granules by Plackett-Burman design and Box-Behnken response surface methodology].

    Science.gov (United States)

    Zhang, Yan-jun; Liu, Li-li; Hu, Jun-hua; Wu, Yun; Chao, En-xiang; Xiao, Wei

    2015-11-01

    First, with the qualified rate of granules as the evaluation index, significant influencing factors were screened by Plackett-Burman design. Then, with the qualified rate and moisture content as the evaluation indexes, the significant factors that affect the one-step pelletization technology were further optimized by Box-Behnken design; the experimental data were fitted by multiple regression with a second-order polynomial equation, and response surface methodology was used for predictive analysis of the optimal technology. The best conditions were as follows: inlet air temperature of 85 degrees C, sample introduction speed of 33 r·min(-1), and density of the concentrate of 1.10. The one-step pelletization technology of Biqiu granules optimized by Plackett-Burman design and Box-Behnken response surface methodology was stable and feasible with good predictability, which provides a reliable basis for the industrialized production of Biqiu granules.
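    The response-surface step, fitting a second-order polynomial to the measured responses and reading off its stationary point, can be sketched for a single factor. The data below are synthetic stand-ins for measured qualified rates; the actual design varies three factors simultaneously.

```python
def quad_fit(xs, ys):
    """Least-squares fit of y = b0 + b1*u + b2*u^2 in centered units
    u = x - mean(x); centering keeps the normal equations well conditioned."""
    n = len(xs)
    xbar = sum(xs) / n
    us = [x - xbar for x in xs]
    S = lambda p: sum(u**p for u in us)                     # moment sums
    T = lambda p: sum(y * u**p for u, y in zip(us, ys))
    A = [[n, S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
    b = [T(0), T(1), T(2)]
    def det3(m):
        return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    D = det3(A)
    coeffs = []
    for k in range(3):                  # Cramer's rule, column k replaced by b
        Ak = [row[:] for row in A]
        for r in range(3):
            Ak[r][k] = b[r]
        coeffs.append(det3(Ak) / D)
    return (xbar, *coeffs)

# synthetic single-factor response: qualified rate peaking near 85 degrees C
xs = [75, 80, 85, 90, 95]
ys = [88.0, 93.0, 95.0, 93.0, 88.0]
xbar, b0, b1, b2 = quad_fit(xs, ys)
x_opt = xbar - b1 / (2 * b2)    # stationary point of the fitted quadratic
```

    With a negative second-order coefficient the stationary point is the predicted optimum, which is how a fitted response surface turns a handful of design points into an operating condition.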

  1. 2-Step IMAT and 2-Step IMRT in three dimensions

    International Nuclear Information System (INIS)

    Bratengeier, Klaus

    2005-01-01

    In two dimensions, 2-Step Intensity Modulated Arc Therapy (2-Step IMAT) and 2-Step Intensity Modulated Radiation Therapy (IMRT) were shown to be powerful methods for the optimization of plans with organs at risk (OAR) (partially) surrounded by a target volume (PTV). In three dimensions, some additional boundary conditions have to be considered to establish 2-Step IMAT as an optimization method. A further aim was to create rules for ad hoc adaptations of an IMRT plan to a daily changing PTV-OAR constellation. As a test model, a cylindrically symmetric PTV-OAR combination was used. The centrally placed OAR can adopt arbitrary diameters with different gap widths toward the PTV. Along the rotation axis the OAR diameter can vary, and the OAR can even vanish at some axis positions, leaving a circular PTV. The width and weight of the second segment were the free parameters to optimize. The objective function f to minimize was the square root of the integral of the squared difference between the dose in the target volume and a reference dose. For this problem, two local minima exist. Therefore, as a secondary criterion, the magnitudes of hot and cold spots were taken into account. As a result, the solution with the larger segment width was recommended. From plane to plane, for varying radii of PTV and OAR and for different gaps between them, different sets of weights and widths were optimal. Because only one weight per segment shall be used for all planes (respectively leaf pairs), a strategy for complex three-dimensional (3-D) cases was established to choose a global weight. In a second step, a suitable segment width was chosen, minimizing f for this global weight. The concept was demonstrated in a planning study for a cylindrically symmetric example with a large range of different radii of an OAR along the patient axis. The method is discussed for some classes of tumor/organ-at-risk combinations. Non-cylindrically symmetric cases were treated exemplarily.
The product of width and weight of

  2. Using Aspen plus in thermodynamics instruction a step-by-step guide

    CERN Document Server

    Sandler, Stanley I

    2015-01-01

    A step-by-step guide for students (and faculty) on the use of Aspen in teaching thermodynamics Used for a wide variety of important engineering tasks, Aspen Plus software is a modeling tool used for conceptual design, optimization, and performance monitoring of chemical processes. After more than twenty years, it remains one of the most popular and powerful chemical engineering simulation programs used both industrially and academically. Using Aspen Plus in Thermodynamics Instruction: A Step by Step Guide introduces the reader to the use of Aspen Plus in courses in thermodynamics. It prov

  3. Two-step milling on the carbonyl iron particles and optimizing on the composite absorption

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yonggang, E-mail: xuyonggang221@163.com [Science and Technology on Electromagnetic Scattering Laboratory, Shanghai 200438 (China); Yuan, Liming; Wang, Xiaobing [Science and Technology on Electromagnetic Scattering Laboratory, Shanghai 200438 (China); Zhang, Deyuan [Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, School of Mechanical Engineering and Automation, Beihang University, Beijing 100191 (China)

    2016-08-15

    The flaky carbonyl iron particles (CIPs) were prepared using a two-step milling process. The surface morphology was characterized by scanning electron microscopy, the static magnetic property was evaluated on a vibrating sample magnetometer, and X-ray diffraction (XRD) patterns were recorded to analyze the particle crystal grain structure. The complex permittivity and permeability were measured using a vector network analyzer in the frequency range of 2–18 GHz. Then Hermite interpolation based on the calculated scattering parameters of the tested composite was used to derive the permittivity and permeability of a composite with arbitrary volume content. The results showed that the saturation magnetization value of the flaky CIPs decreased as the CIPs were flattened into flakes by high- and low-speed milling. The diffraction peaks of single α-Fe were present in the XRD pattern of the CIPs, and the characteristic peaks became broader and the intensity of the diffraction pattern lower as the high-speed milling time increased. The sample H2L20 had the largest particle size: the average diameter was 8.64 μm and the thickness was 0.59 μm, according to the fitted aspect ratio of 14.65. The permittivity and permeability derived using the Hermite interpolation were accurate compared with the tested results; the deviations were about 0.39 + j0.45 and 2.5 + j0.51. Finally, the genetic algorithm was used to optimize the thickness of the CIPs composite for a wide absorbing band of 8–18 GHz. The optimized reflection loss (RL) result showed that the absorbing composite with thickness 1.47 mm had an excellent absorbing property (RL < −10 dB) in 8–18 GHz. - Graphical abstract: The absorbing property of composites with two-step-milled CIPs could be enhanced using the genetic algorithm. - Highlights: • Flaky CIPs were prepared using a two-step milling process. • The permeability increased during the low-speed milling. • The aspect ratio of the flaky CIPs increased in the optimized process

  4. A novel two-step optimization method for tandem and ovoid high-dose-rate brachytherapy treatment for locally advanced cervical cancer.

    Science.gov (United States)

    Sharma, Manju; Fields, Emma C; Todor, Dorin A

    2015-01-01

    To present a novel method allowing fast volumetric optimization of tandem and ovoid high-dose-rate treatments and to quantify its benefits. Twenty-seven CT-based treatment plans from 6 consecutive cervical cancer patients treated with four to five intracavitary tandem and ovoid insertions were used. The initial single-step optimized plans were the manually optimized, approved, and delivered plans, created with the goal of covering the high-risk clinical target volume (HR-CTV) with D90 >90% while minimizing rectum, bladder, and sigmoid D2cc. For the two-step optimized (TSO) plan, each single-step optimized plan was replanned by adding a structure created from the prescription isodose line to the existing physician-delineated HR-CTV, rectum, bladder, and sigmoid. New, more rigorous dose-volume histogram constraints for the critical organs at risk (OARs) were used for the optimization. HR-CTV D90 and OAR D2ccs were evaluated in both plans. TSO plans had consistently smaller D2ccs for all three OARs while preserving HR-CTV D90. On plans with "excellent" CTV coverage, average D90 of 96% (91-102%), sigmoid, bladder, and rectum D2cc were reduced on average by 37% (16-73%), 28% (20-47%), and 27% (15-45%), respectively. Similar reductions were obtained on plans with "good" coverage, average D90 of 93% (90-99%). For plans with "inferior" coverage, average D90 of 81%, the coverage increased to 87% with concurrent D2cc reductions of 31%, 18%, and 11% for sigmoid, bladder, and rectum, respectively. The TSO can be added with minimal increase in planning time but with the potential for dramatic and systematic reductions in OAR D2ccs, and in some cases with a concurrent increase in target dose coverage. These single-fraction improvements would be magnified over the course of four to five intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicities. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  5. Crowdsourcing step-by-step information extraction to enhance existing how-to videos

    OpenAIRE

    Nguyen, Phu Tran; Weir, Sarah; Guo, Philip J.; Miller, Robert C.; Gajos, Krzysztof Z.; Kim, Ju Ho

    2014-01-01

    Millions of learners today use how-to videos to master new skills in a variety of domains. But browsing such videos is often tedious and inefficient because video player interfaces are not optimized for the unique step-by-step structure of such videos. This research aims to improve the learning experience of existing how-to videos with step-by-step annotations. We first performed a formative study to verify that annotations are actually useful to learners. We created ToolScape, an interac...

  6. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    Science.gov (United States)

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among the serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also compared and found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applicability of the SMART approach across different instrumental platforms. The approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
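The core of the CE selection described above is simple: for each ion pair, pick the stepped CE value whose extracted ion trace is most intense. A minimal sketch with entirely made-up intensities (the transition, CE grid, and counts are hypothetical, not values from the study):

```python
# Sketch: choosing an optimal collision energy (CE) per transition by
# comparing extracted-ion intensities across serially stepped CE scans,
# in the spirit of the SMART workflow. Data below are illustrative.

def best_ce(intensity_by_ce):
    """Return the CE value giving the highest product-ion intensity."""
    return max(intensity_by_ce, key=intensity_by_ce.get)

# Hypothetical transition m/z 285 -> 153: summed intensity at each stepped
# CE (eV) from the serial MS(All) events of one acquisition cycle.
scan = {10: 1.2e4, 20: 5.8e4, 30: 9.1e4, 40: 6.3e4, 50: 2.0e4}
optimal_ce = best_ce(scan)   # CE with the most intense extracted trace
```

In a real workflow this comparison would run over every targeted ion pair in a single sMS(All) run, which is what removes the need for per-compound infusion of standards.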

  7. OPTIMAL practice conditions enhance the benefits of gradually increasing error opportunities on retention of a stepping sequence task.

    Science.gov (United States)

    Levac, Danielle; Driscoll, Kate; Galvez, Jessica; Mercado, Kathleen; O'Neil, Lindsey

    2017-12-01

    Physical therapists should implement practice conditions that promote motor skill learning after neurological injury. Errorful and errorless practice conditions are effective for different populations and tasks. Errorful learning provides opportunities for learners to make task-relevant choices. Enhancing learner autonomy through choice opportunities is a key component of the Optimizing Performance through Intrinsic Motivation and Attention for Learning (OPTIMAL) theory of motor learning. The objective of this study was to evaluate the interaction between error opportunity frequency and OPTIMAL (autonomy-supportive) practice conditions during stepping sequence acquisition in a virtual environment. Forty healthy young adults were randomized to autonomy-supportive or autonomy-controlling practice conditions, which differed in instructional language, focus of attention (external vs internal) and positive versus negative nature of verbal and visual feedback. All participants practiced 40 trials of 4, six-step stepping sequences in a random order. Each of the 4 sequences offered different amounts of choice opportunities about the next step via visual cue presentation (4 choices; 1 choice; gradually increasing [1-2-3-4] choices, and gradually decreasing [4-3-2-1] choices). Motivation and engagement were measured by the Intrinsic Motivation Inventory (IMI) and the User Engagement Scale (UES). Participants returned 1-3 days later for retention tests, where learning was measured by time to complete each sequence. No choice cues were offered on retention. Participants in the autonomy-supportive group outperformed the autonomy-controlling group at retention on all sequences (mean difference 2.88s, p errorful (4 choice) sequence (p error opportunities over time, suggest that participants relied on implicit learning strategies for this full body task and that feedback about successes minimized errors and reduced their potential information-processing benefits. Subsequent

  8. Tax-Optimal Step-Up and Imperfect Loss Offset

    Directory of Open Access Journals (Sweden)

    Markus Diller

    2012-05-01

    Full Text Available In the field of mergers and acquisitions, German and international tax law allow for several opportunities to step up a firm's assets, i.e., to revalue the assets at fair market values. When a step-up is performed, the taxpayer recognizes a taxable gain but also obtains tax benefits in the form of higher future depreciation allowances associated with stepping up the tax base of the assets. This tax-planning problem is well known in the taxation literature and can also be applied to firm valuation in the presence of taxation. However, the known models usually assume a perfect loss offset. If this assumption is abandoned, the depreciation allowances may lose value because they become tax effective at a later point in time, or never if there are not enough cash flows to offset them against. This aspect is especially relevant if future cash flows are assumed to be uncertain. This paper shows that a step-up may be disadvantageous, or a firm overvalued, if these aspects are not integrated into the basic calculus. Compared to the standard approach, assets should be stepped up only in a few cases and - under specific conditions - at a later point in time. Firm values may be considerably lower under imperfect loss offset.
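The argument above can be illustrated numerically: when taxable income is thin, part of the depreciation tax shield is used late or lost, so its present value falls. A toy sketch, with a crude loss-offset rule and tax rate, discount rate, and cash flows chosen purely for illustration (not calibrated to German law or the paper's model):

```python
# Sketch: present value of depreciation tax savings under a crude imperfect
# loss-offset rule (unused depreciation in a year is simply lost). All
# parameters and cash flows are hypothetical.

def npv_tax_shield(depreciation, cash_flows, tax=0.3, r=0.08):
    """PV of depreciation tax savings, with the shield in each year limited
    by that year's taxable income (a stand-in for imperfect loss offset)."""
    pv = 0.0
    for t, (dep, cf) in enumerate(zip(depreciation, cash_flows), start=1):
        usable = min(dep, max(cf, 0.0))      # shield capped by income
        pv += tax * usable / (1 + r) ** t
    return pv

dep = [100, 100, 100]
full = npv_tax_shield(dep, [500, 500, 500])   # ample profits: full shield
poor = npv_tax_shield(dep, [50, 0, 500])      # thin profits: shield lost
```

With ample profits the step-up's depreciation benefit is worth materially more than with thin profits, which is why a valuation assuming perfect loss offset can overstate the attractiveness of a step-up.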

  9. Symplectic integrators with adaptive time steps

    Science.gov (United States)

    Richardson, A. S.; Finn, J. M.

    2012-01-01

    In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
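The adaptive schemes discussed above are judged against the structure-preserving behavior of a fixed-step symplectic method, whose energy error stays bounded instead of drifting. A minimal fixed-step baseline for a harmonic oscillator (the adaptive variants then replace dt via dt = Δ(q, p) dτ, which is not shown here):

```python
# Sketch: fixed-step leapfrog (velocity Verlet) for a unit harmonic
# oscillator, the symplectic baseline whose bounded energy error the
# adaptive-time-step schemes above aim to retain.

def leapfrog(q, p, dt, n, force=lambda q: -q):
    """Advance (q, p) by n steps of size dt with a kick-drift-kick update."""
    for _ in range(n):
        p += 0.5 * dt * force(q)
        q += dt * p
        p += 0.5 * dt * force(q)
    return q, p

q0, p0 = 1.0, 0.0
qn, pn = leapfrog(q0, p0, dt=0.01, n=10_000)       # ~16 oscillation periods
energy_err = abs(0.5 * (qn*qn + pn*pn) - 0.5)      # drift from H = 1/2
```

Even after many periods the energy error remains at the O(dt²) level rather than growing secularly; the paper's point is that naive step-size adaptation destroys exactly this property unless the transformed, non-Hamiltonian equations are integrated with a structure-preserving method.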

  10. Algorithm of axial fuel optimization based in progressive steps of turned search; Algoritmo de optimizacion axial de combustible basado en etapas progresivas de busqueda de entorno

    Energy Technology Data Exchange (ETDEWEB)

    Martin del Campo, C.; Francois, J.L. [Laboratorio de Analisis en Ingenieria de Reactores Nucleares, FI-UNAM, Paseo Cuauhnahuac 8532, Jiutepec, Morelos (Mexico)

    2003-07-01

    The development of an algorithm for the axial fuel optimization of boiling water reactors (BWR) is presented. The algorithm is based on a serial optimization process in which the best solution of each stage is the starting point of the following stage. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as constraints. As the optimization advances through the stages, the fineness of the evaluation of the investigated designs increases. The algorithm consists of three stages: the first uses genetic algorithms and the two following ones Tabu Search. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operation cycle without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), taking as constraints the average enrichment obtained for the best design of the first stage together with the other restrictions. The third stage, very similar to the previous one, begins with the design of the previous stage but carries out a search of the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of Unit 1 of the Laguna Verde power plant (U1-CLV) is presented. The results obtained show an advance in the handling of optimization methods and in the construction of the objective functions to be used for the different design stages of fuel assemblies. (Author)
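The staged structure above, where the best design of one stage seeds the next stage's refinement under a different objective, can be sketched on a toy problem. The objective below is a stand-in for the core-physics evaluations; the two stages mimic a coarse GA-like global search followed by a Tabu-like local refinement (both heavily simplified, no tabu list):

```python
import random

# Sketch of serial staged optimization: stage 1 samples globally, stage 2
# refines the stage-1 winner locally. The quadratic objective is a toy
# stand-in for lattice-physics design evaluations.

def toy_objective(x):
    return (x - 3.7) ** 2            # minimize; optimum at x = 3.7

def stage1_global(rng, n=200):       # coarse, GA-like global sampling
    return min((rng.uniform(0, 10) for _ in range(n)), key=toy_objective)

def stage2_local(x, rng, n=200, step=0.05):   # greedy local refinement
    for _ in range(n):
        cand = x + rng.uniform(-step, step)
        if toy_objective(cand) < toy_objective(x):
            x = cand                 # accept only improving neighbors
    return x

rng = random.Random(42)
seed = stage1_global(rng)            # best design of stage 1
best = stage2_local(seed, rng)       # stage 2 starts from that design
```

The key design choice mirrored here is that later stages never restart from scratch: they inherit the previous stage's best solution and tighten the search around it, which is what lets the algorithm afford increasingly expensive evaluations in later stages.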

  11. Zinc hexacyanoferrate film as an effective protecting layer in two-step and one-step electropolymerization of pyrrole on zinc substrate

    Energy Technology Data Exchange (ETDEWEB)

    Pournaghi-Azar, M.H. [Electroanalytical Chemistry Laboratory, Faculty of Chemistry, University of Tabriz, Tabriz (Iran, Islamic Republic of)]. E-mail: pournaghiazar@tabrizu.ac.ir; Nahalparvari, H. [Electroanalytical Chemistry Laboratory, Faculty of Chemistry, University of Tabriz, Tabriz (Iran, Islamic Republic of)

    2005-03-15

    The two-step and one-step electrosynthesis of polypyrrole (PPy) films on a zinc substrate is described. The two-step process includes (i) pretreatment of the zinc surface with hexacyanoferrate ion in aqueous medium in order to form a zinc hexacyanoferrate (ZnHCF) film, a non-blocking passive layer, on the surface with a view to preventing its reactivity, and (ii) electropolymerization of pyrrole on the ZnHCF|Zn-modified electrode in aqueous pyrrole solution. In this context, both non-electrolytic and electrolytic procedures were adopted, and the effect of experimental conditions such as supporting electrolyte, pH, and temperature of the solution in the zinc surface pretreatment step, as well as pyrrole concentration and electrochemical technique in the polymerization step, was investigated. By optimizing the experimental conditions in both steps, we obtained homogeneous and strongly adherent PPy films on the zinc substrate. The one-step process is based on the use of an aqueous medium containing Fe(CN){sub 6}{sup 4-} and pyrrole. The ferrocyanide ion passivates the substrate by formation of a ZnHCF film during the electropolymerization of pyrrole and therefore makes it possible to obtain strongly adherent PPy films, with controlled thickness, either by cyclic voltammetry or by electrolysis at constant current or constant potential, without any prior treatment of the zinc electrode surface. The polypyrrole films deposited on the zinc electrode were characterized by cyclic voltammetry and scanning electron microscopy (SEM).

  12. Zinc hexacyanoferrate film as an effective protecting layer in two-step and one-step electropolymerization of pyrrole on zinc substrate

    International Nuclear Information System (INIS)

    Pournaghi-Azar, M.H.; Nahalparvari, H.

    2005-01-01

    The two-step and one-step electrosynthesis of polypyrrole (PPy) films on a zinc substrate is described. The two-step process includes (i) pretreatment of the zinc surface with hexacyanoferrate ion in aqueous medium in order to form a zinc hexacyanoferrate (ZnHCF) film, a non-blocking passive layer, on the surface with a view to preventing its reactivity, and (ii) electropolymerization of pyrrole on the ZnHCF|Zn-modified electrode in aqueous pyrrole solution. In this context, both non-electrolytic and electrolytic procedures were adopted, and the effect of experimental conditions such as supporting electrolyte, pH, and temperature of the solution in the zinc surface pretreatment step, as well as pyrrole concentration and electrochemical technique in the polymerization step, was investigated. By optimizing the experimental conditions in both steps, we obtained homogeneous and strongly adherent PPy films on the zinc substrate. The one-step process is based on the use of an aqueous medium containing Fe(CN)6(4-) and pyrrole. The ferrocyanide ion passivates the substrate by formation of a ZnHCF film during the electropolymerization of pyrrole and therefore makes it possible to obtain strongly adherent PPy films, with controlled thickness, either by cyclic voltammetry or by electrolysis at constant current or constant potential, without any prior treatment of the zinc electrode surface. The polypyrrole films deposited on the zinc electrode were characterized by cyclic voltammetry and scanning electron microscopy (SEM)

  13. GPU-Monte Carlo based fast IMRT plan optimization

    Directory of Open Access Journals (Sweden)

    Yongbao Li

    2014-03-01

    Full Text Available Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time from repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited doses are stored separately for each beamlet based on the index. Because of the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet. Plan optimization follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow was developed. The high efficiency allows the use of MC for IMRT optimizations.
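The second step's importance-guided sampling, spending the Monte Carlo particle budget where the first-round fluence map says it matters, reduces to a proportional allocation. A minimal sketch; the fluence values and budget are illustrative, not from the study:

```python
# Sketch: second-step particle allocation, where the number of MC histories
# per beamlet is proportional to the fluence intensity found in the first,
# coarse optimization round. Numbers are hypothetical.

def allocate_particles(intensities, total_particles):
    """Distribute a fixed particle budget proportionally to beamlet intensity.
    Beamlets with zero intensity receive no refinement particles."""
    s = sum(intensities)
    return [round(total_particles * w / s) for w in intensities]

fluence = [0.0, 1.0, 3.0, 6.0]             # approximate map from step one
budget = allocate_particles(fluence, 1_000_000)
```

This is why the two-step workflow is cheap: beamlets the optimizer has effectively switched off get no expensive high-statistics recalculation in round two.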

  14. Optimization of control poison management by dynamic programming

    International Nuclear Information System (INIS)

    Ponzoni Filho, P.

    1974-01-01

    A dynamic programming approach was used to optimize the poison distribution in the core of a nuclear power plant between reloadings. The method was applied to a 500 MWe PWR subject to two different fuel management policies. The beginning of a stage is marked by a fuel management decision. The state vector of the system is defined by the burnups in the three fuel zones of the core. The change of the state vector is computed in several time steps. A criticality-conserving poison management pattern is chosen at the beginning of each step. The burnups at the end of a step are obtained by means of depletion calculations, assuming a constant neutron distribution during the step. The violation of burnup and power peaking constraints during a step eliminates the corresponding end states. In the case of identical end states, all except the one that produced the largest amount of energy are eliminated. Among the several end states, one is selected for the subsequent stage, where it is subjected to a fuel management decision. This selection is based on a previously chosen optimality criterion, such as maximization of discharged fuel burnup or minimization of energy generation cost. (author)
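The stage-wise pruning described above (for identical end states, keep only the path with the largest energy) is the core of a dynamic program. A minimal sketch under toy assumptions: a single scalar stands in for the three-zone burnup state, and the transitions and energy rewards are invented for illustration:

```python
# Sketch: backbone of a stage-wise dynamic program. Each stage applies every
# admissible decision to every surviving state, and dominated paths to the
# same end state are pruned. States, decisions, and rewards are toy values.

def dp_max_energy(stages, start_states, transitions):
    """transitions(state, decision) -> (next_state, energy_gain);
    returns the best total energy over all decision sequences."""
    best = {s: 0.0 for s in start_states}        # energy accumulated so far
    for _ in range(stages):
        nxt = {}
        for s, e in best.items():
            for d in (0, 1):                     # two candidate decisions
                s2, de = transitions(s, d)
                if e + de > nxt.get(s2, float("-inf")):
                    nxt[s2] = e + de             # keep only the best path
        best = nxt
    return max(best.values())

# Toy model: decision 1 burns harder (more energy, larger burnup increment).
energy = dp_max_energy(
    stages=3, start_states=[0],
    transitions=lambda s, d: (s + (2 if d else 1), 3.0 if d else 2.0),
)
```

In the real problem the transition would be a depletion calculation and states violating burnup or power-peaking constraints would simply be dropped before the pruning step.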

  15. Optimization of Two-Step Acid-Catalyzed Hydrolysis of Oil Palm Empty Fruit Bunch for High Sugar Concentration in Hydrolysate

    Directory of Open Access Journals (Sweden)

    Dongxu Zhang

    2014-01-01

    Full Text Available Getting high sugar concentrations in lignocellulosic biomass hydrolysate with reasonable sugar yields is commercially attractive but very challenging. Two-step acid-catalyzed hydrolysis of oil palm empty fruit bunch (EFB) was conducted to obtain high sugar concentrations in the hydrolysate. A biphasic kinetic model was used to guide the optimization of the first-step dilute acid-catalyzed hydrolysis of EFB. A total sugar concentration of 83.0 g/L, with a xylose concentration of 69.5 g/L and a xylose yield of 84.0%, was experimentally achieved, in good agreement with the model predictions under optimal conditions (3% H2SO4 and 1.2% H3PO4, w/v, liquid-to-solid ratio 3 mL/g, 130°C, and 36 min). To further increase the total sugar and xylose concentrations in the hydrolysate, a second hydrolysis step was performed by adding fresh EFB to the hydrolysate at 130°C for 30 min, giving a total sugar concentration of 114.4 g/L with a xylose concentration of 93.5 g/L and a xylose yield of 56.5%. To the best of our knowledge, these total sugar and xylose concentrations are the highest among those ever reported for acid-catalyzed hydrolysis of lignocellulose.

  16. DOA Estimation of Low Altitude Target Based on Adaptive Step Glowworm Swarm Optimization-multiple Signal Classification Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Hao

    2015-06-01

    Full Text Available The traditional MUltiple SIgnal Classification (MUSIC) algorithm requires significant computational effort and cannot be employed for the Direction Of Arrival (DOA) estimation of targets in a low-altitude multipath environment. As such, a novel MUSIC approach is proposed on the basis of the Adaptive Step Glowworm Swarm Optimization (ASGSO) algorithm. Virtual spatial smoothing of the matrix formed by each snapshot is used to decorrelate the multipath signal and establish a full-order correlation matrix. ASGSO optimizes the objective function and estimates the elevation of the target. The simulation results suggest that the proposed method can overcome the low-altitude multipath effect and estimate the DOA of the target readily and precisely without loss of effective radar aperture.

  17. TH-E-BRE-08: GPU-Monte Carlo Based Fast IMRT Plan Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y; Tian, Z; Shi, F; Jiang, S; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2014-06-15

    Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time from repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited doses are stored separately for each beamlet based on the index. Because of the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, rough beamlet dose calculations are conducted with only a small number of particles per beamlet. Plan optimization follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10{sup 5} particles per beamlet in the first round and 10{sup 8} particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow was developed. The high efficiency allows the use of MC for IMRT optimizations.

  18. Rotor Cascade Shape Optimization with Unsteady Passing Wakes Using Implicit Dual-Time Stepping and a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Eun Seok Lee

    2003-01-01

    Full Text Available An axial turbine rotor cascade-shape optimization with unsteady passing wakes was performed to obtain improved aerodynamic performance using an unsteady-flow, Reynolds-averaged Navier-Stokes equations solver based on explicit finite differences, Runge-Kutta multistage time marching, and the diagonalized alternating-direction implicit scheme. The code utilized Baldwin-Lomax algebraic and k-ε turbulence modeling. The full-approximation-storage multigrid method and preconditioning were implemented as iterative convergence-acceleration techniques. An implicit dual-time-stepping method was incorporated in order to simulate the unsteady flow fields. The objective function was defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed during the optimization. The design variables were several geometric parameters characterizing airfoil leading edge, camber, stagger angle, and inter-row spacing. A genetic algorithm was used as the optimizer, and the penalty method was introduced for combining the constraints with the objective function. Each individual's objective function was computed simultaneously on a 32-processor distributed-memory computer. The optimization results indicated that only minor improvements are possible in unsteady rotor/stator aerodynamics by varying these geometric parameters.
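The penalty method mentioned above folds the fixed mass-flow constraint into the GA's fitness by adding a weighted violation term. A toy sketch with a one-dimensional design variable and quadratic surrogates in place of the CFD evaluation (all models, weights, and GA settings here are illustrative):

```python
import random

# Sketch: penalty-method fitness plus a bare-bones GA (truncation selection
# and Gaussian mutation, no crossover). The quadratic surrogates stand in
# for unsteady CFD evaluations of one candidate geometry.

def fitness(x, target_flow=1.0, penalty=100.0):
    loss = (x - 0.3) ** 2                  # total-pressure-loss surrogate
    flow = 0.5 + x                         # mass-flow surrogate
    return loss + penalty * (flow - target_flow) ** 2   # constraint penalty

def ga_minimize(rng, pop=30, gens=60, sigma=0.05):
    xs = [rng.uniform(0.0, 1.0) for _ in range(pop)]
    for _ in range(gens):
        xs.sort(key=fitness)
        parents = xs[: pop // 2]           # truncation selection (elitist)
        xs = parents + [p + rng.gauss(0, sigma) for p in parents]
    return min(xs, key=fitness)

best = ga_minimize(random.Random(0))
```

With a large penalty weight the optimum is pulled toward the constraint-satisfying point (flow ≈ target) rather than the unconstrained loss minimum, which is exactly the trade the penalty method encodes.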

  19. Calculation of depletion with optimal distribution of initial control poison

    International Nuclear Information System (INIS)

    Castro Lobo, P.D. de.

    1978-03-01

    The spatial depletion equations are linearized within the time intervals and their solution is obtained by modal analysis. At the beginning of life, an optimal poison distribution that maximizes neutron economy, and the corresponding flux, are determined. At the start of each subsequent time step the flux distribution is obtained by a perturbation method relative to the start of the previous time step. The problem was also studied with a constant poison distribution in order to evaluate the influence of the poison at the beginning of life. The results obtained by the modal expansion techniques are satisfactory. However, the optimization of the initial distribution of the control poison does not indicate any significant effect on the core life.

  20. Microsoft® SQL Server® 2008 Step by Step

    CERN Document Server

    Hotek, Mike

    2009-01-01

    Teach yourself SQL Server 2008-one step at a time. Get the practical guidance you need to build database solutions that solve real-world business problems. Learn to integrate SQL Server data in your applications, write queries, develop reports, and employ powerful business intelligence systems.Discover how to:Install and work with core components and toolsCreate tables and index structuresManipulate and retrieve dataSecure, manage, back up, and recover databasesApply tuning plus optimization techniques to generate high-performing database applicationsOptimize availability through clustering, d

  1. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    Science.gov (United States)

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear, so achieving an accurate solution to such a problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.
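The second step above is a standard simulated-annealing refinement: accept any improving move, and accept worsening moves with a probability that decays with the temperature. A minimal sketch on a toy one-dimensional cost in place of the dispatch model (the schedule, move size, and cost function are all illustrative):

```python
import math
import random

# Sketch: simulated-annealing refinement of a preliminary solution, as in
# step two of the 2-step algorithm. The 1-D cost below is a toy stand-in
# for the multiobjective dispatch model.

def simulated_annealing(cost, x0, rng, t0=1.0, cooling=0.995, iters=2000):
    x, best = x0, x0
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-0.5, 0.5)          # local move
        d = cost(cand) - cost(x)
        if d < 0 or rng.random() < math.exp(-d / t):
            x = cand                               # accept worse moves early
        if cost(x) < cost(best):
            best = x
        t *= cooling                               # geometric cooling
    return best

cost = lambda x: (x - 2.0) ** 2 + 0.1 * math.sin(5 * x)
sol = simulated_annealing(cost, 0.0, random.Random(1))
```

The early high-temperature phase lets the search escape the small ripples in the cost surface, while the cooling schedule gradually freezes it into the best basin found, which is why it is a reasonable choice for refining the Pareto-set candidates from step one.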

  2. STEPS: a grid search methodology for optimized peptide identification filtering of MS/MS database search results.

    Science.gov (United States)

    Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D

    2013-03-01

    For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection--referred to as STEPS--utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of the number of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
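The grid-search idea above is a brute-force sweep: enumerate every combination of user-defined filter thresholds, score each by the number of identifications passing at acceptable confidence, and keep the best set. A minimal sketch with a mock PSM list and a crude decoy-based confidence rule (the scoring fields, thresholds, and data are all invented for illustration):

```python
import itertools

# Sketch: STEPS-style exhaustive sweep over filter-parameter combinations.
# Each PSM is (search score, mass error in ppm, is_decoy); all values are
# mock data, and the confidence rule is a crude stand-in.

psms = [
    (2.5, 1.0, False), (2.1, 4.0, False), (1.2, 0.5, True),
    (3.0, 2.0, False), (0.8, 9.0, True), (2.8, 6.0, False),
]

def count_ids(min_score, max_ppm, max_decoys=0):
    """Identifications passing the filters; -1 if too many decoys survive."""
    kept = [(s, e, d) for s, e, d in psms if s >= min_score and e <= max_ppm]
    decoys = sum(d for _, _, d in kept)
    return len(kept) if decoys <= max_decoys else -1

# User-defined parameter ranges, swept exhaustively.
grid = itertools.product([0.5, 1.5, 2.0, 2.5], [2.0, 5.0, 10.0])
best_params = max(grid, key=lambda p: count_ids(*p))
```

Because every combination is evaluated, the result is the globally best parameter set over the user's grid, at a cost that grows multiplicatively with the number of swept parameters, which is why the ranges are user-defined rather than open-ended.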

  3. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    Science.gov (United States)

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear, so achieving an accurate solution to such a problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663

  4. The effect of topography on the choice of optimal step intermediate supports along the line of the cable metro

    Directory of Open Access Journals (Sweden)

    Lagerev A.V.

    2017-09-01

    differences in altitude along the length of the transport route: only if the surface inclination exceeds 50...60 degrees does the cost of 1 km of line start to exceed the cost of 1 km of line laid along a strictly horizontal surface. At small angles of surface inclination (less than 6...8 degrees), minimizing the cost of construction of the cable metro line requires more frequent installation of intermediate supports. However, despite the possibility of building lower and cheaper supports, increased values of the cost of 1 km of the optimal variant of the line are observed. At large angles of surface inclination there is a need for less frequent installation of higher intermediate supports. Within the tilt-angle range 10...60 degrees the range of variation of the optimal step is small, amounting to no more than ±10% of the value of the step at a surface inclination of 10 degrees. In the range of small angles of inclination of the terrain surface (3...6 degrees), abrupt changes in the basic technical and economic characteristics of the cable metro lines are observed. This is due to a change in the sagging form of the load-bearing ropes. Increasing the number of supporting ropes has a very small economic effect (within 4%) and does not change such optimal characteristics of the line as the installation step and height of the intermediate supports or the shape and sagging boom of the carrying ropes; however, it significantly reduces the required diameter of the supporting rope and the horizontal tension force. An increase in the aggregate strength of the carrying ropes provides a directly proportional increase in the optimal installation step of the intermediate supports and leads to a marked decrease in the cost of construction of the cable metro lines. Aggregate strength has practically no influence on the height of the intermediate supports and the horizontal tension force of the carrying ropes, but it does lead to a significant reduction in the required diameter of the supporting ropes.

  5. Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft

    Science.gov (United States)

    Rasotto, M.; Armellin, R.; Di Lizia, P.

    2016-03-01

    An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using the calculus of variations and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function as a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.
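
    The continuation trick for the discontinuous thrust profile can be illustrated with a sigmoid smoothing of a bang-bang law: each problem is solved with a finite smoothing parameter, which is then driven toward zero with the previous solution as a warm start. The switching-function form below is a toy assumption:

```python
import math

def smoothed_control(switching_function, eps):
    """Replace the discontinuous bang-bang law u = 1 if S < 0 else 0
    with a smooth sigmoid; eps is the continuation parameter."""
    def u(t):
        S = switching_function(t)
        return 1.0 / (1.0 + math.exp(S / eps))
    return u
```

Solving a sequence of problems with eps = 1, 0.1, 0.01, ..., each warm-started from the previous solution, recovers the bang-bang structure in the limit without requiring a priori knowledge of the switching times.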

  6. Instant PageSpeed optimization

    CERN Document Server

    Jaiswal, Sanjeev

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Instant PageSpeed Optimization is a hands-on guide that provides a number of clear, step-by-step exercises for optimizing your websites for better performance and improving their efficiency.Instant PageSpeed Optimization is aimed at website developers and administrators who wish to make their websites load faster without any errors and consume less bandwidth. It's assumed that you will have some experience in basic web technologies like HTML, CSS3, JavaScript, and the basics of netw

  7. Optimization of One-Step In Situ Transesterification Method for Accurate Quantification of EPA in Nannochloropsis gaditana

    Directory of Open Access Journals (Sweden)

    Yuting Tang

    2016-11-01

    Microalgae are a valuable source of lipid feedstocks for biodiesel and valuable omega-3 fatty acids. Nannochloropsis gaditana has emerged as a promising producer of eicosapentaenoic acid (EPA) due to its fast growth rate and high EPA content. In the present study, the fatty acid profile of Nannochloropsis gaditana was found to be naturally high in EPA and devoid of docosahexaenoic acid (DHA), thereby providing an opportunity to maximize the efficacy of EPA production. Using an optimized one-step in situ transesterification method (methanol:biomass = 90 mL/g; HCl 5% by vol.; 70 °C; 1.5 h), the maximum fatty acid methyl ester (FAME) yield of Nannochloropsis gaditana cultivated under rich conditions was quantified as 10.04% ± 0.08% by weight, with EPA yields as high as 4.02% ± 0.17% based on dry biomass. The total FAME and EPA yields were 1.58- and 1.23-fold higher, respectively, than those obtained using the conventional two-step method (solvent system: methanol and chloroform). This one-step in situ method provides a fast and simple way to measure FAME yields and could serve as a promising method to generate eicosapentaenoic acid methyl ester from microalgae.

  8. Simulation-based optimization parametric optimization techniques and reinforcement learning

    CERN Document Server

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...

  9. Leaf position optimization for step-and-shoot IMRT

    International Nuclear Information System (INIS)

    Gersem, Werner de; Claus, Filip; Wagter, Carlos de; Duyse, Bart van; Neve, Wilfried de

    2001-01-01

    Purpose: To describe the theoretical basis, the algorithm, and implementation of a tool that optimizes segment shapes and weights for step-and-shoot intensity-modulated radiation therapy delivered by multileaf collimators. Methods and Materials: The tool, called SOWAT (Segment Outline and Weight Adapting Tool) is applied to a set of segments, segment weights, and corresponding dose distribution, computed by an external dose computation engine. SOWAT evaluates the effects of changing the position of each collimating leaf of each segment on an objective function, as follows. Changing a leaf position causes a change in the segment-specific dose matrix, which is calculated by a fast dose computation algorithm. A weighted sum of all segment-specific dose matrices provides the dose distribution and allows computation of the value of the objective function. Only leaf position changes that comply with the multileaf collimator constraints are evaluated. Leaf position changes that tend to decrease the value of the objective function are retained. After several possible positions have been evaluated for all collimating leaves of all segments, an external dose engine recomputes the dose distribution, based on the adapted leaf positions and weights. The plan is evaluated. If the plan is accepted, a segment sequencer is used to make the prescription files for the treatment machine. Otherwise, the user can restart SOWAT using the new set of segments, segment weights, and corresponding dose distribution. The implementation was illustrated using two example cases. The first example is a T1N0M0 supraglottic cancer case that was distributed as a multicenter planning exercise by investigators from Rotterdam, The Netherlands. The exercise involved a two-phase plan. Phase 1 involved the delivery of 46 Gy to a concave-shaped planning target volume (PTV) consisting of the primary tumor volume and the elective lymph nodal regions II-IV on both sides of the neck. 
Phase 2 involved a boost of
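
    The SOWAT loop, evaluating small leaf-position changes and keeping only those that lower the objective, can be illustrated on a toy 1D analogue. A real implementation works on 2D segment dose matrices and enforces MLC constraints; the profile, objective, and single-segment setup below are assumptions:

```python
import numpy as np

def segment_dose(left, right, n_bins):
    """Unit-fluence profile of one segment: open between the two leaves."""
    d = np.zeros(n_bins)
    d[left:right] = 1.0
    return d

def total_dose(segments, weights, n_bins):
    """Weighted sum of segment-specific dose profiles."""
    return sum(w * segment_dose(l, r, n_bins)
               for (l, r), w in zip(segments, weights))

def objective(dose, target):
    return float(np.sum((dose - target) ** 2))

def optimize_leaves(segments, weights, target, n_passes=3):
    """Greedy single-leaf search in the spirit of SOWAT: try moving each
    leaf by +/-1 bin and keep only changes that lower the objective."""
    n_bins = len(target)
    segments = [list(s) for s in segments]
    best = objective(total_dose(segments, weights, n_bins), target)
    for _ in range(n_passes):
        for seg in segments:
            for i in (0, 1):
                for delta in (-1, 1):
                    old = seg[i]
                    seg[i] = old + delta
                    if 0 <= seg[0] < seg[1] <= n_bins:  # leaf-collision check
                        f = objective(total_dose(segments, weights, n_bins),
                                      target)
                        if f < best:
                            best = f
                            continue  # keep the improving move
                    seg[i] = old  # revert invalid or non-improving move
    return segments, best
```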

  10. Five-step neck lift: integrating anatomy with clinical practice to optimize results.

    Science.gov (United States)

    Narasimhan, Kailash; Stuzin, James M; Rohrich, Rod J

    2013-08-01

    A harmonious and youthful-appearing neckline is arguably the most vital aspect of a successful facial rejuvenation. Without sound principles, the neck appears skeletonized, tethered, and hollow. The anatomical studies that the authors have performed regarding the neck, jowl, and subplatysmal elements have influenced the techniques that they now use. The authors' approach modifies the classic techniques of the past and seeks a nuanced approach to each patient through resuspension and reshaping of the deeper neck elements. In this article, the authors apply their anatomical research and cadaveric studies to demonstrate and support their neck-lift techniques. The authors integrate their knowledge to describe how the technique of one of the senior authors (R.J.R.) has evolved over time. The main tenets of the authors' approach have evolved into a sequence that involves skin undermining over the neck and cheek; submental access to the neck, with possible excision of fat and midline plication of the platysma with release of the muscle inferiorly; platysmal window suspension laterally; precise release of the mandibular septum and ligament if needed; and finally redraping of the superficial musculoaponeurotic system (SMAS) by plication or SMASectomy. These five steps ensure correction of jowling, a smooth jawline, and a well-shaped neck. The five-step neck lift helps to optimize results in creating the ideal neck contour. The authors provide four points that should be considered in any neck-lift procedure. The end result is a well-defined, well-contoured neck, with an approach grounded in sound anatomical principles.

  11. An optimized fed-batch culture strategy integrated with a one-step fermentation improves L-lactic acid production by Rhizopus oryzae.

    Science.gov (United States)

    Fu, Yongqian; Sun, Xiaolong; Zhu, Huayue; Jiang, Ru; Luo, Xi; Yin, Longfei

    2018-05-21

    In previous work, we proposed a novel modified one-step fermentation fed-batch strategy to efficiently generate L-lactic acid (L-LA) using Rhizopus oryzae. In this study, to further enhance efficiency of L-LA production through one-step fermentation in fed-batch cultures, we systematically investigated the initial peptone- and glucose-feeding approaches, including different initial peptone and glucose concentrations and maintained residual glucose levels. Based on the results of this study, culturing R. oryzae with initial peptone and glucose concentrations of 3.0 and 50.0 g/l, respectively, using a fed-batch strategy is an effective approach of producing L-LA through one-step fermentation. Changing the residual glucose had no obvious effect on the generation of L-LA. We determined the maximum LA production and productivity to be 162 g/l and 6.23 g/(l·h), respectively, during the acid production stage. Compared to our previous work, there was almost no change in L-LA production or yield; however, the productivity of L-LA increased by 14.3%.

  12. Qualitative and quantitative assessment of step size adaptation rules

    DEFF Research Database (Denmark)

    Krause, Oswin; Glasmachers, Tobias; Igel, Christian

    2017-01-01

    We present a comparison of step size adaptation methods for evolution strategies, covering recent developments in the field. Following recent work by Hansen et al., we formulate a concise list of performance criteria, including (a) fast convergence of the mean and (b) a near-optimal fixed point of the normalized step size. We find that cumulative step size adaptation (CSA) and two-point adaptation (TPA) provide reliable estimates of the optimal step size. We further find that removing the evolution path of CSA still leads to a reliable algorithm without the computational requirements of CSA.
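
    Cumulative step size adaptation itself can be sketched in a minimal (mu/mu, lambda)-ES: an evolution path accumulates recent steps, and sigma grows when steps correlate and shrinks when they cancel. The constants below are simplified choices for a sketch, not the tuned values from the literature:

```python
import numpy as np

def csa_es(f, x0, sigma0=1.0, lam=10, iters=300, seed=1):
    """Minimal (mu/mu, lambda)-ES with cumulative step size adaptation."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 2
    x, sigma = np.asarray(x0, float), sigma0
    p = np.zeros(n)                        # evolution path
    c = 4.0 / (n + 4.0)                    # path smoothing constant
    d = 1.0 + np.sqrt(mu / n)              # damping for sigma update
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n)) # approx. E||N(0,I)||
    for _ in range(iters):
        z = rng.standard_normal((lam, n))
        fitness = np.array([f(x + sigma * zi) for zi in z])
        idx = np.argsort(fitness)[:mu]     # select the mu best offspring
        z_mean = z[idx].mean(axis=0)
        x = x + sigma * z_mean
        p = (1 - c) * p + np.sqrt(c * (2 - c) * mu) * z_mean
        sigma *= np.exp((np.linalg.norm(p) / chi_n - 1) * c / d)
    return x, sigma
```

When the path norm exceeds its expectation under random selection, consecutive steps point the same way and sigma is increased; when it falls below, steps cancel and sigma is decreased.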

  13. Optimal leaf sequencing with elimination of tongue-and-groove underdosage

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Li, Jonathan [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)

    2004-02-07

    The individual leaves of a multileaf collimator (MLC) have a tongue-and-groove or stepped-edge design to minimize leakage radiation between adjacent leaves. This design element has a drawback in that it creates areas of underdosages in intensity-modulated photon beams unless a leaf trajectory is specifically designed such that for any two adjacent leaf pairs, the direct exposure under the tongue-and-groove is equal to the lower of the direct exposures of the leaf pairs. In this work, we present a systematic study of the optimization of a leaf sequencing algorithm for segmental multileaf collimator beam delivery that completely eliminates areas of underdosages due to tongue-and-groove or stepped-edge design of the MLC. Simultaneous elimination of tongue-and-groove effect and leaf interdigitation is also studied. This is an extension of our previous work (Kamath et al 2003a Phys. Med. Biol. 48 307) in which we described a leaf sequencing algorithm that is optimal for monitor unit (MU) efficiency under most common leaf movement constraints that include minimum leaf separation. Compared to our previously published algorithm (without constraints), the new algorithms increase the number of sub-fields by approximately 21% and 25%, respectively, but are optimal in MU efficiency for unidirectional schedules. (note)
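
    The MU-efficiency baseline referenced here (Kamath et al 2003a) rests on a standard result for unidirectional delivery: the minimum total monitor units for a 1D integer fluence profile equals the sum of its positive increments. A minimal sketch, without the tongue-and-groove or interdigitation constraints studied in this work:

```python
def min_monitor_units(profile):
    """Minimum total MU for unidirectional delivery of a 1D integer
    fluence profile: the sum of its positive increments."""
    prev, total = 0, 0
    for v in profile:
        if v > prev:
            total += v - prev  # a new segment must open here
        prev = v
    return total
```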

  15. Development of single step RT-PCR for detection of Kyasanur forest disease virus from clinical samples

    Directory of Open Access Journals (Sweden)

    Gouri Chaubal

    2018-02-01

    Discussion and conclusion: The high cost, in terms of reagents, machine setup, and technical expertise, of the previously published sensitive real-time RT-PCR assay was the primary reason for the development of this assay. A single-step RT-PCR is relatively easy to perform and more cost-effective than real-time RT-PCR in smaller setups lacking a Biosafety Level-3 facility. This study reports the development and optimization of a single-step RT-PCR assay that is more sensitive and less time-consuming than nested RT-PCR, and cost-effective for rapid diagnosis of KFD viral RNA.

  16. Biomechanical influences on balance recovery by stepping.

    Science.gov (United States)

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
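
    A much-simplified version of the feasibility coupling, ignoring the spring (leg compliance) and contact-force elements of the authors' pendulum-spring model, is the linear-inverted-pendulum "capture point": a step succeeds in this reduced model only if the foot lands at or beyond it. The function names are illustrative:

```python
import math

def capture_point(com_velocity, com_height, g=9.81):
    """Instantaneous capture point of a linear inverted pendulum: the
    ground point at which a step exactly arrests forward motion."""
    return com_velocity * math.sqrt(com_height / g)

def step_recovers(com_velocity, com_height, step_length, g=9.81):
    """A step succeeds (in this simplified model) if the foot lands at
    or beyond the capture point."""
    return step_length >= capture_point(com_velocity, com_height, g)
```

This captures the qualitative finding above: a stronger perturbation (larger forward velocity) pushes the capture point out, demanding a larger, quicker step.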

  17. Parameter Estimations and Optimal Design of Simple Step-Stress Model for Gamma Dual Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Hamdy Mohamed Salem

    2018-03-01

    This paper considers life-testing experiments and how they are affected by stress factors, namely temperature, electrical load, cycling rate, and pressure. A major type of accelerated life test is the step-stress model, which allows the experimenter to increase stress levels beyond normal use during the experiment in order to observe failures sooner. The test items are assumed to follow a Gamma Dual Weibull distribution. Different methods for estimating the parameters are discussed. These include Maximum Likelihood Estimation and Confidence Interval Estimation based on asymptotic normality, which generates narrow intervals for the unknown distribution parameters with high probability. The MathCAD (2001) program is used to illustrate the optimal time procedure through numerical examples.

  18. Comparative analysis of single-step and two-step biodiesel production using supercritical methanol on laboratory-scale

    International Nuclear Information System (INIS)

    Micic, Radoslav D.; Tomić, Milan D.; Kiss, Ferenc E.; Martinovic, Ferenc L.; Simikić, Mirko Ð.; Molnar, Tibor T.

    2016-01-01

    Highlights: • Single-step supercritical transesterification compared to the two-step process. • Two-step process: oil hydrolysis and subsequent supercritical methyl esterification. • Experiments were conducted in a laboratory-scale batch reactor. • Higher biodiesel yields in two-step process at milder reaction conditions. • Two-step process has potential to be cost-competitive with the single-step process. - Abstract: Single-step supercritical transesterification and two-step biodiesel production process consisting of oil hydrolysis and subsequent supercritical methyl esterification were studied and compared. For this purpose, comparative experiments were conducted in a laboratory-scale batch reactor and optimal reaction conditions (temperature, pressure, molar ratio and time) were determined. Results indicate that in comparison to a single-step transesterification, methyl esterification (second step of the two-step process) produces higher biodiesel yields (95 wt% vs. 91 wt%) at lower temperatures (270 °C vs. 350 °C), pressures (8 MPa vs. 12 MPa) and methanol to oil molar ratios (1:20 vs. 1:42). This can be explained by the fact that the reaction system consisting of free fatty acid (FFA) and methanol achieves supercritical condition at milder reaction conditions. Furthermore, the dissolved FFA increases the acidity of supercritical methanol and acts as an acid catalyst that increases the reaction rate. There is a direct correlation between FFA content of the product obtained in hydrolysis and biodiesel yields in methyl esterification. Therefore, the reaction parameters of hydrolysis were optimized to yield the highest FFA content at 12 MPa, 250 °C and 1:20 oil to water molar ratio. Results of direct material and energy costs comparison suggest that the process based on the two-step reaction has the potential to be cost-competitive with the process based on single-step supercritical transesterification. Higher biodiesel yields, similar or lower energy

  19. An online replanning method using warm start optimization and aperture morphing for flattening-filter-free beams

    Energy Technology Data Exchange (ETDEWEB)

    Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Ates, O.; Li, X. A. [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 (United States)

    2016-08-15

    Purpose: In a situation where a couch shift for patient positioning is not preferred or prohibited (e.g., MR-linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening-filter-free (FFF) beams, however, SAM method would lead to an adverse translational dose effect due to the beam unflattening. Here the authors propose a new two-step process to address both the translational effect of FFF beams and the target deformation. Methods: The replanning method consists of an offline and an online step. The offline step is to create a series of preshifted-plans (PSPs) obtained by a so-called “warm start” optimization (starting optimization from the original plan, rather than from scratch) at a series of isocenter shifts. The PSPs all have the same number of segments with very similar shapes, since the warm start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by picking the closest PSP or linearly interpolating the MLC positions and the monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and almost instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. The authors tested the method on sample prostate and pancreas cases. Results: The two-step interpolation method can account for the adverse dose effects from FFF beams, while SAM corrects for the target deformation. Plan interpolation method is effective in diminishing the unflat beam effect and may allow reducing the required number of PSPs. The whole process takes the same time as the previously reported SAM process (5–10 min). Conclusions: The new two-step method plus SAM can address both the translation effects of FFF beams and target deformation, and can be executed in full automation except the delineation of target contour
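
    The online interpolation step can be sketched as follows, assuming (as the method requires) that all PSPs share the same number of segments with matching shapes. The flat-list plan representation is a simplification of a real MLC plan:

```python
def interpolate_plan(psps, shift):
    """Pick or interpolate between precomputed preshifted plans (PSPs).
    psps: list of (shift_mm, mlc_positions, monitor_units) sorted by
    shift, where mlc_positions is a flat list of leaf positions and all
    PSPs share the same segment shapes."""
    shifts = [s for s, _, _ in psps]
    if shift <= shifts[0]:          # clamp below the covered range
        return psps[0][1], psps[0][2]
    if shift >= shifts[-1]:         # clamp above the covered range
        return psps[-1][1], psps[-1][2]
    for (s0, m0, u0), (s1, m1, u1) in zip(psps, psps[1:]):
        if s0 <= shift <= s1:
            t = (shift - s0) / (s1 - s0)
            mlc = [a + t * (b - a) for a, b in zip(m0, m1)]  # leaf positions
            mu = u0 + t * (u1 - u0)                          # monitor units
            return mlc, mu
    raise ValueError("psps must be sorted by shift")
```

Because no optimization or dose calculation runs online, the lookup is essentially instantaneous, matching the workflow described above.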

  1. An online replanning method using warm start optimization and aperture morphing for flattening-filter-free beams.

    Science.gov (United States)

    Ahunbay, Ergun E; Ates, O; Li, X A

    2016-08-01

    In a situation where a couch shift for patient positioning is not preferred or prohibited (e.g., MR-linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening-filter-free (FFF) beams, however, SAM method would lead to an adverse translational dose effect due to the beam unflattening. Here the authors propose a new two-step process to address both the translational effect of FFF beams and the target deformation. The replanning method consists of an offline and an online step. The offline step is to create a series of preshifted-plans (PSPs) obtained by a so-called "warm start" optimization (starting optimization from the original plan, rather than from scratch) at a series of isocenter shifts. The PSPs all have the same number of segments with very similar shapes, since the warm start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by picking the closest PSP or linearly interpolating the MLC positions and the monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and almost instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. The authors tested the method on sample prostate and pancreas cases. The two-step interpolation method can account for the adverse dose effects from FFF beams, while SAM corrects for the target deformation. Plan interpolation method is effective in diminishing the unflat beam effect and may allow reducing the required number of PSPs. The whole process takes the same time as the previously reported SAM process (5-10 min). The new two-step method plus SAM can address both the translation effects of FFF beams and target deformation, and can be executed in full automation except the delineation of target contour required by the SAM process.

  2. An efficient one-step condensation and activation strategy to synthesize porous carbons with optimal micropore sizes for highly selective CO₂ adsorption.

    Science.gov (United States)

    Wang, Jiacheng; Liu, Qian

    2014-04-21

    A series of microporous carbons (MPCs) were successfully prepared by an efficient one-step condensation and activation strategy using commercially available dialdehyde and diamine as carbon sources. The resulting MPCs have large surface areas (up to 1881 m(2) g(-1)), micropore volumes (up to 0.78 cm(3) g(-1)), and narrow micropore size distributions (0.7-1.1 nm). The CO₂ uptakes of the MPCs prepared at high temperatures (700-750 °C) are higher than those prepared under mild conditions (600-650 °C), because the former samples possess optimal micropore sizes (0.7-0.8 nm) that are highly suitable for CO₂ capture due to enhanced adsorbate-adsorbent interactions. At 1 bar, MPC-750 prepared at 750 °C demonstrates the best CO₂ capture performance and can efficiently adsorb CO₂ molecules at 2.86 mmol g(-1) and 4.92 mmol g(-1) at 25 and 0 °C, respectively. In particular, the MPCs with optimal micropore sizes (0.7-0.8 nm) have extremely high CO₂/N₂ adsorption ratios (47 and 52 at 25 and 0 °C, respectively) at 1 bar, and initial CO₂/N₂ adsorption selectivities of up to 81 and 119 at 25 °C and 0 °C, respectively, which are far superior to previously reported values for various porous solids. These excellent results, combined with good adsorption capacities and efficient regeneration/recyclability, make these carbons amongst the most promising sorbents reported so far for selective CO₂ adsorption in practical applications.

  3. Joint optimization of algorithmic suites for EEG analysis.

    Science.gov (United States)

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable.
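
    The idea of treating heterogeneous processing stages as one differentiable network can be illustrated with a toy two-stage pipeline, a spatial filter followed by a temporal filter, trained jointly by backpropagating a squared loss through both stages. The pipeline, loss, and hyperparameters are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def joint_train(trials, labels, n_iter=1000, lr=0.05, seed=0):
    """Jointly optimize a spatial filter w_s and a temporal filter w_t
    by applying the chain rule through both stages of f = w_s @ X @ w_t."""
    rng = np.random.default_rng(seed)
    n_ch, n_t = trials[0].shape
    w_s = rng.standard_normal(n_ch) * 0.1
    w_t = rng.standard_normal(n_t) * 0.1
    for _ in range(n_iter):
        g_s = np.zeros(n_ch)
        g_t = np.zeros(n_t)
        for X, y in zip(trials, labels):
            f = w_s @ X @ w_t            # spatial then temporal projection
            e = f - y                    # squared-loss residual
            g_s += 2 * e * (X @ w_t)     # chain rule through spatial stage
            g_t += 2 * e * (X.T @ w_s)   # chain rule through temporal stage
        w_s -= lr * g_s / len(trials)
        w_t -= lr * g_t / len(trials)
    return w_s, w_t
```

Because both filters receive gradients from the same loss, neither stage is tuned in isolation, which is the point of the wrapper approach described above.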

  4. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    International Nuclear Information System (INIS)

    Omelyan, Igor; Kovalenko, Andriy

    2013-01-01

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics

  5. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.
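
The permissible SSP step size ultimately inherits from the TVD restriction on forward Euler: for first-order upwind applied to u_t + u_x = 0, total variation is non-increasing exactly when the CFL number is at most 1. A small illustration of that base restriction (of forward Euler only, not of the paper's variable-step multistep methods):

```python
import numpy as np

def upwind_step(u, nu):
    """One forward-Euler upwind step for u_t + u_x = 0 (periodic grid), CFL nu."""
    return u - nu * (u - np.roll(u, 1))

def total_variation(u):
    return np.sum(np.abs(np.diff(np.append(u, u[0]))))

u0 = np.zeros(50)
u0[10:20] = 1.0                                  # square wave
tv0 = total_variation(u0)
tv_ok = total_variation(upwind_step(u0, 0.9))    # within the TVD step-size limit
tv_bad = total_variation(upwind_step(u0, 1.5))   # violates it: TV grows
print(tv0, tv_ok, tv_bad)
```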

  6. Optimization Strategies for Hardware-Based Cofactorization

    Science.gov (United States)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  7. Expression microarray reproducibility is improved by optimising purification steps in RNA amplification and labelling

    Directory of Open Access Journals (Sweden)

    Brenton James D

    2004-01-01

    Background: Expression microarrays have evolved into a powerful tool with great potential for clinical application, and therefore reliability of data is essential. RNA amplification is used when the amount of starting material is scarce, as is frequently the case with clinical samples. Purification steps are critical in RNA amplification and labelling protocols, and there is a lack of sufficient data to validate and optimise the process. Results: Here the purification steps involved in the protocol for indirect labelling of amplified RNA are evaluated, and the experimentally determined best method for each step with respect to yield, purity, size distribution of the transcripts, and dye coupling is used to generate targets tested in replicate hybridisations. DNase treatment of diluted total RNA samples followed by phenol extraction is the optimal way to remove genomic DNA contamination. Purification of double-stranded cDNA is best achieved by phenol extraction followed by isopropanol precipitation at room temperature. Extraction with guanidinium-phenol and lithium chloride precipitation are the optimal methods for purification of amplified RNA and labelled aRNA, respectively. Conclusion: This protocol provides targets that generate highly reproducible microarray data with good representation of transcripts across the size spectrum and a coefficient of repeatability significantly better than that reported previously.

  8. Step-to-step reproducibility and asymmetry to study gait auto-optimization in healthy and cerebral palsied subjects.

    Science.gov (United States)

    Descatoire, A; Femery, V; Potdevin, F; Moretto, P

    2009-05-01

    The purpose of our study was to compare plantar pressure asymmetry and step-to-step reproducibility in both able-bodied persons and two groups of hemiplegics. The relevance of the research was to determine the efficiency of asymmetry and reproducibility as indices for diagnosis and rehabilitation processes. This study comprised 31 healthy young subjects and 20 young subjects suffering from cerebral palsy hemiplegia, assigned to two groups of 10 subjects according to the severity of their musculoskeletal disorders. The peaks of plantar pressure and the time to peak pressure were recorded with an in-shoe measurement system. The intra-individual coefficient of variability was calculated to indicate the consistency of plantar pressure during walking and to define gait stability. The effect size was computed to quantify the asymmetry, and measurements were conducted at eight footprint locations. Results indicated few differences in step-to-step reproducibility between the healthy group and the less spastic group, while the most affected group showed a more asymmetrical and unstable gait. Within the framework of self-optimisation, and depending on the neuromotor disorders, the organism may prioritise pain, mobility, stability or energy expenditure to develop the best gait auto-optimisation.
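
The two indices used here are simple statistics; a sketch with invented peak-pressure numbers (kPa, purely illustrative) of an intra-individual coefficient of variation and a pooled-variance effect size:

```python
import numpy as np

def coefficient_of_variation(pressures):
    """Step-to-step variability of peak pressure at one footprint location."""
    p = np.asarray(pressures, dtype=float)
    return p.std(ddof=1) / p.mean()

def cohens_d(left, right):
    """Pooled-variance effect size quantifying left/right asymmetry."""
    l, r = np.asarray(left, float), np.asarray(right, float)
    pooled = np.sqrt(((l.size - 1) * l.var(ddof=1) +
                      (r.size - 1) * r.var(ddof=1)) / (l.size + r.size - 2))
    return (l.mean() - r.mean()) / pooled

left = [220, 230, 225, 228, 232]     # peak pressures (kPa), left heel strikes
right = [180, 175, 185, 178, 182]    # peak pressures (kPa), right heel strikes
cv = coefficient_of_variation(left)
d = cohens_d(left, right)
print(cv, d)
```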

  9. Rotor cascade shape optimization with unsteady passing wakes using implicit dual time stepping method

    Science.gov (United States)

    Lee, Eun Seok

    2000-10-01

    Improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow-field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence models. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry satisfying the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving the low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate the unsteady flow-fields. For validation of the unsteady code, Stokes' second problem and the Poiseuille flow were chosen, and the computed results were compared with the analytic solutions.
To test the code's ability to capture the natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with

  10. One step beyond: Different step-to-step transitions exist during continuous contact brachiation in siamangs

    Directory of Open Access Journals (Sweden)

    Fana Michilsens

    2012-02-01

    In brachiation, two main gaits are distinguished, ricochetal brachiation and continuous contact brachiation. During ricochetal brachiation, a flight phase exists and the body centre of mass (bCOM) describes a parabolic trajectory. For continuous contact brachiation, where at least one hand is always in contact with the substrate, we showed in an earlier paper that four step-to-step transition types occur. We referred to these as a ‘point’, a ‘loop’, a ‘backward pendulum’ and a ‘parabolic’ transition. Only the first two transition types have previously been mentioned in the existing literature on gibbon brachiation. In the current study, we used three-dimensional video and force analysis to describe and characterize these four step-to-step transition types. Results show that, although individual preference occurs, the brachiation strides characterized by each transition type are mainly associated with speed. Yet, these four transitions seem to form a continuum rather than four distinct types. Energy recovery and collision fraction are used as estimators of mechanical efficiency of brachiation and, remarkably, these parameters do not differ between strides with different transition types. All strides show high energy recoveries (mean = 70 ± 11.4%) and low collision fractions (mean = 0.2 ± 0.13), regardless of the step-to-step transition type used. We conclude that siamangs have efficient means of modifying locomotor speed during continuous contact brachiation by choosing particular step-to-step transition types, which all minimize collision fraction and enhance energy recovery.
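
Energy recovery in the Cavagna sense can be computed from the positive increments of the kinetic, potential and total energy curves. A sketch on an idealized pendular stride, where KE and PE are perfectly out of phase and recovery should approach 100%:

```python
import numpy as np

def positive_work(energy):
    """Sum of the positive increments of an energy time series."""
    inc = np.diff(energy)
    return inc[inc > 0].sum()

def energy_recovery(ke, pe):
    """Percentage of energy exchanged between kinetic and potential forms."""
    w_ke, w_pe = positive_work(ke), positive_work(pe)
    w_tot = positive_work(ke + pe)
    return 100.0 * (w_ke + w_pe - w_tot) / (w_ke + w_pe)

t = np.linspace(0, 2 * np.pi, 200)
ke = 1.0 + np.cos(t)                 # kinetic energy, arbitrary units
pe = 1.0 - np.cos(t)                 # potential energy, perfectly out of phase
rec = energy_recovery(ke, pe)
print(rec)
```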

  11. Preimages for Step-Reduced SHA-2

    DEFF Research Database (Denmark)

    Aoki, Kazumaro; Guo, Jian; Matusiewicz, Krystian

    2009-01-01

    In this paper, we present preimage attacks on up to 43-step SHA-256 (around 67% of the total 64 steps) and 46-step SHA-512 (around 57.5% of the total 80 steps), which significantly increases the number of attacked steps compared to the best previously published preimage attack working for 24 steps. The time complexities are 2^251.9 and 2^509 for finding pseudo-preimages, and 2^254.9 and 2^511.5 compression function operations for full preimages. The memory requirements are modest, around 2^6 words for 43-step SHA-256 and 46-step SHA-512. The pseudo-preimage attack also applies to 43-step SHA-224 and SHA-384.

  12. SU-E-T-23: A Novel Two-Step Optimization Scheme for Tandem and Ovoid (T and O) HDR Brachytherapy Treatment for Locally Advanced Cervical Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, M; Todor, D [Virginia Commonwealth University, Richmond, VA (United States)]; Fields, E [Virginia Commonwealth University, Richmond, Virginia (United States)]

    2014-06-01

    Purpose: To present a novel method allowing fast, true volumetric optimization of T and O HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4–5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering high risk (HR)-CTV with D90 > 90% and minimizing D2cc to rectum, bladder and sigmoid with manual optimization, approved and delivered. For the second step, each case was re-planned adding a new structure, created from the 100% prescription isodose line of the manually optimized plan to the existent physician delineated HR-CTV, rectum, bladder and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc's for all three OARs while preserving good D90s for HR-CTV. On plans with “excellent” CTV coverage, average D90 of 96% (range 91–102), sigmoid D2cc was reduced on average by 37% (range 16–73), bladder by 28% (range 20–47) and rectum by 27% (range 15–45). Similar reductions were obtained on plans with “good” coverage, with an average D90 of 93% (range 90–99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18% and 11% for sigmoid, bladder and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal planning time increase, but with the potential of dramatic and systematic reductions of D2cc for OARs and in some cases with concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4–5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.
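
D90 and D2cc are order statistics of the voxel-dose distribution. A minimal sketch with a toy dose array and an assumed 0.5 cm³ voxel volume:

```python
import numpy as np

def d_percent(doses, percent):
    """Dx: minimum dose received by the hottest x% of the structure volume."""
    d = np.sort(np.asarray(doses, dtype=float))[::-1]
    n = int(np.ceil(percent / 100.0 * d.size))
    return d[n - 1]

def d_cc(doses, voxel_cc, cc):
    """D2cc-style metric: minimum dose within the hottest `cc` cm^3."""
    d = np.sort(np.asarray(doses, dtype=float))[::-1]
    n = int(np.ceil(cc / voxel_cc))
    return d[n - 1]

doses = np.linspace(100, 0, 101)     # toy structure: voxel doses 100 down to 0
d90 = d_percent(doses, 90)           # dose covering 90% of the volume
d2cc = d_cc(doses, voxel_cc=0.5, cc=2.0)
print(d90, d2cc)
```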

  13. Optimization in radiotherapy treatment planning thanks to a fast dose calculation method

    International Nuclear Information System (INIS)

    Yang, Mingchao

    2014-01-01

    This thesis deals with radiotherapy treatment planning, which needs a fast and reliable treatment planning system (TPS). The TPS is composed of a dose calculation algorithm and an optimization method. The objective is to design a plan that delivers the dose to the tumor while preserving the surrounding healthy and sensitive tissues. Treatment planning aims to determine the radiation parameters best suited to each patient's treatment. In this thesis, the parameters of treatment with IMRT (intensity modulated radiation therapy) are the beam angles and the beam intensities. The objective function is multi-criteria with linear constraints. The main objective of this thesis is to demonstrate the feasibility of a treatment planning optimization method based on a fast dose-calculation technique developed by Blanpain (2009). This technique computes the dose by segmenting the patient's phantom into homogeneous meshes. The dose computation is divided into two steps. The first step concerns the meshes: projections and weights are set according to physical and geometrical criteria. The second step concerns the voxels: the dose is computed by evaluating the functions previously associated with their mesh. A reformulation of this technique makes it possible to solve the optimization problem with a gradient descent algorithm. The main advantage of this method is that the beam angle parameters can be optimized continuously in three dimensions. The results obtained in this thesis open many opportunities in the field of radiotherapy treatment planning optimization. (author) [fr]
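
A bare-bones version of gradient-descent fluence optimization can be written against a linear dose model D = Aw: minimize ½‖Aw − d‖² over non-negative beam weights by projected gradient descent. The matrix, prescription and iteration count below are arbitrary toy values, not the thesis' dose engine:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_beam = 30, 6
A = rng.uniform(0, 1, size=(n_vox, n_beam))    # dose deposition matrix
d_target = rng.uniform(0.5, 1.0, size=n_vox)   # prescribed dose per voxel

w = np.zeros(n_beam)                           # beam intensities, kept >= 0
lr = 1.0 / np.linalg.norm(A.T @ A, 2)          # step size from Lipschitz constant
obj = []
for _ in range(500):
    grad = A.T @ (A @ w - d_target)            # gradient of 0.5*||Aw - d||^2
    w = np.maximum(w - lr * grad, 0.0)         # project back onto w >= 0
    obj.append(0.5 * np.sum((A @ w - d_target) ** 2))
print(obj[0], obj[-1])
```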

  14. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Unlike gradient-based iterative approaches or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right-bank leaves and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.
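
The three-chromosome encoding can be sketched on a 1D leaf pair: each segment is an interval [l, r) with a weight, and crossover/mutation act chromosome-wise. This is a deliberately simplified toy (one leaf pair, squared-error fitness), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20                                     # beamlet positions along one leaf pair
target = np.zeros(N)
target[6:14] = 1.0                         # desired 1D dose profile
SEGS, POP, GENS = 3, 40, 300

def dose(lefts, rights, weights):
    d = np.zeros(N)
    for l, r, w in zip(lefts, rights, weights):
        if l < r:
            d[l:r] += w                    # aperture [l, r) delivers weight w
    return d

def fitness(ind):
    return -np.sum((dose(*ind) - target) ** 2)

def random_ind():
    lefts = rng.integers(0, N - 1, SEGS)
    rights = np.minimum(lefts + rng.integers(1, N, SEGS), N)
    return [lefts, rights, rng.uniform(0, 1, SEGS)]

def crossover(a, b):
    # exchange information chromosome-wise between two parents
    return [np.where(rng.random(SEGS) < 0.5, x, y) for x, y in zip(a, b)]

def mutate(ind):
    l, r, w = (c.copy() for c in ind)
    k = rng.integers(SEGS)
    l[k] = np.clip(l[k] + rng.integers(-2, 3), 0, N - 1)      # left-bank gene
    r[k] = np.clip(r[k] + rng.integers(-2, 3), l[k] + 1, N)   # right-bank gene
    w[k] = np.clip(w[k] + rng.normal(scale=0.1), 0, 2)        # weight gene
    return [l, r, w]

pop = [random_ind() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                     # elitist selection
    pop = parents + [mutate(crossover(parents[rng.integers(10)],
                                      parents[rng.integers(10)]))
                     for _ in range(POP - 10)]
best = max(pop, key=fitness)
err = -fitness(best)                       # final squared error of the best plan
print(err)
```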

  15. Optimizing the number of steps in learning tasks for complex skills.

    NARCIS (Netherlands)

    Nadolski, Rob; Kirschner, Paul A.; Van Merriënboer, Jeroen

    2007-01-01

    Background. Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimised for efficient and effective learning. Aim. The aim of the study is

  16. Is ad-hoc plan adaptation based on 2-Step IMRT feasible?

    International Nuclear Information System (INIS)

    Bratengeier, Klaus; Polat, Buelent; Gainey, Mark; Grewenig, Patricia; Meyer, Juergen; Flentje, Michael

    2009-01-01

    Background: The ability of a geometry-based method to expeditiously adapt a '2-Step' step and shoot IMRT plan was explored. Both changes of the geometry of target and organ at risk have to be balanced. A retrospective prostate planning study was performed to investigate the relative benefits of beam segment adaptation to the changes in target and organ at risk coverage. Methods: Four patients with six planning cases with extraordinarily large deformations of rectum and prostate were chosen for the study. A 9-field IMRT plan (A) using 2-Step IMRT segments was planned on an initial CT study. The plan had to fulfil all the requirements of a conventional high-quality step and shoot IMRT plan. To adapt to changes of the anatomy in a further CT data set, three approaches were considered: the original plan with optimized isocentre position (B), a newly optimized plan (C) and the original plan, adapted using the 2-Step IMRT optimization rules (D). DVH parameters were utilized for quantification of plan quality: D99 for the CTV and the central planning target volume (PTV), D95 for an outer PTV, V95, V80 and V50 for rectum and bladder. Results: The adapted plan (D) achieved almost the same target coverage as the newly optimized plan (C). Target coverage for plan B was poor and for the organs at risk, the rectum V80 was slightly increased. The volume with more than 95% of the target dose (V95) was 1.5 ± 1.5 cm³ for the newly optimized plan (C), compared to 2.2 ± 1.3 cm³ for the original plan (A) and 7.2 ± 4.8 cm³ (B) on the first and the second CT, respectively. The adapted plan resulted in 4.3 ± 2.1 cm³ (D), an intermediate dose load to the rectum. All other parameters were comparable for the newly optimized and the adapted plan. Conclusions: The first results for adaptation of interfractional changes using the 2-Step IMRT algorithm are encouraging. The plans were superior to plans with optimized isocentre position and only marginally inferior to a newly

  17. What Is the Correct Answer about The Dress' Colors? Investigating the Relation between Optimism, Previous Experience, and Answerability.

    Science.gov (United States)

    Karlsson, Bodil S A; Allwood, Carl Martin

    2016-01-01

    The Dress photograph, first displayed on the internet in 2015, revealed stunning individual differences in color perception. The aim of this study was to investigate if lay-persons believed that the question about The Dress colors was answerable. Past research has found that optimism is related to judgments of how answerable knowledge questions with controversial answers are (Karlsson et al., 2016). Furthermore, familiarity with a question can create a feeling of knowing the answer (Reder and Ritter, 1992). Building on these findings, 186 participants saw the photo of The Dress and were asked about the correct answer to the question about The Dress' colors (“blue and black,” “white and gold,” “other, namely…,” or “there is no correct answer”). Choice of the alternative “there is no correct answer” was interpreted as believing the question was not answerable. This answer was chosen more often by optimists and by people who reported they had not seen The Dress before. We also found that among participants who had seen The Dress photo before, 19% perceived The Dress as “white and gold” but believed that the correct answer was “blue and black.” This, in analogy to previous findings about non-believed memories (Scoboria and Pascal, 2016), shows that people sometimes do not believe the colors they have perceived are correct. Our results suggest that individual differences related to optimism and previous experience may influence whether an individual's perception of a photograph is judged a sufficient basis for valid conclusions about its colors. Further research about color judgments under ambiguous circumstances could benefit from separating individual perceptual experience from beliefs about the correct answer to the color question. Including the option “there is no correct answer” may also be beneficial.

  18. Logic-based methods for optimization combining optimization and constraint satisfaction

    CERN Document Server

    Hooker, John

    2011-01-01

    A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible

  19. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Directory of Open Access Journals (Sweden)

    Shigang Zhang

    2015-10-01

    Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics.

  20. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Science.gov (United States)

    Zhang, Shigang; Song, Lijun; Zhang, Wei; Hu, Zheng; Yang, Yongmin

    2015-01-01

    Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics. PMID:26457709
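
One common way to generate such a strategy greedily is to pick, at each node, the test with the best information gain per unit cost, then recurse on each outcome. The fault signatures and costs below are invented; the recursion sketches the idea only, not the paper's two algorithms:

```python
import math

def entropy(probs):
    s = sum(probs)
    return -sum(p / s * math.log2(p / s) for p in probs if p > 0)

def build_strategy(faults, tests, costs, signatures):
    """Greedy sequential strategy: choose the test with the best information
    gain per unit cost, then recurse on each outcome's candidate set."""
    if len(faults) <= 1:
        return faults
    best, best_score = None, -1.0
    for t in tests:
        pass_set = [f for f in faults if signatures[f][t] == 0]
        fail_set = [f for f in faults if signatures[f][t] == 1]
        if not pass_set or not fail_set:
            continue                      # test does not split the candidates
        gain = entropy([1] * len(faults)) - (
            len(pass_set) / len(faults) * entropy([1] * len(pass_set))
            + len(fail_set) / len(faults) * entropy([1] * len(fail_set)))
        score = gain / costs[t]           # information gain per unit cost
        if score > best_score:
            best, best_score = t, score
    if best is None:
        return faults
    rest = [t for t in tests if t != best]
    return {best: {
        0: build_strategy([f for f in faults if signatures[f][best] == 0],
                          rest, costs, signatures),
        1: build_strategy([f for f in faults if signatures[f][best] == 1],
                          rest, costs, signatures)}}

# 4 equally likely faults, 3 tests; signatures[f][t] = 1 if fault f trips test t
signatures = {"f1": [0, 0, 1], "f2": [0, 1, 0], "f3": [1, 0, 0], "f4": [1, 1, 1]}
costs = [1.0, 1.0, 5.0]                   # test placement/execution cost
tree = build_strategy(list(signatures), [0, 1, 2], costs, signatures)
print(tree)
```

Note that the expensive test 2 is never selected: the two cheap tests already isolate every fault at lower expected cost.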

  1. Performance Optimization of a Solar-Driven Multi-Step Irreversible Brayton Cycle Based on a Multi-Objective Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmadi Mohammad Hosein

    2016-01-01

    An applicable approach for a multi-step regenerative irreversible Brayton cycle, on the basis of thermodynamics and optimization of thermal efficiency and normalized output power, is presented in this work. In the present study, thermodynamic analysis and an NSGA-II algorithm are coupled to determine the optimum values of thermal efficiency and normalized power output for a Brayton cycle system. Moreover, three well-known decision-making methods are employed to extract definite answers from the outputs gained from the aforementioned approach. Finally, for error analysis, the values of the average and maximum error of the results are also calculated.
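
The multi-objective step can be illustrated with a Pareto filter over (efficiency, normalized power) pairs plus a simple distance-to-ideal decision rule standing in for the paper's decision-making methods; all candidate values are invented:

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points when every objective is maximized."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
        if not dominated:
            front.append(i)
    return front

# candidate designs as (thermal efficiency, normalized power) pairs
cands = [(0.30, 0.9), (0.35, 0.7), (0.40, 0.5), (0.32, 0.6), (0.28, 0.95)]
front = pareto_front(cands)

# decision step: closest front member to the ideal (utopia) point
ideal = np.max(np.asarray(cands)[front], axis=0)
best = min(front, key=lambda i: np.linalg.norm(np.asarray(cands[i]) - ideal))
print(front, best)
```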

  2. Mesh Denoising based on Normal Voting Tensor and Binary Optimization

    OpenAIRE

    Yadav, S. K.; Reitebuch, U.; Polthier, K.

    2016-01-01

    This paper presents a tensor-multiplication-based smoothing algorithm that follows a two-step denoising method. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stoc...

  3. An optimized two-step derivatization method for analyzing diethylene glycol ozonation products using gas chromatography and mass spectrometry.

    Science.gov (United States)

    Yu, Ran; Duan, Lei; Jiang, Jingkun; Hao, Jiming

    2017-03-01

    The ozonation of hydroxyl compounds (e.g., sugars and alcohols) gives a broad range of products such as alcohols, aldehydes, ketones, and carboxylic acids. This study developed and optimized a two-step derivatization procedure for analyzing polar products of aldehydes and carboxylic acids from the ozonation of diethylene glycol (DEG) in a non-aqueous environment using gas chromatography-mass spectrometry. Experiments based on Central Composite Design with response surface methodology were carried out to evaluate the effects of derivatization variables and their interactions on the analysis. The most desirable derivatization conditions were as follows: oximation was performed at room temperature overnight with an O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine to analyte molar ratio of 6, a silylation reaction temperature of 70 °C, a reaction duration of 70 min, and an N,O-bis(trimethylsilyl)-trifluoroacetamide volume of 12.5 μL. The applicability of this optimized procedure was verified by analyzing DEG ozonation products in an ultrafine condensation particle counter simulation system. Copyright © 2016. Published by Elsevier B.V.
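
A central composite design with response-surface methodology boils down to fitting a full quadratic model to the design runs and solving for the stationary point. A sketch on a synthetic two-factor response in coded units (the surface and its optimum are invented, not the study's data):

```python
import numpy as np

# CCD runs in coded units: 2^2 factorial + center + 4 axial points
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0],
              [1.414, 0], [-1.414, 0], [0, 1.414], [0, -1.414]])

def response(x1, x2):
    # synthetic yield surface with its maximum placed at (0.5, -0.25)
    return 90.0 - 4.0 * (x1 - 0.5) ** 2 - 6.0 * (x2 + 0.25) ** 2

y = np.array([response(a, b) for a, b in X])

# fit the full quadratic model y ~ 1, x1, x2, x1^2, x2^2, x1*x2
x1, x2 = X[:, 0], X[:, 1]
M = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
beta, *_ = np.linalg.lstsq(M, y, rcond=None)
b0, b1, b2, b11, b22, b12 = beta

# stationary point of the fitted surface: solve grad = 0
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(H, -np.array([b1, b2]))
print(opt)
```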

  4. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
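
The GCV function for Tikhonov regularization can be evaluated cheaply through the SVD of the forward operator; the sketch below applies it to a generic random linear system, not to an actual lens model:

```python
import numpy as np

def gcv_curve(A, b, lambdas):
    """Generalized cross-validation scores for Tikhonov regularization,
    evaluated through the SVD of the forward operator."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    resid0 = b @ b - beta @ beta          # part of b outside the range of A
    m = A.shape[0]
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)        # Tikhonov filter factors
        resid2 = resid0 + np.sum(((1 - f) * beta) ** 2)
        dof = m - np.sum(f)               # effective residual degrees of freedom
        scores.append(resid2 / dof**2)
    return np.array(scores)

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.1 * rng.normal(size=40)

lams = np.logspace(-4, 2, 60)
scores = gcv_curve(A, b, lams)
lam_best = lams[np.argmin(scores)]
print(lam_best)
```

Minimizing the same curve also yields the effective number of degrees of freedom in the solution, via the sum of the filter factors at the chosen regularization parameter.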

  5. Modified two-step emulsion solvent evaporation technique for fabricating biodegradable rod-shaped particles in the submicron size range.

    Science.gov (United States)

    Safari, Hanieh; Adili, Reheman; Holinstat, Michael; Eniola-Adefeso, Omolola

    2018-05-15

    Though the emulsion solvent evaporation (ESE) technique has been previously modified to produce rod-shaped particles, it cannot generate small-sized rods for drug delivery applications due to the inherent coupling and contradicting requirements for the formation versus stretching of droplets. The separation of the droplet formation from the stretching step should enable the creation of submicron droplets that are then stretched in the second stage by manipulation of the system viscosity along with the surface-active molecule and oil-phase solvent. A two-step ESE protocol is evaluated where oil droplets are formed at low viscosity, followed by a step increase in the aqueous phase viscosity to stretch droplets. Different surface-active molecules and oil phase solvents were evaluated to optimize the yield of biodegradable PLGA rods. Rods were assessed for drug loading via an imaging agent and for vascular-targeted delivery application via blood flow adhesion assays. The two-step ESE method generated PLGA rods with major and minor axes down to 3.2 µm and 700 nm, respectively. Chloroform and sodium metaphosphate were the optimal solvent and surface-active molecule, respectively, for submicron rod fabrication. Rods demonstrated faster release of Nile Red compared to spheres and successfully targeted an inflamed endothelium under shear flow in vitro and in vivo. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Two-step approach to the dynamics of coupled anharmonic oscillators

    International Nuclear Information System (INIS)

    Chung, N. N.; Chew, L. Y.

    2009-01-01

    We have further extended the two-step approach developed by Chung and Chew [N. N. Chung and L. Y. Chew, Phys. Rev. A 76, 032113 (2007)] to the solution of the quantum dynamics of general systems of N-coupled anharmonic oscillators. The idea is to employ an optimized basis set to represent the dynamical quantum states of these oscillator systems. The set is generated via the action of the optimized Bogoliubov transformed bosonic operators on the optimal squeezed vacuum product state. The procedure requires (i) applying the two-step approach to the eigendecomposition of the time evolution operator and (ii) transforming the representation of the initial state from the original to the optimal bases. We have applied the formalism to examine the dynamics of squeezing and entanglement of several anharmonic oscillator systems.
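The eigendecomposition route to quantum dynamics in step (i) can be sketched for a toy case. The following treats a single quartic oscillator in a truncated number basis, not the paper's N-coupled system, and uses the bare basis rather than the optimized Bogoliubov/squeezed basis; the function name and parameter values are illustrative assumptions:

```python
import numpy as np

def anharmonic_evolution(lam=0.1, dim=40, t=1.0):
    """Evolve the vacuum under H = a'a + lam * x^4 by diagonalizing H in a
    truncated number basis (toy stand-in for the paper's optimized basis)."""
    n = np.arange(dim)
    a = np.diag(np.sqrt(n[1:]), 1)           # annihilation operator a|m> = sqrt(m)|m-1>
    x = (a + a.T) / np.sqrt(2.0)             # position operator
    H = np.diag(n.astype(float)) + lam * np.linalg.matrix_power(x, 4)
    E, V = np.linalg.eigh(H)                 # eigendecomposition of H
    psi0 = np.zeros(dim)
    psi0[0] = 1.0                            # initial state: bare vacuum
    c = V.T @ psi0                           # transform to the eigenbasis
    return V @ (np.exp(-1j * E * t) * c)     # evolve phases, transform back
```

Within the truncated space the evolution is exactly unitary, which is easy to verify by checking that the norm of the state is preserved.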

  7. A novel neutron energy spectrum unfolding code using particle swarm optimization

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-01-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse-height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of standard spectra and the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate. The results of the SDPSO code match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code. - Highlights: • Introducing a novel method for neutron spectrum unfolding. • Implementation of a particle swarm optimization code for neutron unfolding. • Comparing results of the PSO code with those of the recently published TGASU code. • Results of the PSO code match those of the TGASU code. • Greater convergence rate of the implemented PSO code than the TGASU code.
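As a sketch of the underlying optimizer (a generic global-best PSO, not the SDPSO code itself), the swarm update might look like this; the inertia and acceleration parameters are conventional defaults, not values from the paper:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer for minimizing f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())
```

For unfolding, `f` would be a misfit such as the norm of (response matrix × trial spectrum − measured pulse-height distribution), with the trial spectrum constrained to be non-negative.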

  8. Rapid decay of vacancy islands at step edges on Ag(111): step orientation dependence

    International Nuclear Information System (INIS)

    Shen, Mingmin; Thiel, P A; Jenks, Cynthia J; Evans, J W

    2010-01-01

    Previous work has established that vacancy islands or pits fill much more quickly when they are in contact with a step edge, such that the common boundary is a double step. The present work focuses on the effect of the orientation of that step, with two possibilities existing for a face-centered cubic (111) surface: A- and B-type steps. We find that the following features can depend on the orientation: (1) the shapes of islands while they shrink; (2) whether the island remains attached to the step edge; and (3) the rate of filling. The first two effects can be explained by the different rates of adatom diffusion along the A- and B-steps that define the pit, enhanced by the different filling rates. The third observation, the difference in the filling rate itself, is explained within the context of the concerted exchange mechanism at the double step. This process is facile at all regular sites along B-steps, but only at kink sites along A-steps, which explains the different rates. We also observe that oxygen can greatly accelerate the decay process, although it has no apparent effect on an isolated vacancy island (i.e. an island that is not in contact with a step).

  9. Role of step stiffness and kinks in the relaxation of vicinal (001) with zigzag [110] steps

    Science.gov (United States)

    Mahjoub, B.; Hamouda, Ajmi BH.; Einstein, TL.

    2017-08-01

    We present a kinetic Monte Carlo study of the relaxation dynamics and steady-state configurations of 〈110〉 steps on a vicinal (001) simple cubic surface. This system is interesting because 〈110〉 (fully kinked) steps have different elementary excitation energetics and favor step diffusion more than 〈100〉 (nominally straight) steps. In this study we show how this leads to different relaxation dynamics as well as to different steady-state configurations, including that 2-bond-breaking processes are rate-determining for 〈110〉 steps, in contrast to the 3-bond-breaking processes found for 〈100〉 steps in previous work [Surface Sci. 602, 3569 (2008)]. The analysis of the terrace-width distribution (TWD) shows a significant role of kink generation-annihilation processes during the relaxation of steps: the kinetics of relaxation toward the steady state are much faster for 〈110〉 zigzag steps, with a higher standard deviation of the TWD, in agreement with the decrease of step stiffness with orientation. We conclude that smaller step stiffness leads inexorably to faster step dynamics towards the steady state. Step-edge anisotropy slows the relaxation of steps and increases the strength of effective step-step interactions.

  10. Gaussian process regression for geometry optimization

    Science.gov (United States)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel; the Matérn kernel performs much better. We give a detailed description of the optimization procedures, which include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
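The core of such a surrogate-based optimizer can be sketched with a Matérn 5/2 kernel (one twice-differentiable member of the Matérn family; the paper's exact kernel form, hyperparameter handling, and overshooting logic are not reproduced here). The GP posterior mean is the surrogate surface the optimizer searches for a minimum:

```python
import numpy as np

def matern52(X1, X2, ell=1.0, sig=1.0):
    """Matern 5/2 kernel: a twice-differentiable member of the Matern family."""
    r = np.abs(X1[:, None] - X2[None, :])
    a = np.sqrt(5.0) * r / ell
    return sig**2 * (1.0 + a + a**2 / 3.0) * np.exp(-a)

def gpr_mean(Xtr, ytr, Xte, ell=1.0, sig=1.0, noise=1e-8):
    """Posterior mean of a zero-mean GP fit to (Xtr, ytr), evaluated at Xte."""
    K = matern52(Xtr, Xtr, ell, sig) + noise * np.eye(len(Xtr))
    alpha = np.linalg.solve(K, ytr)          # weights of the kernel expansion
    return matern52(Xte, Xtr, ell, sig) @ alpha
```

Minimizing this surrogate (here by a simple grid search over a 1D toy surface) stands in for one step of the GPR optimizer; the real method also uses gradients and updates the model after each energy evaluation.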

  11. Controlling dental enamel-cavity ablation depth with optimized stepping parameters along the focal-plane normal using a three-axis, numerically controlled picosecond laser.

    Science.gov (United States)

    Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong

    2015-02-01

    The purpose of this study was to establish a depth-control method for enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities each, with the additive-pulse layer and single-step size set to the corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step, and used to determine the minimum-difference values of the additive-pulse layer and the single-step size. When the additive-pulse layer and the single-step size were set to 5 and 45, respectively, the depth error reached a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows the ablation-depth error to be controlled to the order of micrometers.

  12. A new hybrid genetic algorithm for optimizing the single and multivariate objective functions

    Energy Technology Data Exchange (ETDEWEB)

    Tumuluru, Jaya Shankar [Idaho National Laboratory; McCulloch, Richard Chet James [Idaho National Laboratory

    2015-07-01

    In this work, a new hybrid genetic algorithm was developed that combines a rudimentary adaptive steepest-ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest-ascent algorithm, each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite to the previous direction of most benefit, the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features, such as bounding the solution space and weighting the objective functions individually, are also built into the interface. The algorithm was tested by optimizing the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm compared to MATLAB's native evolutionary optimization tool.
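The adaptive steepest-ascent component described above can be sketched as follows. This is a hypothetical re-implementation of the textual description, not the authors' MATLAB code, and the halving-on-no-improvement rule is an added convention for termination:

```python
import numpy as np

def adaptive_ascent(f, x0, step=0.5, tol=1e-6, max_iter=5000):
    """Adaptive steepest-ascent hill climbing: probe each variable in both
    directions, move along the best one, and halve the step when the best
    direction reverses (or when nothing improves)."""
    x = np.asarray(x0, dtype=float)
    prev = None
    for _ in range(max_iter):
        if step < tol:
            break
        best_gain, best = 0.0, None
        fx = f(x)
        for i in range(len(x)):
            for sgn in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sgn * step        # perturb one variable at a time
                gain = f(trial) - fx
                if gain > best_gain:
                    best_gain, best = gain, (i, sgn)
        if best is None:
            step /= 2.0                       # no improving direction: refine
            continue
        if prev is not None and best[0] == prev[0] and best[1] == -prev[1]:
            step /= 2.0                       # direction reversed: adapt to terrain
        x[best[0]] += best[1] * step
        prev = best
    return x
```

In the hybrid scheme, this local climber would polish candidates produced by the evolutionary search.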

  13. Optimization of wind farm micro-siting for complex terrain using greedy algorithm

    International Nuclear Information System (INIS)

    Song, M.X.; Chen, K.; He, Z.Y.; Zhang, X.

    2014-01-01

    An optimization approach based on a greedy algorithm for wind farm micro-siting is presented. The key to optimizing wind farm micro-siting is fast and accurate evaluation of the wake-flow interactions of wind turbines. The virtual particle model is employed for wake-flow simulation of the turbines, which makes the present method applicable to non-uniform flow fields on complex terrains. In previous bionic optimization methods, within each step of the optimization process only the power output of the turbine being located or relocated is considered. To account for the overall power output of the wind farm comprehensively, a dependent-region technique is introduced to improve the estimation of power output during the optimization procedure. With this technique, wake-flow influences can be reduced more efficiently during the optimization: the turbine being added avoids both being affected by other turbines and affecting other turbines. The results from the numerical calculations demonstrate that the present method is effective for wind farm micro-siting on complex terrain, and it produces better solutions in less time than the previous bionic method. - Highlights: • Greedy algorithm is applied to the wind farm micro-siting problem. • The present method is effective for optimization on complex terrains. • A dependent region is suggested to improve the evaluation of wake influences. • The present method has better performance than the bionic method.

  14. Performance analysis and optimization of radiating fins with a step change in thickness and variable thermal conductivity by homotopy perturbation method

    Science.gov (United States)

    Arslanturk, Cihat

    2011-02-01

    Although tapered fins transfer heat at a higher rate per unit volume, they are not found in every practical application because of the difficulty of manufacturing and fabricating them. Therefore, there is scope to modify the geometry of a constant-thickness fin in view of the lower manufacturing and fabrication difficulty as well as the improvement in heat transfer rate per unit volume of fin material. For better utilization of fin material, a modified fin geometry with a step change in thickness (SF) has been proposed in the literature. In the present paper, the homotopy perturbation method is used to evaluate the temperature distribution within straight radiating fins with a step change in thickness and variable thermal conductivity. The temperature profile has an abrupt change in gradient where the step change in thickness occurs, and the thermal conductivity parameter describing the variation of thermal conductivity has an important role in the temperature profile and the heat transfer rate. The optimum geometry which maximizes the heat transfer rate for a given fin volume has been found. The derived condition of optimality gives an open choice to the designer.
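The problem the homotopy perturbation method is applied to can be written down compactly. A standard nondimensional form for a radiating step fin with linearly temperature-dependent conductivity is sketched below; the symbols (θ for dimensionless temperature, β for the conductivity parameter, ψ_i for the radiation-conduction parameters, t_i for the section thicknesses) are generic and may not match the paper's notation exactly:

```latex
% Energy balance in each section i = 1 (thick), 2 (thin):
\frac{\mathrm{d}}{\mathrm{d}X}\!\left[(1+\beta\theta)\,\frac{\mathrm{d}\theta}{\mathrm{d}X}\right]
  - \psi_i\,\theta^{4} = 0, \qquad i = 1,2,
% with temperature and heat-flow continuity at the step X = X_s:
\theta_1(X_s) = \theta_2(X_s), \qquad
t_1\,\frac{\mathrm{d}\theta_1}{\mathrm{d}X}\bigg|_{X_s}
  = t_2\,\frac{\mathrm{d}\theta_2}{\mathrm{d}X}\bigg|_{X_s}.
```

The abrupt change in temperature gradient mentioned in the abstract follows directly from the second matching condition: since t_1 ≠ t_2, the gradients on either side of the step must differ.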

  15. Optimally frugal foraging

    Science.gov (United States)

    Bénichou, O.; Bhat, U.; Krapivsky, P. L.; Redner, S.

    2018-02-01

    We introduce the frugal foraging model in which a forager performs a discrete-time random walk on a lattice in which each site initially contains S food units. The forager metabolizes one unit of food at each step and starves to death when it last ate S steps in the past. Whenever the forager eats, it consumes all food at its current site and this site remains empty forever (no food replenishment). The crucial property of the forager is that it is frugal and eats only when encountering food within at most k steps of starvation. We compute the average lifetime analytically as a function of the frugality threshold and show that there exists an optimal strategy, namely, an optimal frugality threshold k* that maximizes the forager lifetime.
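A Monte Carlo sketch of the frugal forager in one dimension follows (the paper treats the model analytically; the tie-breaking convention that the forager may still eat exactly at the starvation step is an assumption of this sketch):

```python
import random

def forager_lifetime(S, k, trials=500, seed=1):
    """Monte Carlo mean lifetime of a frugal forager on a 1D lattice.
    Every site starts full; the forager starves S steps after it last ate,
    and it eats (consuming the whole site) only when it is within k steps
    of starvation."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        eaten = set()                      # sites already stripped of food
        pos, since_ate, t = 0, 0, 0
        while since_ate < S:               # dies when it last ate S steps ago
            pos += rng.choice((-1, 1))     # unbiased random-walk step
            t += 1
            since_ate += 1
            if (S - since_ate) <= k and pos not in eaten:
                eaten.add(pos)             # frugal: eat only near starvation
                since_ate = 0
        total += t
    return total / trials
```

Sweeping k for fixed S in this simulation is the numerical analogue of the paper's search for the optimal frugality threshold k*.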

  16. From quality control to quality systems in x-ray radiology. Step by step approach

    International Nuclear Information System (INIS)

    Gendrutis Morkunas; Julius Ziliukas

    2007-01-01

    Complete text of publication follows. Quality systems in x-ray radiology, as in any area of medical exposure, are an important tool for optimizing radiation protection. Creation of these systems faces a number of problems: limited resources, lack of knowledge and experience, negative attitudes of hospital staff and administration, and lack of outside advice. Problems of the transitional period can be eased by a step-by-step approach. The following steps can be indicated: providing information on quality systems to hospital staff and administration; simple quality control procedures performed in hospitals by outside experts; preparation of quality-related procedures by hospital staff; practical implementation of quality control procedures by hospital staff; preparation of a quality manual by hospital staff, its integration into the hospital's common quality system (if available) and its continuous development; measurement of performance indicators (e.g., patients' doses) and introduction of corrective measures where necessary; and dissemination of experience by expert organizations and more advanced hospitals. These steps are discussed in the presentation, based on Lithuanian experience since 1998.

  17. Early surgery versus optimal current step-up practice for chronic pancreatitis (ESCAPE): design and rationale of a randomized trial.

    Science.gov (United States)

    Ahmed Ali, Usama; Issa, Yama; Bruno, Marco J; van Goor, Harry; van Santvoort, Hjalmar; Busch, Olivier R C; Dejong, Cornelis H C; Nieuwenhuijs, Vincent B; van Eijck, Casper H; van Dullemen, Hendrik M; Fockens, Paul; Siersema, Peter D; Gouma, Dirk J; van Hooft, Jeanin E; Keulemans, Yolande; Poley, Jan W; Timmer, Robin; Besselink, Marc G; Vleggaar, Frank P; Wilder-Smith, Oliver H; Gooszen, Hein G; Dijkgraaf, Marcel G W; Boermeester, Marja A

    2013-03-18

    In current practice, patients with chronic pancreatitis undergo surgical intervention in a late stage of the disease, when conservative treatment and endoscopic interventions have failed. Recent evidence suggests that surgical intervention early on in the disease benefits patients in terms of better pain control and preservation of pancreatic function. Therefore, we designed a randomized controlled trial to evaluate the benefits, risks and costs of early surgical intervention compared to the current stepwise practice for chronic pancreatitis. The ESCAPE trial is a randomized controlled, parallel, superiority multicenter trial. Patients with chronic pancreatitis, a dilated pancreatic duct (≥5 mm) and moderate pain and/or frequent flare-ups will be registered and followed monthly as potential candidates for the trial. When a registered patient meets the randomization criteria (i.e. need for opioid analgesics) the patient will be randomized to either early surgical intervention (group A) or optimal current step-up practice (group B). An expert panel of chronic pancreatitis specialists will oversee the assessment of eligibility and ensure that allocation to either treatment arm is possible. Patients in group A will undergo pancreaticojejunostomy or a Frey-procedure in case of an enlarged pancreatic head (≥4 cm). Patients in group B will undergo a step-up practice of optimal medical treatment, if needed followed by endoscopic interventions, and if needed followed by surgery, according to predefined criteria. Primary outcome is pain assessed with the Izbicki pain score during a follow-up of 18 months. Secondary outcomes include complications, mortality, total direct and indirect costs, quality of life, pancreatic insufficiency, alternative pain scales, length of hospital admission, number of interventions and pancreatitis flare-ups. For the sample size calculation we defined a minimal clinically relevant difference in the primary endpoint as a difference of at least

  18. Radiation protection optimization in the CAETITE industrial complex

    International Nuclear Information System (INIS)

    Azevedo Py Junior, D.; Figueiredo, N.; Dos Santos Dias, P.L.; Mantovani Lima, H.

    2002-01-01

    This paper presents, briefly, the radiation protection aspects of the process, project and operation of the Caetite Industrial Complex (CIC). Planning priorities were to minimize environmental radiological impact and occupational radiological risk. Based on previous experience, the process and the project were optimized in order to minimize environmental impact and to allow natural environment restoration and operation to proceed simultaneously. Technical, practical and economical advantages became evident during all project phases, from initial project development to the conclusion of all decommissioning steps. Planning, good conduct, adequate working methods and worker training together proved to be the most efficient way to reduce occupational radiological risk. This efficiency was demonstrated during operational tests and the initial operation of the Complex. Radiation protection optimization is achieved through the workers' own sense of responsibility, making safety-correction interventions less frequent and, consequently, minimizing environmental impact. (author)

  19. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils, so detection protocols for B. anthracis in environmental matrices are needed. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included identifying an ideal extraction diluent and varying the number of wash steps, the initial centrifugation speed, and the sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol, with an approximate matrix limit of detection of 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  20. Steps towards optimization of medical exposures

    International Nuclear Information System (INIS)

    Araujo, A.M.C.; Drexler, G.; Oliveira, S.M.V.

    1992-01-01

    Data about ionizing radiation sources used in medical applications, obtained through an IRD/CNEN National Programme carried out together with Brazilian health authorities, are discussed. The data presentation follows, as closely as possible, suggestions given by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). This study uses the country's geographic division into five regions, and the results have also been analyzed for each region. Due to the many demographic, social and economic differences among these regions, some modifications to the UNSCEAR data collection model are proposed for use in developing countries in similar situations. This programme has two main aims: (1) to investigate the status of radiation sources and radiation protection in Brazil, in order to assist Brazilian health authorities in planning regional radiation control programmes and training for medical staff; (2) to implement the system of protection in medical exposures, following the 1990 ICRP recommendations. This includes the justification of practices in medical exposures, the optimization of protection and the possible application of dose constraints. (author)

  1. Optimization design for the stepped impedance transformer based on the genetic algorithm

    International Nuclear Information System (INIS)

    Zou Dehui; Lai Wanchang; Qiu Dong

    2007-01-01

    This paper introduces the basic principle and mathematical model of the stepped impedance transformer, and then compares two design methods for it. The design results are simulated with EDA tools, which indicates that the genetic algorithm design outperforms the Chebyshev integrated design in terms of the maximum reflection-coefficient magnitude. (authors)
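The quantity being optimized, the worst in-band reflection-coefficient magnitude of a cascade of quarter-wave sections, can be computed with ABCD matrices. This generic sketch (not the paper's EDA setup) could serve as the fitness function handed to a genetic algorithm; the ±40% band is an illustrative assumption:

```python
import numpy as np

def max_reflection(Zs, Z0, ZL, fracs=np.linspace(0.6, 1.4, 81)):
    """Worst-case |Gamma| over a frequency band for a cascade of line sections
    (each a quarter wave at center frequency) with characteristic impedances
    Zs, between a feed of impedance Z0 and a load ZL."""
    worst = 0.0
    for f in fracs:                      # f = frequency / center frequency
        theta = np.pi / 2.0 * f          # electrical length of each section
        M = np.eye(2, dtype=complex)
        for Z in Zs:                     # cascade ABCD matrices, feed to load
            M = M @ np.array([[np.cos(theta), 1j * Z * np.sin(theta)],
                              [1j * np.sin(theta) / Z, np.cos(theta)]])
        Zin = (M[0, 0] * ZL + M[0, 1]) / (M[1, 0] * ZL + M[1, 1])
        worst = max(worst, abs((Zin - Z0) / (Zin + Z0)))
    return worst
```

For example, a two-section binomial design (Z1 = Z0(ZL/Z0)^(1/4), Z2 = Z0(ZL/Z0)^(3/4)) gives a lower worst-case reflection over the band than a single quarter-wave section; a GA would search the Zs vector directly for the minimum of this function.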

  2. Optimization of the cascade with gas centrifuges for uranium enrichment

    International Nuclear Information System (INIS)

    Ozaki, N.; Harada, I.

    1976-01-01

    Computer programs to optimize the step and tapered-step cascades with gas centrifuges are developed. The 'Complex Method', a direct search method, is employed to find the optimum of a nonlinear function of several variables within a constrained region. The separation characteristics of the optimized step and tapered-step cascades are discussed in comparison with those of the ideal cascade. The local optima of the cascade profile, the convergence of the objective function, and the stopping criterion for the optimization trials are also discussed. (author)

  3. Sequential optimization of a polygeneration plant

    International Nuclear Information System (INIS)

    Rubio-Maya, Carlos; Uche, Javier; Martinez, Amaya

    2011-01-01

    Highlights: → A two-step optimization procedure for a polygeneration unit was tested. → The first step was the synthesis and design; the superstructure definition was used. → The second step optimized the operation with hourly data and energy storage systems. → Remarkable benefits for the analyzed case study (Spanish hotel) were found. - Abstract: This paper presents a two-step optimization procedure for a polygeneration unit. The unit simultaneously provides power, heat, cooling and fresh water to a Spanish tourist resort (450 rooms). The first step consists of the synthesis and design of the polygeneration scheme: a 'superstructure' was constructed to allow selection of the appropriate type and size of plant components, from both economic and environmental considerations. In that first step, only monthly averaged requirements are considered. The second step includes hourly data and analysis as well as energy storage systems. A detailed modelling of pre-selected devices is then required to also fulfil economic and environmental constraints. As a result, a better performance is obtained compared to the first step. Thus, the two-step procedure explained here permits the complete design and operation of a decentralized plant simultaneously producing energy (power, heat and cooling) and desalted water (that is, trigeneration + desalination). Remarkable benefits for the analyzed case study are found: a Net Present Value of almost 300,000 Euro, a primary energy saving ratio of about 18% and more than 850 tons per year of avoided CO2 emissions.

  4. First steps in combinatorial optimization on graphons: matchings

    Czech Academy of Sciences Publication Activity Database

    Doležal, Martin; Hladký, J.; Hu, P.; Piguet, Diana

    2017-01-01

    Roč. 61, August (2017), s. 359-365 ISSN 1571-0653 R&D Projects: GA ČR GA16-07378S; GA ČR GJ16-07822Y EU Projects: European Commission(XE) 628974 - PAECIDM Institutional support: RVO:67985840 ; RVO:67985807 Keywords : graphon * graph limits * matching * combinatorial optimization Subject RIV: BA - General Mathematics ; BA - General Mathematics (UIVT-O) OBOR OECD: Pure mathematics ; Pure mathematics (UIVT-O) http://www.sciencedirect.com/science/article/pii/S1571065317301452

  6. Collaborative application of BEPS at different time steps.

    Science.gov (United States)

    Lu, Wei; Fan, Wen Yi; Tian, Tian

    2016-09-01

    BEPSHourly simulates the ecological and physiological processes of vegetation at hourly time steps; because of its more complex model structure and time-consuming solution process, it is usually applied at site scale to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP). The daily photosynthetic-rate calculation in the BEPSDaily model, by contrast, is simpler and less time-consuming, involving no lengthy iterative processes, and is suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposes a method for the collaborative application of BEPS at daily and hourly time steps. First, BEPSHourly was used to optimize the main photosynthetic parameters, the maximum carboxylation rate (Vcmax) and the maximum rate of photosynthetic electron transport (Jmax), at site scale; the two optimized parameters were then introduced into the BEPSDaily model to estimate NPP at regional scale. The results showed that optimizing the main photosynthesis parameters based on flux data can improve the simulation ability of the model. In 2011, the primary productivity of the forest types in descending order was deciduous broad-leaved forest, mixed forest, and coniferous forest. The collaborative application of carbon cycle models at different time steps proposed in this study can effectively optimize the main photosynthesis parameters Vcmax and Jmax, simulate the monthly averaged diurnal GPP and NPP, calculate regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.

  7. Determination of the step dipole moment and the step line tension on Ag(0 0 1) electrodes

    International Nuclear Information System (INIS)

    Beltramo, G.L.; Ibach, H.; Linke, U.; Giesen, M.

    2008-01-01

    Using impedance spectroscopy, we determined the step dipole moment and the potential dependence of the step line tension of silver electrodes in contact with an electrolyte: (0 0 1) and vicinal (1 1 n) surfaces with n = 5, 7, 11 were investigated in 10 mM ClO4^- solutions. The step dipole moment is determined from the shift of the potential of zero charge (pzc) as a function of the surface step density. The dipole moment per step atom was found to be 3.5 ± 0.5 × 10^-3 e·Å. From the pzc and the potential dependence of the capacitance curves, the potential dependence of the surface tension of the vicinal surfaces is determined. The line tension of the steps is then calculated from the difference between the surface tensions of the stepped (1 1 n) and the nominally step-free (0 0 1) surfaces. The results are compared to a previous study on Au(1 1 n) surfaces. For gold, the step line tension decreases roughly linearly with potential, whereas a parabolic shape is observed for silver.

  8. Distributed Robust Optimization in Networked System.

    Science.gov (United States)

    Wang, Shengnan; Li, Chunguang

    2016-10-11

    In this paper, we consider a distributed robust optimization (DRO) problem, where multiple agents in a networked system cooperatively minimize a global convex objective function with respect to a global variable under global constraints. The objective function can be represented by a sum of local objective functions. The global constraints contain some uncertain parameters which are partially known, and can be characterized by inequality constraints. After problem transformation, we adopt the Lagrangian primal-dual method to solve this problem. We prove that the primal and dual optimal solutions of the problem are restricted to some specific sets, and we give a method to construct these sets. We then propose a DRO algorithm to find the primal-dual optimal solutions of the Lagrangian function, consisting of a subgradient step, a projection step, and a diffusion step; in the projection step, the optimized variables are projected onto the specific sets to guarantee the boundedness of the subgradients. Convergence analysis and numerical simulations verifying the performance of the proposed algorithm are then provided. Further, for the nonconvex DRO problem, a corresponding approach and algorithm framework are also provided.
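The subgradient-projection-diffusion structure can be sketched on a toy consensus problem. Quadratic local objectives and a simple box constraint stand in for the paper's uncertain global constraints and constructed projection sets; `W` is a doubly stochastic mixing matrix, and all names are illustrative:

```python
import numpy as np

def distributed_subgradient(a, W, lo=-1.0, hi=1.0, iters=500, step0=0.5):
    """Subgradient + projection + diffusion sketch. Agent i holds estimate x[i]
    and a local objective f_i(x) = (x - a[i])**2; the network minimizes the sum
    of the f_i subject to x in [lo, hi]."""
    x = np.zeros(len(a))
    for t in range(1, iters + 1):
        g = 2.0 * (x - a)                     # local subgradients
        x = x - (step0 / np.sqrt(t)) * g      # subgradient step (diminishing)
        x = np.clip(x, lo, hi)                # projection onto the constraint set
        x = W @ x                             # diffusion: average with neighbors
    return x
```

With these quadratic objectives the common minimizer is the (clipped) mean of the `a[i]`, so the agents' estimates should agree on that value after mixing.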

  9. Single step biotransformation of corn oil phytosterols to boldenone by a newly isolated Pseudomonas aeruginosa

    Directory of Open Access Journals (Sweden)

    Mohamed Eisa

    2016-09-01

    A new potent Pseudomonas aeruginosa isolate capable of biotransformation of corn oil phytosterol (PS) to 4-androstene-3,17-dione (AD), testosterone (T) and boldenone (BOL) was identified by phenotypic analysis and 16S rRNA gene sequencing. A sequential statistical strategy was used to optimize the biotransformation process, mainly concerning BOL, using factorial design and response surface methodology (RSM). The production of BOL in a single-step microbial biotransformation from corn oil phytosterols by P. aeruginosa was not previously reported. Results showed that the pH of the medium and the (NH4)2SO4 and KH2PO4 concentrations were the most significant factors affecting BOL production. By analyzing the three-dimensional surface plots of the statistical model, BOL production increased from 36.8% to 42.4% after the first step of optimization, and the overall biotransformation increased to 51.9%. After applying the second step of the sequential statistical strategy, BOL production increased to 53.6%, and the overall biotransformation increased to 91.9% using the following optimized medium composition (g/l distilled water): (NH4)2SO4, 2; KH2PO4, 4; Na2HPO4, 1; MgSO4·7H2O, 0.3; NaCl, 0.1; CaCl2·2H2O, 0.1; FeSO4·7H2O, 0.001; ammonium acetate, 0.001; Tween 80, 0.05%; corn oil, 0.5%; 8-hydroxyquinoline, 0.016; pH 8; 200 rpm agitation speed and incubation time 36 h at 30 °C. Validation experiments proved the adequacy and accuracy of the model, and the results showed the predicted values agreed well with the experimental values.

  10. Cost-utility Analysis: Thiopurines Plus Endoscopy-guided Biological Step-up Therapy is the Optimal Management of Postoperative Crohn's Disease.

    Science.gov (United States)

    Candia, Roberto; Naimark, David; Sander, Beate; Nguyen, Geoffrey C

    2017-11-01

    Postoperative recurrence of Crohn's disease is common. This study sought to assess whether the postoperative management should be based on biological therapy alone or combined with thiopurines and whether the therapy should be started immediately after surgery or guided by either endoscopic or clinical recurrence. A Markov model was developed to estimate expected health outcomes in quality-adjusted life years (QALYs) and costs in Canadian dollars (CAD$) accrued by hypothetical patients with high recurrence risk after ileocolic resection. Eight strategies of postoperative management were evaluated. A lifetime time horizon, an annual discount rate of 5%, a societal perspective, and a cost-effectiveness threshold of 50,000 CAD$/QALY were assumed. Deterministic and probabilistic sensitivity analyses were conducted. The model was validated against randomized trials and historical cohorts. Three strategies dominated the others: endoscopy-guided full step-up therapy (14.80 QALYs, CAD$ 462,180), thiopurines immediately post-surgery plus endoscopy-guided biological step-up therapy (14.89 QALYs, CAD$ 464,099) and combination therapy immediately post-surgery (14.94 QALYs, CAD$ 483,685). The second strategy was the most cost-effective, assuming a cost-effectiveness threshold of 50,000 CAD$/QALY. Probabilistic sensitivity analysis showed that the second strategy has the highest probability of being the optimal alternative in all comparisons at cost-effectiveness thresholds from 30,000 to 100,000 CAD$/QALY. The strategies guided only by clinical recurrence and those using biologics alone were dominated. According to this decision analysis, thiopurines immediately after surgery and addition of biologics guided by endoscopic recurrence is the optimal strategy of postoperative management in patients with Crohn's disease with high risk of recurrence (see Video Abstract, Supplemental Digital Content 1, http://links.lww.com/IBD/B654).
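
    The QALY side of a Markov cohort model like the one used in this study can be sketched in a few lines. The states, transition probabilities, utilities, and 40-year horizon below are invented placeholders rather than the study's calibrated inputs; only the 5% annual discount rate comes from the abstract:

```python
# Toy 3-state Markov cohort model of postoperative disease progression.
states = ["remission", "endoscopic_recurrence", "clinical_recurrence"]
P = [  # hypothetical annual transition probabilities (rows sum to 1)
    [0.80, 0.15, 0.05],
    [0.00, 0.70, 0.30],
    [0.00, 0.00, 1.00],
]
utility = [0.90, 0.80, 0.60]    # hypothetical quality weights per state-year
discount = 0.05                 # annual discount rate, as in the study

dist = [1.0, 0.0, 0.0]          # cohort starts in remission after surgery
qalys = 0.0
for year in range(40):          # finite-horizon stand-in for "lifetime"
    qalys += sum(d * u for d, u in zip(dist, utility)) / (1 + discount) ** year
    dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

print(round(qalys, 2))          # expected discounted QALYs per patient
```

    A strategy comparison would run this accumulation once per management strategy (with strategy-specific transition probabilities and a parallel cost accumulator) and compare incremental cost per incremental QALY against the 50,000 CAD$/QALY threshold.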

  11. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
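
    The weighted binary matrix sampling idea can be illustrated with a toy stand-in. The scoring function below is an invented surrogate (real VISSA evaluates PLS sub-models by cross-validated prediction error), and the variable counts are arbitrary:

```python
import random
random.seed(0)

n_vars, n_models = 10, 200
informative = {0, 1, 2}          # toy ground truth: 3 useful variables

def score(subset):               # invented surrogate error: lower is better
    return -sum(v in informative for v in subset) \
           + 0.1 * len(subset) + random.gauss(0.0, 0.05)

weights = [0.5] * n_vars         # initial inclusion probability per variable
for _ in range(15):              # each loop shrinks the variable space
    # rows of the weighted binary matrix = sampled variable subsets
    rows = [[v for v in range(n_vars) if random.random() < weights[v]]
            for _ in range(n_models)]
    rows.sort(key=score)
    best = rows[: n_models // 10]     # keep the best 10% of sub-models
    # new weight = frequency of each variable among the best sub-models
    weights = [sum(v in r for r in best) / len(best) for v in range(n_vars)]

selected = [v for v in range(n_vars) if weights[v] > 0.5]
print(selected)
```

    The two rules from the abstract show up directly: weights of uninformative variables decay toward zero (the space shrinks), and the surviving subspace scores better than its predecessor because it is built from the best sub-models of the previous round.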

  12. Numerical modeling and optimization of the Iguassu gas centrifuge

    Science.gov (United States)

    Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.

    2017-07-01

    The full procedure of the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is under discussion. The procedure consists of a few steps. In the first step, the problem of the hydrodynamic flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the problem of diffusion of the binary mixture of isotopes is solved, after which the separative power of the gas centrifuge is calculated. In the last step, the time-consuming procedure of optimization of the GC is performed, yielding the maximum of the separative power. The optimization is based on the BOBYQA method, exploiting the results of numerical simulations of the hydrodynamics and diffusion of the mixture of isotopes. Fast convergence of the calculations is achieved through the use of a direct solver in the solution of the hydrodynamic and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
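
    BOBYQA itself (Powell's bound-constrained, derivative-free trust-region method) is too long to reproduce here, but the overall pattern — derivative-free optimization wrapped around an expensive solver — can be sketched with a simple compass search. The two "internal parameters" and the mock separative-power function are invented for the sketch:

```python
def separative_power(p):          # stand-in for the expensive flow/diffusion solve
    x, y = p
    return 10.0 - (x - 1.2) ** 2 - 2.0 * (y - 0.7) ** 2

p, step, evals = [0.0, 0.0], 0.5, 1
best = separative_power(p)
while step > 1e-6:
    improved = False
    for i in (0, 1):              # poll each parameter in both directions
        for d in (step, -step):
            q = list(p)
            q[i] += d
            v = separative_power(q)
            evals += 1
            if v > best:
                p, best, improved = q, v, True
    if not improved:
        step /= 2.0               # shrink the pattern, akin to a trust region
print(p, evals)
```

    The evaluation counter is the quantity that matters in practice: each call stands for a full hydrodynamics-plus-diffusion solve, which is why a fast direct solver and a sample-efficient method such as BOBYQA keep the 45-iteration optimization tractable.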

  13. Step-by-step manual for planning and performing bifurcation PCI: a resource-tailored approach.

    Science.gov (United States)

    Milasinovic, Dejan; Wijns, William; Ntsekhe, Mpiko; Hellig, Farrel; Mohamed, Awad; Stankovic, Goran

    2018-02-02

    As bifurcation PCI can often be resource-demanding due to the use of multiple guidewires, balloons and stents, different technical options are sometimes explored, in different local settings, to meet the need to treat a patient with a bifurcation lesion optimally while being confronted with limited material resources. Therefore, it seems important to keep a proper balance between what is recognised as the contemporary state of the art, and what is known to be potentially harmful and to be discouraged. Ultimately, the resource-tailored approach to bifurcation PCI may be characterised by the notion of minimum technical requirements for each step of a successful procedure. Hence, this paper describes the logical sequence of steps when performing bifurcation PCI with provisional SB stenting, starting with basic anatomy assessment and ending with the optimisation of MB stenting and the evaluation of the potential need to stent the SB, suggesting, for each step, the minimum technical requirement for a successful intervention.

  14. BWR fuel cycle optimization using neural networks

    International Nuclear Information System (INIS)

    Ortiz-Servin, Juan Jose; Castillo, Jose Alejandro; Pelta, David Alejandro

    2011-01-01

    Highlights: → OCONN is a new system to optimize all nuclear fuel management steps in a coupled way. → OCONN is based on an artificial recurrent neural network to find the best combination of partial solutions to each fuel management step. → OCONN works with a fuel lattices' stock, a fuel reloads' stock and a control rod patterns' stock, previously obtained with different heuristic techniques. → Results show OCONN is able to find good combinations according to the global objective function. - Abstract: In nuclear fuel management activities for BWRs, four combinatorial optimization problems are solved: fuel lattice design, axial fuel bundle design, fuel reload design and control rod pattern design. Traditionally, these problems have been solved separately due to their complexity and the required computational resources. In the specialized literature there are some attempts to solve fuel reload and control rod pattern design, or fuel lattice and axial fuel bundle design, in a coupled way. In this paper, the OCONN system, which solves all of these problems in a coupled way, is presented. This system is based on an artificial recurrent neural network that finds the best combination of partial solutions to each problem, in order to maximize a global objective function. The new system works with a fuel lattices' stock, a fuel reloads' stock and a control rod patterns' stock, previously obtained with different heuristic techniques. The system was tested by designing an equilibrium cycle with a cycle length of 18 months. Results show that the new system is able to find good combinations. The cycle length is reached and safety parameters are fulfilled.

  15. Modified random hinge transport mechanics and multiple scattering step-size selection in EGS5

    International Nuclear Information System (INIS)

    Wilderman, S.J.; Bielajew, A.F.

    2005-01-01

    The new transport mechanics in EGS5 allows for significantly longer electron transport step sizes and hence shorter computation times than required for identical problems in EGS4. But as with all Monte Carlo electron transport algorithms, certain classes of problems exhibit step-size dependencies even when operating within recommended ranges, sometimes making the selection of step sizes a daunting task for novice users. Further contributing to this problem, because of the decoupling of multiple scattering and continuous energy loss in the dual random hinge transport mechanics of EGS5, there are two independent step sizes in EGS5, one for multiple scattering and one for continuous energy loss, each of which influences speed and accuracy in a different manner. Furthermore, whereas EGS4 used a single value of fractional energy loss (ESTEPE) to determine step sizes at all energies, EGS5 permits the fractional energy loss values which are used to determine both the multiple scattering and continuous energy loss step sizes to vary with energy, in order to increase performance by decreasing the amount of effort expended simulating lower energy particles. This results in requiring the user to specify four fractional energy loss values when optimizing computations for speed. Thus, in order to simplify step-size selection and to mitigate step-size dependencies, a method has been devised to automatically optimize step-size selection based on a single material-dependent input related to the size of the problem tally region. In this paper we discuss the new transport mechanics in EGS5 and describe the automatic step-size optimization algorithm. (author)

  16. Comparison of Algorithms for the Optimal Location of Control Valves for Leakage Reduction in WDNs

    Directory of Open Access Journals (Sweden)

    Enrico Creaco

    2018-04-01

    Full Text Available The paper presents the comparison of two different algorithms for the optimal location of control valves for leakage reduction in water distribution networks (WDNs). The former is based on the sequential addition (SA) of control valves. At the generic step Nval of SA, the search for the optimal combination of Nval valves is carried out while retaining the optimal combination of Nval − 1 valves found at the previous step. Therefore, only one new valve location is searched for at each step of SA, among all the remaining available locations. The latter algorithm consists of a multi-objective genetic algorithm (GA), in which valve locations are encoded inside individual genes. For the sake of consistency, the same embedded algorithm, based on iterated linear programming (LP), was used inside SA and GA to search for the optimal valve settings at the various time slots in the day. The results of applications to two WDNs show that SA and GA yield identical results for small values of Nval. When this number grows, the limitations of SA, related to its reduced exploration of the search space, emerge. In fact, for higher values of Nval, SA tends to produce less beneficial valve locations in terms of leakage abatement. However, the smaller computation time of SA may make this algorithm preferable in the case of large WDNs, for which the application of GA would be overly burdensome.
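
    The SA strategy is easy to state in code. The candidate locations and the leakage model below are invented for the sketch; in the paper, each evaluation would instead solve the embedded iterated-LP problem for the optimal valve settings:

```python
locations = ["p1", "p2", "p3", "p4", "p5"]     # candidate pipes for valves
benefit = {"p1": 4.0, "p2": 3.0, "p3": 2.5, "p4": 1.0, "p5": 0.5}

def leakage_reduction(valves):   # mock objective with diminishing returns
    return sum(benefit[v] for v in valves) - 0.4 * len(valves) ** 2

chosen = []                      # SA: keep previous valves, add one at a time
for n_val in range(1, 4):
    best_loc = max((l for l in locations if l not in chosen),
                   key=lambda l: leakage_reduction(chosen + [l]))
    chosen.append(best_loc)

print(chosen)                    # → ['p1', 'p2', 'p3']
```

    With this additively separable toy objective, SA happens to find the global optimum. The paper's point is precisely that on real WDNs the objective is not separable, so for larger Nval the greedy, path-dependent choices of SA can miss combinations that the GA's broader search finds.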

  17. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    KAUST Repository

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions

  18. On simultaneous shape and orientational design for eigenfrequency optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2007-01-01

    Plates with an internal hole of fixed area are designed in order to maximize the performance with respect to eigenfrequencies. The optimization is performed by simultaneous shape, material, and orientational design. The shape of the hole is designed, and the material design is the design of an orthotropic material that can be considered as a fiber-net within each finite element. This fiber-net is optimally oriented in the individual elements of the finite element discretization. The optimizations are performed using the finite element method for analysis, and the optimization approach is a two-step method. In the first step, we find the best design on the basis of a recursive optimization procedure based on optimality criteria. In the second step, mathematical programming and sensitivity analysis are applied to find the final optimized design.

  19. Biometric Quantization through Detection Rate Optimized Bit Allocation

    Directory of Open Access Journals (Sweden)

    C. Chen

    2009-01-01

    Full Text Available Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has focused on the design of optimal quantization and coding for each single feature component, yet the binary string—the concatenation of all coded feature components—is not optimal. In this paper, we present a detection rate optimized bit allocation (DROBA) principle, which assigns more bits to discriminative features and fewer bits to nondiscriminative features. We further propose a dynamic programming (DP) approach and a greedy search (GS) approach to achieve DROBA. Experiments of DROBA on the FVC2000 fingerprint database and the FRGC face database show good performance. As a universal method, DROBA is applicable to arbitrary biometric modalities, such as fingerprint texture, iris, signature, and face. DROBA will bring significant benefits not only to template protection systems but also to systems with fast matching requirements or constrained storage capability.
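
    The greedy-search flavour of DROBA can be sketched as follows. The per-feature detection-rate table is an invented placeholder (in practice such curves are estimated from genuine-user statistics), and the overall rate is modelled here as a product of per-feature rates:

```python
# detection_rate[f][b]: estimated detection rate of feature f coded at b bits
detection_rate = [
    [1.00, 0.98, 0.95, 0.80],    # discriminative feature: extra bits are cheap
    [1.00, 0.97, 0.90, 0.60],
    [1.00, 0.85, 0.50, 0.20],    # weak feature: best left at 0 bits
]
total_bits = 4                   # fixed length of the final binary string

def overall(alloc):              # overall detection rate as a product
    r = 1.0
    for f, b in enumerate(alloc):
        r *= detection_rate[f][b]
    return r

bits = [0, 0, 0]
for _ in range(total_bits):      # greedily place each bit where it hurts least
    best_f = max((f for f in range(3) if bits[f] < 3),
                 key=lambda f: overall(bits[:f] + [bits[f] + 1] + bits[f + 1:]))
    bits[best_f] += 1

print(bits)                      # → [2, 2, 0]
```

    The discriminative features absorb all the bits while the weak feature is left unquantized, which is exactly the behaviour the DROBA principle describes; the DP approach would instead search the allocation table exactly rather than greedily.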

  20. Step-by-step phacoemulsification training program for ophthalmology residents

    Directory of Open Access Journals (Sweden)

    Wang Yulan

    2013-01-01

    Full Text Available Aims: The aim was to analyze the learning curve of phacoemulsification (phaco) performed by residents without experience in performing extra-capsular cataract extraction (ECCE) in a step-by-step training program (SBSTP). Materials and Methods: Consecutive surgical records of phaco performed from March 2009 to Sept 2011 by four residents without previous ECCE experience were retrospectively reviewed. The completion rate of the first 30 procedures by each resident was calculated. The main intraoperative phaco parameter records for the first 30 surgeries by each resident were compared with those for their last 30 surgeries. Intraoperative complications in the residents' procedures were also recorded and analyzed. Results: A total of 1013 surgeries were performed by residents. The completion rate for the first 30 phaco procedures was 79.2 ± 5.8%. The main reasons for halting the procedure were as follows: anterior capsule tear, inability to crack the nucleus, and posterior capsular rupture during phaco or cortex removal. Cumulative dissipated energy of phaco power used during the surgeries was significantly less in the last 30 cases compared with the first 30 cases (30.10 ± 17.58 vs. 55.41 ± 37.59, P = 0.021). The posterior capsular rupture rate was 2.5 ± 1.2% in total (10.8 ± 4.2% in the first 30 cases and 1.7 ± 1.9% in the last 30 cases, P = 0.008); a statistically significant difference. Conclusion: The step-by-step training program might be a necessary process for a resident to transition from dependence to being a self-supported operator. It is also an essential middle step between wet lab training and performing the entire phaco procedure on a patient both effectively and safely.

  1. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible to add new parameters in the structure. • Application of a novel structure in INP file processing to conveniently evaluate cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated format and is error-prone when describing geometric models. Because of this, a conversion algorithm that translates a general geometric model into an MCNP model during MCNP-aided modeling is highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed after the STEP file and INP file formats were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of the geometry and topology information of the STEP file, as well as improved production efficiency of the output INP file. This proposed research is promising, and serves as a valuable reference for researchers involved with MCNP-related research

  2. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location

    Science.gov (United States)

    Bancroft, Matthew J.; Day, Brian L.

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body’s momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait. PMID:28066208

  4. Optimal control linear quadratic methods

    CERN Document Server

    Anderson, Brian D O

    2007-01-01

    This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material. The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the
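
    In the simplest scalar discrete-time case, the regulator theory the blurb refers to reduces to a one-line Riccati recursion. The plant and cost numbers below are arbitrary illustrations, not taken from the book:

```python
a, b = 1.1, 0.5        # plant x_{k+1} = a*x_k + b*u_k (open-loop unstable)
q, r = 1.0, 1.0        # quadratic stage cost q*x^2 + r*u^2

p = q                  # iterate the Riccati map to its fixed point
for _ in range(200):
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
k_gain = a * b * p / (r + b * b * p)    # optimal feedback u = -k_gain * x

x, cost = 1.0, 0.0     # closed-loop simulation from x_0 = 1
for _ in range(50):
    u = -k_gain * x
    cost += q * x * x + r * u * u
    x = a * x + b * u

print(round(k_gain, 3), round(cost, 3))
```

    The accumulated closed-loop cost matches p·x_0², the optimal value the theory predicts, and |a − b·k_gain| < 1 confirms the regulator stabilizes the unstable plant.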

  5. First-step nucleation growth dependence of InAs/InGaAs/InP quantum dot formation in two-step growth

    International Nuclear Information System (INIS)

    Yin Zongyou; Tang Xiaohong; Deny, Sentosa; Chin, Mee Koy; Zhang Jixuan; Teng Jinghua; Du Anyan

    2008-01-01

    First-step nucleation growth has an important impact on the two-step growth of high-quality mid-infrared emissive InAs/InGaAs/InP quantum dots (QDs). It has been found that an optimized growth rate for first-step nucleation is critical for forming QDs with narrow size distribution, high dot density and high crystal quality. High growth temperature has an advantage in removing defects in the QDs formed, but reduces the dot density. Contrasting behavior in forming InAs QDs using metal-organic vapor phase epitaxy (MOVPE) by varying the input flux ratio of the group-V versus group-III source (V/III ratio) in the first-step nucleation growth has been observed and investigated. High-density (2.5 × 10^10 cm^-2) InAs QDs emitting at >2.15 μm have been formed with narrow size distribution (∼1 nm standard deviation) by reducing the V/III ratio to zero in the first-step nucleation growth.

  6. Optimization of experimental conditions for the monitoring of nucleation and growth of racemic Diprophylline from the supercooled melt

    Science.gov (United States)

    Lemercier, Aurélien; Viel, Quentin; Brandel, Clément; Cartigny, Yohann; Dargent, Eric; Petit, Samuel; Coquerel, Gérard

    2017-08-01

    Since more and more pharmaceutical substances are developed as amorphous forms, it is nowadays of major relevance to gain insight into the nucleation and growth mechanisms from supercooled melts (SCM). A step-by-step approach to recrystallization from a SCM is presented here, designed to elucidate the impact of various experimental parameters. Using the bronchodilator agent Diprophylline (DPL) as a model compound, it is shown that optimal conditions for informative observations of the crystallization behaviour of supercooled racemic DPL require placing samples between two cover slides with a maximum sample thickness of 20 μm, and monitoring recrystallization during an annealing step of 30 min at 70 °C, i.e. about 33 °C above the glass transition temperature. In these optimized conditions, it could be established that DPL crystallization proceeds in two steps: spontaneous nucleation and growth of large and well-faceted particles of a new crystal form (primary crystals: PC), and subsequent crystallization of a previously known form (RII) that develops from specific surfaces of PC. The formation of PC particles therefore constitutes the key step of the crystallization events and is shown to be favoured by at least 2.33 wt% of the major chemical impurity, Theophylline.

  7. PWR fuel management optimization

    International Nuclear Information System (INIS)

    Dumas, Michel.

    1981-10-01

    This report addresses the optimization of the refueling pattern of a nuclear reactor. At the beginning of a reactor cycle a batch of fuel assemblies is available, and the physical properties of the assemblies are known; the mathematical problem is to determine the refueling pattern which maximizes the reactivity or which provides the flattest possible power distribution. The state of the core is mathematically characterized by a system of partial differential equations, its smallest eigenvalue and the associated eigenvector. After a study of the convexity properties of the problem, two algorithms are proposed. The first one exchanges assemblies to improve the starting configurations; the enumeration of the exchanges is limited to the 2 by 2, 3 by 3, and 4 by 4 permutations. The second one builds a solution in two steps: in the first step, the discrete variables are replaced by continuous variables, and the resulting nonlinear optimization problem is solved by ''the Method of Approximation Programming''; in the second step, the refueling pattern which provides the best approximation of the optimal power distribution is searched for by a Branch and Bound method [fr
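
    The first algorithm (improvement by enumerated exchanges) can be illustrated with a toy surrogate objective. The "reactivity" values and position weights below are invented, and a real implementation would re-solve the core eigenvalue problem at each evaluation rather than use a closed-form score:

```python
import itertools

# hypothetical per-assembly reactivities and per-position importance weights
reactivity = [1.8, 1.2, 2.1, 0.9, 1.5, 1.1]
weight = [3.0, 1.0, 2.0, 1.0, 2.0, 3.0]

def score(pattern):              # invented surrogate for core reactivity
    return sum(reactivity[a] * w for a, w in zip(pattern, weight))

pattern = list(range(6))         # starting configuration: loading order 0..5
improved = True
while improved:                  # repeat 2-by-2 exchange passes until stable
    improved = False
    for i, j in itertools.combinations(range(6), 2):
        trial = pattern[:]
        trial[i], trial[j] = trial[j], trial[i]
        if score(trial) > score(pattern):
            pattern, improved = trial, True

print(pattern, round(score(pattern), 2))
```

    For this separable surrogate, pairwise exchanges alone reach the global optimum (high-reactivity assemblies end up in high-weight positions); the report's 3-by-3 and 4-by-4 permutations matter when the true, coupled power-distribution objective leaves 2-exchange local optima that larger moves can escape.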

  8. Effect of beamlet step-size on IMRT plan quality

    International Nuclear Information System (INIS)

    Zhang Guowei; Jiang Ziping; Shepard, David; Earl, Matt; Yu, Cedric

    2005-01-01

    We have studied the degree to which beamlet step-size impacts the quality of intensity modulated radiation therapy (IMRT) treatment plans. Treatment planning for IMRT begins with the application of a grid that divides each beam's-eye-view of the target into a number of smaller beamlets (pencil beams) of radiation. The total dose is computed as a weighted sum of the dose delivered by the individual beamlets. The width of each beamlet is set to match the width of the corresponding leaf of the multileaf collimator (MLC). The length of each beamlet (beamlet step-size) is parallel to the direction of leaf travel. The beamlet step-size represents the minimum stepping distance of the leaves of the MLC and is typically predetermined by the treatment planning system. This selection imposes an artificial constraint because the leaves of the MLC and the jaws can both move continuously. Removing the constraint can potentially improve the IMRT plan quality. In this study, the optimized results were achieved using an aperture-based inverse planning technique called direct aperture optimization (DAO). We have tested the relationship between pencil beam step-size and plan quality using the American College of Radiology's IMRT test case. For this case, a series of IMRT treatment plans were produced using beamlet step-sizes of 1, 2, 5, and 10 mm. Continuous improvements were seen with each reduction in beamlet step size. The maximum dose to the planning target volume (PTV) was reduced from 134.7% to 121.5% and the mean dose to the organ at risk (OAR) was reduced from 38.5% to 28.2% as the beamlet step-size was reduced from 10 to 1 mm. The smaller pencil beam sizes also led to steeper dose gradients at the junction between the target and the critical structure with gradients of 6.0, 7.6, 8.7, and 9.1 dose%/mm achieved for beamlet step sizes of 10, 5, 2, and 1 mm, respectively

  9. Optimization of stepped-cone CVT for lower-limb exoskeletons

    Directory of Open Access Journals (Sweden)

    Ashish Singla

    2016-09-01

    Full Text Available Wearable exoskeletons offer interesting possibilities to address the global concerns of the ageing society, and hence many researchers and industries are investing significant resources to develop new innovations in the area of physical assistance. An important issue in providing effective physical assistance is how the needed torques can be generated efficiently and effectively. This paper considers this area and explores the use of continuously variable transmissions (CVTs) for upgrading/downgrading torques so that the torque variations for performing motions of normal daily living can be provided. The knee joint is focused upon to develop the key stages of the CVT-based approach to generating motion torques. From our ongoing research on developing assistive exoskeletons to support activities of daily living, it has been found that 6.3–20.6 Nm of torque is required to provide 10–20% assistance at the knee joint of a healthy elderly person weighing 70–90 kg. The challenge here is to miniaturize conventional CVTs developed for automobiles, where large torques are needed. To achieve the required torque range for supporting human joints in various motions, a CVT is designed and its parameters optimized. Results are validated via professional optimization software.

  10. Online algorithms for optimal energy distribution in microgrids

    CERN Document Server

    Wang, Yu; Nelms, R Mark

    2015-01-01

    Presenting an optimal energy distribution strategy for microgrids in a smart grid environment, and featuring a detailed analysis of the mathematical techniques of convex optimization and online algorithms, this book provides readers with essential content on how to achieve multi-objective optimization that takes into consideration power subscribers, energy providers and grid smoothing in microgrids. Featuring detailed theoretical proofs and simulation results that demonstrate and evaluate the correctness and effectiveness of the algorithm, this text explains step-by-step how the problem can b

  11. Optimal control with aerospace applications

    CERN Document Server

    Longuski, James M; Prussing, John E

    2014-01-01

    Want to know not just what makes rockets go up but how to do it optimally? Optimal control theory has become such an important field in aerospace engineering that no graduate student or practicing engineer can afford to be without a working knowledge of it. This is the first book that begins from scratch to teach the reader the basic principles of the calculus of variations, develop the necessary conditions step-by-step, and introduce the elementary computational techniques of optimal control. This book, with problems and an online solution manual, provides the graduate-level reader with enough introductory knowledge so that he or she can not only read the literature and study the next level textbook but can also apply the theory to find optimal solutions in practice. No more is needed than the usual background of an undergraduate engineering, science, or mathematics program: namely calculus, differential equations, and numerical integration. Although finding optimal solutions for these problems is a...

  12. Incorporating prior knowledge into beam orientation optimization in IMRT

    International Nuclear Information System (INIS)

    Pugachev, Andrei M.S.; Lei Xing

    2002-01-01

    Purpose: Selection of beam configuration in currently available intensity-modulated radiotherapy (IMRT) treatment planning systems is still based on trial-and-error search. Computer beam orientation optimization has the potential to improve the situation, but its practical implementation is hindered by the excessive computing time associated with the calculation. The purpose of this work is to provide an effective means to speed up the beam orientation optimization by incorporating a priori geometric and dosimetric knowledge of the system and to demonstrate the utility of the new algorithm for beam placement in IMRT. Methods and Materials: Beam orientation optimization was performed in two steps. First, the quality of each possible beam orientation was evaluated using beam's-eye-view dosimetrics (BEVD) developed in our previous study. A simulated annealing algorithm was then employed to search for the optimal set of beam orientations, taking into account the BEVD scores of different incident beam directions. During the calculation, sampling of gantry angles was weighted according to the BEVD score computed before the optimization. A beam direction with a higher BEVD score had a higher probability of being included in the trial configuration, and vice versa. The inclusion of the BEVD weighting in the stochastic beam angle sampling process made it possible to avoid spending valuable computing time unnecessarily at 'bad' beam angles. An iterative inverse treatment planning algorithm was used for beam intensity profile optimization during the optimization process. The BEVD-guided beam orientation optimization was applied to an IMRT treatment of paraspinal tumor. The advantage of the new optimization algorithm was demonstrated by comparing the calculation with the conventional scheme without the BEVD weighting in the beam sampling. 
Results: The BEVD tool provided useful guidance for selecting potentially good beam incidence directions and was used
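The BEVD-weighted sampling inside simulated annealing can be sketched as follows. The BEVD scores, the cost function, and all parameters below are stand-ins (a real system would run inverse planning to cost each trial beam configuration):

```python
import math, random

random.seed(0)

# Hypothetical BEVD scores for 36 candidate gantry angles (10 deg apart);
# higher score = more promising direction. Stand-ins for dosimetric scores.
angles = list(range(0, 360, 10))
bevd = {a: 1.0 + math.cos(math.radians(a)) ** 2 for a in angles}
total = sum(bevd.values())
probs = [bevd[a] / total for a in angles]

def sample_angle():
    """Sample a gantry angle with probability proportional to its BEVD score."""
    r, acc = random.random(), 0.0
    for a, p in zip(angles, probs):
        acc += p
        if r < acc:
            return a
    return angles[-1]

def cost(config):
    """Stand-in plan cost; a real system would optimize intensity profiles here."""
    return -sum(bevd[a] for a in set(config))

def anneal(n_beams=5, steps=2000, t0=1.0):
    config = [sample_angle() for _ in range(n_beams)]
    best, best_cost = list(config), cost(config)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9
        trial = list(config)
        trial[random.randrange(n_beams)] = sample_angle()  # BEVD-weighted move
        dc = cost(trial) - cost(config)
        if dc < 0 or random.random() < math.exp(-dc / t):
            config = trial
        if cost(config) < best_cost:
            best, best_cost = list(config), cost(config)
    return best, best_cost

best_cfg, best_cfg_cost = anneal()
```

The key point is in the proposal step: trial angles are drawn from the BEVD-weighted distribution, so little annealing time is wasted on directions scored as poor before the optimization starts.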

  13. Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.

    Science.gov (United States)

    Wong, Christopher Yee; Mills, James K

    2017-03-01

    Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective is to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos, with positive results, as adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper LZD technique; automation removes human error and increases the success rate of LZD. Although the proposed methods were developed for cleavage-stage embryos, they may be applied to most types of LZD procedures, embryos at different developmental stages, or non-embryonic cells.

  14. Sub-Riemannian geometry and optimal transport

    CERN Document Server

    Rifford, Ludovic

    2014-01-01

    The book provides an introduction to sub-Riemannian geometry and optimal transport and presents some of the recent progress in these two fields. The text is completely self-contained: the linear discussion, containing all the proofs of the stated results, leads the reader step by step from the notion of distribution at the very beginning to the existence of optimal transport maps for Lipschitz sub-Riemannian structure. The combination of geometry presented from an analytic point of view and of optimal transport, makes the book interesting for a very large community. This set of notes grew from a series of lectures given by the author during a CIMPA school in Beirut, Lebanon.

  15. Numerical optimization of laboratory combustor geometry for NO suppression

    International Nuclear Information System (INIS)

    Mazaheri, Karim; Shakeri, Alireza

    2016-01-01

    Highlights: • A five-step kinetics for NO and CO prediction is extracted from GRI-3.0 mechanism. • Accuracy and applicability of this kinetics for numerical optimization were shown. • Optimized geometry for a combustor was determined using the combined process. • NO emission from optimized geometry is found 10.3% lower than the basis geometry. - Abstract: In this article, geometry optimization of a jet stirred reactor (JSR) combustor has been carried out for minimum NO emissions in methane oxidation using a combined numerical algorithm based on computational fluid dynamics (CFD) and differential evolution (DE) optimization. The optimization algorithm is also used to find a fairly accurate reduced mechanism. The combustion kinetics is based on a five-step mechanism with 17 unknowns which is obtained using an optimization DE algorithm for a PSR–PFR reactor based on GRI-3.0 full mechanism. The optimization design variables are the unknowns of the five-step mechanism and the cost function is the concentration difference of pollutants obtained from the 5-step mechanism and the full mechanism. To validate the flow solver and the chemical kinetics, the computed NO at the outlet of the JSR is compared with experiments. To optimize the geometry of a combustor, the JSR combustor geometry is modeled using three parameters (i.e., design variables). An integrated approach using a flow solver and the DE optimization algorithm produces the lowest NO concentrations. Results show that the exhaust NO emission for the optimized geometry is 10.3% lower than the original geometry, while the inlet temperature of the working fluid and the concentration of O_2 are operating constraints. In addition, the concentration of CO pollutant is also much less than the original chamber.
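A minimal sketch of a differential evolution (DE/rand/1/bin) optimizer of the kind used here, with a stand-in sphere objective in place of the CFD-coupled cost; the population size, F, CR, and bounds are illustrative assumptions:

```python
import random

random.seed(1)

def de_minimize(f, bounds, np_=20, F=0.6, CR=0.9, gens=200):
    """Minimal DE/rand/1/bin sketch; f maps a parameter list to a scalar cost."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            trial = []
            jr = random.randrange(dim)          # forced crossover index
            for j in range(dim):
                if random.random() < CR or j == jr:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))   # clip to bounds
            fc = f(trial)
            if fc <= cost[i]:                   # greedy selection
                pop[i], cost[i] = trial, fc
    k = min(range(np_), key=cost.__getitem__)
    return pop[k], cost[k]

# Stand-in cost: sphere function in 3 variables. (In the paper, the cost is the
# pollutant-concentration mismatch between the 5-step and full mechanisms, or
# the CFD-predicted NO for the geometry variables.)
best, fbest = de_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```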

  16. Atomic-scale friction on stepped surfaces of ionic crystals.

    Science.gov (United States)

    Steiner, Pascal; Gnecco, Enrico; Krok, Franciszek; Budzioch, Janusz; Walczak, Lukasz; Konior, Jerzy; Szymonski, Marek; Meyer, Ernst

    2011-05-06

    We report on high-resolution friction force microscopy on a stepped NaCl(001) surface in ultrahigh vacuum. The measurements were performed on single cleavage step edges. When blunt tips are used, friction is found to increase while scanning both up and down a step edge. With atomically sharp tips, friction still increases upwards, but it decreases and even changes sign downwards. Our observations extend previous results obtained without resolving atomic features and are associated with the competition between the Schwöbel barrier and the asymmetric potential well accompanying the step edges.

  17. N-polar GaN/AlGaN/GaN metal-insulator-semiconductor high-electron-mobility transistor formed on sapphire substrate with minimal step bunching

    Science.gov (United States)

    Prasertsuk, Kiattiwut; Tanikawa, Tomoyuki; Kimura, Takeshi; Kuboya, Shigeyuki; Suemitsu, Tetsuya; Matsuoka, Takashi

    2018-01-01

    A metal-insulator-semiconductor (MIS) gate N-polar GaN/AlGaN/GaN high-electron-mobility transistor (HEMT) was fabricated on a (0001) sapphire substrate; this structure is expected to offer lower on-resistance and easier pinch-off operation than an N-polar AlGaN/GaN HEMT. To suppress the step bunching and hillocks characteristic of N-polar growth, a sapphire substrate with an off-cut angle as small as 0.8° was introduced, and by optimizing the growth conditions an N-polar GaN/AlGaN/GaN HEMT without step bunching was obtained for the first time. The previously reported anisotropy of transconductance related to the steps was eliminated, and pinch-off operation was also realized. These results indicate that this device is promising.

  18. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    W. J. Galyean; A. M. Whaley; D. L. Kelly; R. L. Boring

    2011-05-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
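The worksheet arithmetic in Steps 1–3 can be sketched as below, based on my reading of the SPAR-H method (nominal HEPs of 1E-2 for diagnosis and 1E-3 for action, a composite PSF multiplier, and an adjustment factor when three or more negative PSFs apply); the example multipliers are hypothetical:

```python
def spar_h_hep(task_type, psf_multipliers):
    """PSF-modified HEP, a sketch of the SPAR-H worksheet logic.

    task_type: 'diagnosis' (NHEP = 1e-2) or 'action' (NHEP = 1e-3).
    psf_multipliers: the eight PSF multipliers read off the worksheet;
    a multiplier > 1 marks a negative (error-increasing) PSF.
    """
    nhep = {"diagnosis": 1e-2, "action": 1e-3}[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        # Adjustment factor keeps the HEP below 1.0 when several PSFs are negative.
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(nhep * composite, 1.0)

# Hypothetical example: one negative PSF with multiplier 10 on a diagnosis task.
hep = spar_h_hep("diagnosis", [1, 1, 10, 1, 1, 1, 1, 1])
```

Dependence (Step 4) and the minimum-value cutoff (Step 5) would then further condition this value; they are not sketched here.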

  19. SPAR-H Step-by-Step Guidance

    International Nuclear Information System (INIS)

    Galyean, W.J.; Whaley, A.M.; Kelly, D.L.; Boring, R.L.

    2011-01-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  20. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    April M. Whaley; Dana L. Kelly; Ronald L. Boring; William J. Galyean

    2012-06-01

    Step-by-step guidance was developed recently at Idaho National Laboratory for the US Nuclear Regulatory Commission on the use of the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method for quantifying Human Failure Events (HFEs). This work was done to address SPAR-H user needs, specifically requests for additional guidance on the proper application of various aspects of the methodology. This paper overviews the steps of the SPAR-H analysis process and highlights some of the most important insights gained during the development of the step-by-step directions. This supplemental guidance for analysts is applicable when plant-specific information is available, and goes beyond the general guidance provided in existing SPAR-H documentation. The steps highlighted in this paper are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff.

  1. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé ; Vigneron, Antoine E.

    2013-01-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the points.
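For illustration, a much simpler O(k n^2) dynamic program solves the same min-max problem; the paper's contribution is the optimal O(n log n) algorithm, which this sketch does not reproduce:

```python
def group_cost(pts):
    """Min over v of max_i w_i * |y_i - v| for points (y_i, w_i), by ternary search."""
    def g(v):
        return max(w * abs(y - v) for y, w in pts)
    lo = min(y for y, _ in pts)
    hi = max(y for y, _ in pts)
    for _ in range(200):                      # g is convex in v
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return g((lo + hi) / 2)

def fit_step_function(points, k):
    """Min-max weighted vertical distance over step functions with at most k steps.

    points: list of (x, y, w) with w > 0.  O(k n^2) DP for illustration only.
    """
    pts = sorted(points)                      # each step covers a contiguous x-range
    n = len(pts)
    yw = [(y, w) for _, y, w in pts]
    INF = float("inf")
    # dp[j][i]: best achievable max-distance covering the first i points with j steps.
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(1, n + 1):
            for s in range(j - 1, i):
                if dp[j - 1][s] < INF:
                    c = max(dp[j - 1][s], group_cost(yw[s:i]))
                    dp[j][i] = min(dp[j][i], c)
    return min(dp[j][n] for j in range(1, k + 1))
```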

  2. Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth

    Science.gov (United States)

    Bertino, Giulia; Gura, Anna; Dawber, Matthew

    We performed a systematic study of SrRuO3 (SRO) thin films grown on TiO2-terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated step bunching formation and the evolution of the SRO film morphology as a function of the substrate step size, the growth temperature and the film thickness. The thin films were characterized using atomic force microscopy and X-ray diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters, and we observe a clearly stronger influence of the substrate step size on the evolution of the SRO film surface than of the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform "fish skin" structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.

  3. Coordination of push-off and collision determine the mechanical work of step-to-step transitions when isolated from human walking.

    Science.gov (United States)

    Soo, Caroline H; Donelan, J Maxwell

    2012-02-01

    In human walking, each transition to a new stance limb requires redirection of the center of mass (COM) velocity from one inverted pendulum arc to the next. While this can be accomplished with either negative collision work by the leading limb, positive push-off work by the trailing limb, or some combination of the two, physics-based models of step-to-step transitions predict that total positive work is minimized when the push-off and collision work are equal in magnitude. Here, we tested the importance of the coordination of push-off and collision work in determining transition work using ankle and knee joint braces to limit the ability of a leg to perform positive work on the body. To isolate transitions from other contributors to walking mechanics, participants were instructed to rock back and forth from one leg to the other, restricting motion to the sagittal plane and eliminating the need to swing the legs. We found that reduced push-off work increased the collision work required to complete the redirection of the COM velocity during each transition. A greater amount of total mechanical work was required when rocking departed from the predicted optimal coordination of step-to-step transitions, in which push-off and collision work are equal in magnitude. Our finding that transition work increases if one or both legs do not push-off with the optimal coordination may help explain the elevated metabolic cost of pathological gait irrespective of etiology. Copyright © 2011 Elsevier B.V. All rights reserved.
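The prediction that total positive work is minimized when push-off and collision are equal in magnitude can be seen in a toy quadratic model (purely illustrative, not the paper's dynamics): impulsive work scales with the square of the impulse, so if push-off supplies a fraction f of the fixed velocity redirection and the collision accounts for the rest, total positive work varies as f^2 + (1 - f)^2.

```python
# Toy model (not the paper's dynamics): the transition must redirect the COM
# velocity by a fixed amount. Push-off supplies a fraction f of the redirection,
# the collision absorbs the rest; impulsive work scales with the square of the
# impulse, and collision losses must be restored by positive work elsewhere.
def transition_work(f):
    return f ** 2 + (1.0 - f) ** 2

fs = [i / 100 for i in range(101)]
f_opt = min(fs, key=transition_work)   # minimum at f = 0.5: equal magnitudes
```

Departing from f = 0.5 in either direction (e.g. a braced leg that cannot push off) raises the total, consistent with the elevated cost the authors measured.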

  4. Multi-step wrought processing of TiAl-based alloys

    International Nuclear Information System (INIS)

    Fuchs, G.E.

    1997-04-01

    Wrought processing will likely be needed for fabrication of a variety of TiAl-based alloy structural components. Laboratory and development work has usually relied on one-step forging to produce test material, but attempts to scale up TiAl-based alloy processing have indicated that multi-step wrought processing is necessary. The purpose of this study was to examine potential multi-step processing routes, such as two-step isothermal forging and extrusion + isothermal forging. The effects of processing (I/M versus P/M), intermediate recrystallization heat treatments, and processing route on the tensile and creep properties of Ti-48Al-2Nb-2Cr alloys were examined. The results were then compared to samples from the same heats of material processed by one-step routes. Finally, by evaluating the effect of processing on microstructure and properties, optimized and potentially lower-cost processing routes could be identified.

  5. Diffusion coefficients for multi-step persistent random walks on lattices

    International Nuclear Information System (INIS)

    Gilbert, Thomas; Sanders, David P

    2010-01-01

    We calculate the diffusion coefficients of persistent random walks on lattices, where the direction of a walker at a given step depends on the memory of a certain number of previous steps. In particular, we describe a simple method which enables us to obtain explicit expressions for the diffusion coefficients of walks with a two-step memory on different classes of one-, two- and higher dimensional lattices.
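A quick Monte Carlo check of the one-step-memory case on a 1-D lattice (simpler than the paper's two-step-memory, multi-lattice results; lattice spacing and time step set to 1) against the closed form D = p/(2(1-p)):

```python
import random

random.seed(2)

def prw_diffusion(p, walkers=2000, steps=1000):
    """Monte Carlo estimate of D for a 1-D persistent random walk.

    At each step the walker keeps its previous direction with probability p
    (one-step memory).  D is estimated as <x^2> / (2 t).
    """
    msd = 0.0
    for _ in range(walkers):
        x, d = 0, random.choice((-1, 1))
        for _ in range(steps):
            if random.random() > p:
                d = -d                     # reverse direction
            x += d
        msd += x * x
    return msd / walkers / (2.0 * steps)

p = 0.7
d_est = prw_diffusion(p)
d_theory = p / (2.0 * (1.0 - p))           # (a^2 / 2 tau) * p / (1 - p)
```

For p = 0.5 the memory disappears and D reduces to the simple-walk value 1/2, a useful sanity check.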

  6. Fusion Kalman filtration with k-step delay sharing pattern

    Directory of Open Access Journals (Sweden)

    Duda Zdzisław

    2015-09-01

    A fusion hierarchical state filtration with k-step delay sharing pattern for a multisensor system is considered. A global state estimate depends on local state estimates determined by local nodes using local information. The locally available information consists of local measurements and k-step delayed global information, i.e., the global estimate sent from a central node. Local estimates are transmitted to the central node to be fused. The synthesis of the local and global filters is presented. It is shown that fusion filtration with a k-step delay sharing pattern is equivalent to optimal centralized classical Kalman filtration in which local measurements are transmitted to the central node and used to determine a global state estimate. It is proved that the k-step delay sharing pattern can reduce the covariances of the local state errors.

  7. Conventional treatment planning optimization using simulated annealing

    International Nuclear Information System (INIS)

    Morrill, S.M.; Langer, M.; Lane, R.G.

    1995-01-01

    Purpose: Simulated annealing (SA) allows for the implementation of realistic biological and clinical cost functions into treatment plan optimization. However, a drawback to the clinical implementation of SA optimization is that large numbers of beams appear in the final solution, some with insignificant weights, preventing the delivery of these optimized plans using conventional (limited to a few coplanar beams) radiation therapy. A preliminary study suggested two promising algorithms for restricting the number of beam weights. The purpose of this investigation was to compare these two algorithms using our current SA algorithm, with the aim of producing an algorithm that allows clinically useful radiation therapy treatment planning optimization. Method: Our current SA algorithm, Variable Stepsize Generalized Simulated Annealing (VSGSA), was modified with two algorithms to restrict the number of beam weights in the final solution. The first algorithm selected combinations of a fixed number of beams from the complete solution space at each iterative step of the optimization process. The second reduced the allowed number of beams by a factor of two at periodic steps during the optimization process until only the specified number of beams remained. Results of optimizing beam weights and angles with these algorithms were compared using a standard cadre of abdominal cases. The solution space was defined as a set of 36 custom-shaped open and wedge-filtered fields at 10 deg. increments with a constant target volume margin of 1.2 cm. For each case a clinically accepted cost function, the minimum tumor dose, was maximized subject to a set of normal tissue binary dose-volume constraints. For this study, the optimized plan was restricted to four (4) fields suitable for delivery with conventional therapy equipment.
Results: The table gives the mean value of the minimum target dose obtained for each algorithm averaged over 5 different runs and the comparable manual treatment

  8. Practical implementation of optimal management strategies in conservation programmes: a mate selection method

    Directory of Open Access Journals (Sweden)

    Fernández, J.

    2001-12-01

    The maintenance of genetic diversity is, from a genetic point of view, a key objective of conservation programmes. The selection of individuals contributing offspring and the choice of mating scheme are the steps through which managers can control genetic diversity, especially in 'ex situ' programmes. Previous studies have shown that the optimal management strategy is to look for the parents' contributions that yield minimum group coancestry (the overall probability of identity by descent in the population) and then to arrange mating couples following minimum pairwise coancestry. However, physiological constraints make it necessary to account for mating restrictions when deciding the contributions; these should therefore be implemented in a single step along with the mating plan. In the present paper, a single-step method is proposed to optimise the management of a conservation programme when restrictions on the mating scheme exist. The performance of the method is tested by computer simulation. The strategy turns out to be as efficient as the two-step method, regarding both the genetic diversity preserved and the fitness of the population.
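The group-coancestry objective can be sketched with a toy brute force (the coancestry matrix, population size, and exhaustive search are illustrative assumptions; a practical implementation would use a stochastic optimizer and fold in the mating restrictions, as the paper proposes):

```python
from itertools import product

# Toy sketch: pick integer offspring contributions per candidate parent,
# summing to N, that minimize the group coancestry x' A x.
# A is a small, hypothetical coancestry matrix: two unrelated full-sib pairs.
A = [
    [0.50, 0.25, 0.00, 0.00],
    [0.25, 0.50, 0.00, 0.00],
    [0.00, 0.00, 0.50, 0.25],
    [0.00, 0.00, 0.25, 0.50],
]
N = 4  # total offspring contributions

def group_coancestry(counts):
    x = [c / N for c in counts]        # contribution proportions
    return sum(x[i] * A[i][j] * x[j] for i in range(4) for j in range(4))

best = min(
    (c for c in product(range(N + 1), repeat=4) if sum(c) == N),
    key=group_coancestry,
)
```

As expected, spreading contributions evenly across the unrelated families minimizes the group coancestry; concentrating them in one family raises it.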

  9. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea

    2014-10-31

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  10. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun

    2014-01-01

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  11. A multi-cycle optimization approach for low leakage in-core fuel management

    International Nuclear Information System (INIS)

    Cheng Pingdong; Shen Wei

    1999-01-01

    A new approach was developed to optimize pressurized water reactor (PWR) low-leakage multi-cycle reload core design. The multi-cycle optimization process is carried out in three steps. The first step is a linear programming search for the optimum power sharing distribution and optimum cycle length distribution over the successive cycles, to yield the maximum multi-cycle total cycle length. In the second step, the fuel arrangement and burnable poison (BP) assignment are decoupled by using the Haling power distribution, and the optimum fuel arrangement is determined at EOL in the absence of all BPs by employing a linear programming or direct search method, with an objective function that forces the calculated cycle length to be as close as possible to the optimum single-cycle length obtained in the first step, and with the optimum power sharing distribution as an additional constraint during optimization. In the third step, the BP assignment is optimized by the Flexible Tolerance Method (FTM) or a linear programming method using the number of BP rods as the control variable. The technique employed in the second and third steps is the usual decoupling method used in low-leakage core design; the first step was developed specially for multi-cycle optimization design and is discussed in detail. Based on the proposed method, a computer code MCYCO was written and tested for Qinshan Nuclear Power Plant (QNPP) low-leakage reload core design. The multi-cycle optimization method developed, together with the program MCYCO, provides an applicable tool for solving the PWR low-leakage reload core design problem

  12. Solving point reactor kinetic equations by time step-size adaptable numerical methods

    International Nuclear Information System (INIS)

    Liao Chaqing

    2007-01-01

    Based on an analysis of the effects of time step-size on numerical solutions, this paper shows the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) are introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The point reactor kinetic equations (PRKEs) were solved by the implicit Euler method with step-sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step-size and on the accuracy of the solutions. With suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, were also studied and discussed. (authors)
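A generic step-doubling scheme illustrates the error-versus-step-size adaptation idea (this is a sketch, not the paper's exact Two-Step Method; the tolerance, safety factor, and growth limits are arbitrary choices): one step of size h is compared with two steps of size h/2, and h is adjusted to hold the estimated local error near the tolerance.

```python
import math

def adaptive_euler(f, t0, y0, t_end, h0=0.1, tol=1e-5):
    """Explicit Euler with step-doubling error control (illustrative sketch).

    The local error is estimated by comparing one step of size h with two
    steps of size h/2; the step is accepted when the estimate meets tol.
    """
    t, y, h = t0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = y + h * f(t, y)                      # one step of size h
        y_half = y + (h / 2) * f(t, y)                # two steps of size h/2
        y_half = y_half + (h / 2) * f(t + h / 2, y_half)
        err = abs(y_half - y_full)                    # O(h^2) local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, 2 * y_half - y_full         # accept, with extrapolation
        # Adapt h from the error/step-size relation (Euler local error ~ h^2).
        h *= 0.9 * min(2.0, max(0.1, (tol / (err + 1e-16)) ** 0.5))
    return y

# Stand-in scalar problem dy/dt = -y, y(0) = 1; exact solution exp(-t).
y1 = adaptive_euler(lambda t, y: -y, 0.0, 1.0, 1.0)
```

The same accept/reject structure carries over to the implicit Euler method the paper uses; only the step formula changes.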

  13. A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization

    International Nuclear Information System (INIS)

    Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.

    2016-01-01

    Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing Keff, minimizing PPFs and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms or computational intelligence methods have been developed to optimize the reactor core loading pattern. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions: it uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In this work, in the first step, the GSA method is compared with other meta-heuristic algorithms on Shekel's Foxholes problem. In the second step, to find the best core, the GSA algorithm is applied to three PWR test cases including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimizations with the following goals are considered: increasing the multiplication factor (Keff), decreasing the power peaking factor (PPF), and flattening the power density. Notably, for the neutronic calculations the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA algorithm has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
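A minimal GSA sketch on a stand-in continuous objective (the agent count, G schedule, and sphere test function are illustrative assumptions, and refinements such as the shrinking Kbest set are omitted); it shows the mass-from-fitness and gravity-like force updates the abstract describes:

```python
import math, random

random.seed(3)

def gsa_minimize(f, bounds, agents=20, iters=200, g0=100.0, alpha=20.0):
    """Minimal Gravitational Search Algorithm sketch: masses from fitness,
    gravity-like forces between agents, Newtonian motion. Illustrative only."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(x) for x in pos]
        worst, bestv = max(fit), min(fit)
        if bestv < best_f:
            best_f = bestv
            best_x = list(pos[fit.index(bestv)])
        # Masses: better (lower) fitness -> larger normalized mass.
        m = [(worst - fi) / (worst - bestv + 1e-12) + 1e-12 for fi in fit]
        s = sum(m)
        M = [mi / s for mi in m]
        G = g0 * math.exp(-alpha * t / iters)   # gravitational "constant" decays
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                rij = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    # a_i = sum_j rand * G * M_j * (x_j - x_i) / R_ij  (M_i cancels)
                    acc[d] += random.random() * G * M[j] * (pos[j][d] - pos[i][d]) / rij
            for d in range(dim):
                vel[i][d] = random.random() * vel[i][d] + acc[d]
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
    return best_x, best_f

best_x, best_f = gsa_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```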

  14. Effects of Sheet Resistance on mc-Si Selective Emitter Solar Cells Using Laser Opening and One-Step Diffusion

    Directory of Open Access Journals (Sweden)

    Sheng-Shih Wang

    2015-01-01

    In order to simplify the process procedure and improve the conversion efficiency (η), we present new steps of laser opening and one-step POCl3 diffusion to fabricate selective emitter (SE) solar cells, in which heavily doped regions (HDR) and lightly doped regions (LDR) are formed simultaneously. For the HDR, we divided six cells into two groups for POCl3 diffusion, with sheet resistance (RS) of 40 Ω/sq (group A) and 50 Ω/sq (group B). The dry oxidation duration at a temperature of 850°C was 18, 25, and 35 min for the three cells in each group. This created six SE samples with different RS pairings for the HDR and LDR. The optimal cell (sample SE2), with RS values of 40/81 Ω/sq in the HDR/LDR, showed the best η of 16.20%, an open-circuit voltage (VOC) of 612.52 mV, and a fill factor (FF) of 75.83%. The improvement ratios are 1.57% for η and 14.32% for external quantum efficiency (EQE) as compared with the two-step diffusion process of our previous study. Moreover, the one-step laser opening process, together with omitting the step of removing the damage caused by laser ablation, reduces chemical pollution, making this an eco-friendly process for industrial-scale production.

  15. Designed optimization of a single-step extraction of fucose-containing sulfated polysaccharides from Sargassum sp

    DEFF Research Database (Denmark)

    Ale, Marcel Tutor; Mikkelsen, Jørn Dalgaard; Meyer, Anne S.

    2012-01-01

    Fucose-containing sulfated polysaccharides can be extracted from the brown seaweed Sargassum sp. It has been reported that fucose-rich sulfated polysaccharides from brown seaweeds exert different beneficial biological activities, including anti-inflammatory, anticoagulant, and anti-viral effects. Classical extraction of fucose-containing sulfated polysaccharides from brown seaweed species typically involves extended, multiple-step, hot acid or CaCl2 treatments, each step lasting several hours. In this work, we systematically examined the influence of acid concentration (HCl), time, and temperature on the yield of fucose-containing sulfated polysaccharides (FCSPs) in statistically designed two-step and single-step multifactorial extraction experiments. All extraction factors had significant effects on the FCSP yield, with temperature and time exerting positive effects.

  16. Taking the First Step towards Entrenching Mental Health in the ...

    African Journals Online (AJOL)

    Taking the First Step towards Entrenching Mental Health in the Workplace: ... of optimal employee mental health to sustainable human capital development in the ... can be mobilized to promote the entrenchment of workplace mental health.

  17. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
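    The core idea above (the smallest element's CFL limit throttles the whole mesh unless each element gets its own step) can be sketched as follows. This is a toy grouping with made-up element sizes and wave speed, not the paper's LTS-Newmark scheme or its SPECFEM implementation:

```python
import math

def cfl_dt(h, v, c=0.5):
    # CFL-limited stable time-step for an element of size h and wave speed v.
    return c * h / v

def lts_levels(dts):
    # Assign each element a power-of-two refinement level relative to the
    # largest stable step: level-p elements take 2**p substeps per coarse
    # step (a simplified multilevel grouping, not the paper's scheme).
    dt_max = max(dts)
    return [max(0, math.ceil(math.log2(dt_max / dt))) for dt in dts]

# Hypothetical mesh with a 100x element-size contrast, wave speed 3000 m/s.
element_sizes = [100.0, 50.0, 10.0, 1.0]
dts = [cfl_dt(h, 3000.0) for h in element_sizes]
levels = lts_levels(dts)
# Without LTS every element must advance with the smallest dt; with LTS
# only the finest (level-7) elements take 2**7 = 128 substeps per coarse step.
```
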

  18. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf

    2016-01-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  19. Newmark local time stepping on high-performance computing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)

    2017-04-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  20. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    Science.gov (United States)

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which influences the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, applied an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, accurate and robust registration method and have obtained satisfactory registration results.

  1. Fuel management optimization for a PWR

    International Nuclear Information System (INIS)

    Dumas, M.; Robeau, D.

    1981-04-01

    This study is aimed to optimize the refueling pattern of a PWR. Two methods are developed, they are based on a linearized form of the optimization problem. The first method determines a feasible solution in two steps; in the first one the original problem is replaced by a relaxed one which is solved by the Method of Approximation Programming. The second step is based on the Branch and Bound method to find the feasible solution closest to the solution obtained in the first step. The second method starts from a given refueling pattern and tries to improve this pattern by the calculation of the effects of 2 by 2, 3 by 3 and 4 by 4 permutations on the objective function. Numerical results are given for a typical PWR refueling using the two methods
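    The second method's pairwise-permutation improvement can be sketched as a greedy 2-by-2 swap search. The objective here is a toy stand-in; in the paper the effect of each permutation on the objective function would come from a core-physics calculation:

```python
def improve_by_swaps(pattern, objective):
    # Greedy 2-by-2 permutation search: try every pairwise swap of positions
    # and keep any swap that improves the objective, repeating until no
    # swap helps (3-by-3 and 4-by-4 permutations would extend this loop).
    best = list(pattern)
    best_val = objective(best)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(i + 1, len(best)):
                best[i], best[j] = best[j], best[i]
                val = objective(best)
                if val > best_val:
                    best_val, improved = val, True
                else:
                    best[i], best[j] = best[j], best[i]  # undo the swap
    return best, best_val

# Hypothetical objective: penalize distance from a target loading order.
score = lambda p: -sum(abs(a - i) for i, a in enumerate(p))
pattern, value = improve_by_swaps([2, 0, 1], score)
```
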

  2. Optimal design of the cable metro with unified intermediate supports

    Directory of Open Access Journals (Sweden)

    Lagerev A.V.

    2017-12-01

    Full Text Available In this article, the problem of conditional nonlinear technical-economic optimization of the distance between intermediate supports, uniform in height, during the design of cable metro lines in a highly urbanized city environment was formulated and solved. The optimization problem involves a single-criterion objective function that expresses the cost of construction of the cable metro line (total cost of intermediate supports and their foundations, traction and carrying steel cables, and technical equipment). The specified objective function is minimized by finding the optimal combination of the distance between intermediate supports and the tension of the carrying ropes, accounting for constructive, modal, structural and planning constraints in the form of nonlinear inequalities. The optimization algorithm was based on a direct optimization method of the Hooke-Jeeves type, modified to account for the need to vary the height of intermediate supports with a constant step equal to the step of unification. When constructing the objective function, three possible forms of sagging of the carrying ropes were considered, which can be realized for various values of their tension. An analysis was made of the influence of the unification step and the minimum size of intermediate supports on their optimum spacing, the cost of intermediate supports, and the cost of 1 km of cable metro line for different values of the angle of the longitudinal slope of the surface relief along the cable metro line. The graph of the height of unified supports versus the angle of the longitudinal slope of the surface relief has a discrete-step character. As the unification step increases, the discreteness increases: the width of the range of angles of the longitudinal slope within which the height of the supports remains constant increases. The graph of the installation step of unified supports along the cable metro line versus the angle of the longitudinal

  3. Photon attenuation correction technique in SPECT based on nonlinear optimization

    International Nuclear Information System (INIS)

    Suzuki, Shigehito; Wakabayashi, Misato; Okuyama, Keiichi; Kuwamura, Susumu

    1998-01-01

    Photon attenuation correction in SPECT was made using nonlinear optimization theory, in which an optimum image is searched for so that the sum of squared errors between observed and reprojected projection data is minimized. This correction technique consists of optimization and step-width algorithms, which determine at each iteration a pixel-by-pixel directional value of search and its step-width, respectively. We used the conjugate gradient and quasi-Newton methods as the optimization algorithms, and the Curry rule and the quadratic function method as the step-width algorithms. Statistical fluctuations in the corrected image due to statistical noise in the emission projection data grew as the iterations increased, depending on the combination of optimization and step-width algorithms. To suppress them, smoothing of the directional values was introduced. Computer experiments and clinical applications showed a pronounced reduction in statistical fluctuations of the corrected image for all combinations. Combinations using the conjugate gradient method were superior in noise characteristics and computation time. The use of that method with the quadratic function method was optimal if noise properties were regarded as important. (author)
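    The optimization-plus-step-width loop can be illustrated as below. Steepest descent stands in for the paper's conjugate-gradient/quasi-Newton directions, and the step-width is the exact minimizer along the direction (what the quadratic function method reduces to for a quadratic objective); the tiny system matrix is a hypothetical stand-in for a SPECT projector:

```python
def reproject(A, f):
    # Forward-project image f through a (hypothetical) system matrix A.
    return [sum(a * x for a, x in zip(row, f)) for row in A]

def correct(A, p, iters=50):
    # Minimize the sum of squared errors between observed projections p
    # and reprojected data A*f.
    n = len(A[0])
    f = [0.0] * n
    for _ in range(iters):
        r = [pi - qi for pi, qi in zip(p, reproject(A, f))]          # residual
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        Ag = reproject(A, g)
        denom = sum(x * x for x in Ag)
        if denom == 0.0:
            break
        alpha = sum(ri * x for ri, x in zip(r, Ag)) / denom          # step-width
        f = [fj + alpha * gj for fj, gj in zip(f, g)]
    return f

A = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]   # toy 3-ray, 2-pixel projector
p = reproject(A, [1.0, 2.0])               # projections of the true image
f = correct(A, p)                          # recovers approximately [1.0, 2.0]
```
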

  4. Efficient One-Step Fusion PCR Based on Dual-Asymmetric Primers and Two-Step Annealing

    DEFF Research Database (Denmark)

    Liu, Yilan; Chen, Jinjin; Thygesen, Anders

    2018-01-01

    Gene splicing by fusion PCR is a versatile and widely used methodology, especially in synthetic biology. We here describe a rapid method for splicing two fragments by one-round fusion PCR with the dual-asymmetric primers and two-step annealing (ODT) method. During the process, the asymmetric...... intermediate fragments were generated in the early stage. Thereafter, they were hybridized in the subsequent cycles to serve as template for the target full-length product. Process parameters such as primer ratio, elongation temperature and cycle number were optimized. In addition, the fusion products...

  5. A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review.

    Science.gov (United States)

    Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha

    2017-04-01

    To systematically review the current literature on simulation in healthcare, including the structured steps in the emergency healthcare sector, and to propose a framework for simulation in the emergency department. For collecting the data, the PubMed and ACM databases were used between the years 2003 and 2013. The inclusion criteria were English-language articles available in full text with the closest objectives, from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. The studies focused on the reduction of waiting time and patient stay, optimization of resource allocation, creation of crisis and maximum-demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after an expert's evaluation. The 10-step approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems.

  6. Optimization of Interior Permanent Magnet Motor by Quality Engineering and Multivariate Analysis

    Science.gov (United States)

    Okada, Yukihiro; Kawase, Yoshihiro

    This paper describes a method of optimization based on the finite element method, using quality engineering and multivariate analysis as the optimization techniques. The method consists of two steps: in Step 1, the influence of the parameters on the output is obtained quantitatively; in Step 2, the number of calculations by the FEM can be cut down. That is, the optimal combination of the design parameters which satisfies the required characteristics can be searched for efficiently. In addition, this method is applied to the design of an IPM motor to reduce the torque ripple. The final shape maintains the average torque and cuts the torque ripple by 65%. Furthermore, the amount of permanent magnet material can be reduced.

  7. Optimization of single channel glazed photovoltaic thermal (PVT) array using Evolutionary Algorithm (EA) and carbon credit earned by the optimized array

    International Nuclear Information System (INIS)

    Singh, Sonveer; Agrawal, Sanjay; Gadh, Rajit

    2015-01-01

    Highlights: • Optimization of SCGPVT array using an Evolutionary Algorithm. • The overall exergy gain is maximized with an Evolutionary Algorithm. • Annual performance has been evaluated for New Delhi (India). • Results improve on the model given in the literature. • Carbon credit analysis has been done. - Abstract: In this paper, work is carried out in three steps. In the first step, optimization of a single channel glazed photovoltaic thermal (SCGPVT) array has been done with an Evolutionary Algorithm (EA), with the overall exergy gain as the objective function of the SCGPVT array. For maximization of overall exergy gain, a total of seven design variables have been optimized: length of the channel (L), mass flow rate of flowing fluid (m_F), velocity of flowing fluid (V_F), convective heat transfer coefficient through the tedlar (U_T), overall heat transfer coefficient between solar cell and ambient through the glass cover (U_SCAG), overall back loss heat transfer coefficient from flowing fluid to ambient (U_FA) and convective heat transfer coefficient of tedlar (h_T). It has been observed that the instant overall exergy gain obtained from the optimized system is 1.42 kW h, which is 87.86% more than the overall exergy gain of the un-optimized system given in the literature. In the second step, the overall exergy gain and overall thermal gain of the SCGPVT array have been evaluated annually; there are 69.52% and 88.05% improvements in annual overall exergy gain and annual overall thermal gain, respectively, over the un-optimized system for the same input irradiance and ambient temperature. In the third step, the carbon credit earned by the optimized SCGPVT array has also been evaluated, as per the norms of the Kyoto Protocol, for Bangalore climatic conditions.

  8. A Generic Methodology for Superstructure Optimization of Different Processing Networks

    DEFF Research Database (Denmark)

    Bertran, Maria-Ona; Frauzem, Rebecca; Zhang, Lei

    2016-01-01

    In this paper, we propose a generic computer-aided methodology for synthesis of different processing networks using superstructure optimization. The methodology can handle different network optimization problems of various application fields. It integrates databases with a common data architecture......, a generic model to represent the processing steps, and appropriate optimization tools. A special software interface has been created to automate the steps in the methodology workflow, allow the transfer of data between tools and obtain the mathematical representation of the problem as required...

  9. A New Method for Horizontal Axis Wind Turbine (HAWT) Blade Optimization

    Directory of Open Access Journals (Sweden)

    Mohammadreza Mohammadi

    2016-02-01

    Full Text Available Iran has great potential for wind energy. This paper presents the optimization of 7 wind turbine blades for small and medium scales under the determined wind conditions of the Zabol site, Iran, where the average wind speed is taken as 7 m/s. The wind turbines considered are 3-bladed, and the radii of the 7 case-study turbine blades are 4.5 m, 6.5 m, 8 m, 9 m, 10 m, 15.5 m and 20 m. As the first step, an initial design is performed using one airfoil (NACA 63-215) across the blade. In the next step, every blade is divided into three sections: the first 20% of the blade is considered as the root, the last 5% as the tip, and the rest of the blade as the mid part. Providing the necessary input data, 43 suitable airfoils for wind turbines are extracted and their experimental data are entered into the optimization process. The three variables in this optimization problem are airfoil type, attack angle and chord, and the objective function is maximum output torque. A MATLAB code was written for the design and optimization of the blade, which was validated against a previous experimental work. In addition, a comparison was made to show the effect of optimization with two variables (airfoil type and attack angle) versus optimization with three variables (airfoil type, attack angle and chord) on the increase in output torque. The results of this research show a dramatic increase in comparison to the initially designed blade with one airfoil: two-variable optimization yields a 7.7% to 22.27% enhancement, and three-variable optimization a 17.91% up to 24.48% rise in output torque. Article History: Received Oct 15, 2015; Received in revised form January 2, 2016; Accepted January 14, 2016; Available online. How to Cite This Article: Mohammadi, M., Mohammadi, A. and Farahat, S. (2016) A New Method for Horizontal Axis Wind Turbine (HAWT) Blade Optimization. Int. Journal of Renewable Energy Development, 5(1), 1-8. http://dx.doi.org/10.14710/ijred.5.1.1-8
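    The three-variable search described above (airfoil type, attack angle, chord, maximizing output torque) can be sketched as an exhaustive search over the discrete design space. The torque function here is a made-up toy; the paper's objective would come from blade-element aerodynamics with the airfoils' experimental data:

```python
import itertools

def optimize_blade(airfoils, angles, chords, torque):
    # Exhaustive search over the three design variables, keeping the
    # combination with maximum output torque.
    return max(itertools.product(airfoils, angles, chords),
               key=lambda design: torque(*design))

# Hypothetical torque model: peaks at attack angle 5 deg and chord 0.3 m,
# and prefers airfoil index 1 over 0.
toy_torque = lambda airfoil, angle, chord: airfoil - (angle - 5)**2 - (chord - 0.3)**2
best = optimize_blade([0, 1], [3, 5, 7], [0.2, 0.3], toy_torque)
```
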

  10. Transportation package design using numerical optimization

    International Nuclear Information System (INIS)

    Harding, D.C.; Witkowski, W.R.

    1992-01-01

    The design of structures and engineering systems has always been an iterative process whose complexity was dependent upon the boundary conditions, constraints and available analytical tools. Transportation packaging design is no exception with structural, thermal and radiation shielding constraints based on regulatory hypothetical accident conditions. Transportation packaging design is often accomplished by a group of specialists, each designing a single component based on one or more simple criteria, pooling results with the group, evaluating the "pooled" design, and then reiterating the entire process until a satisfactory design is reached. The manual iterative methods used by the designer/analyst can be summarized in the following steps: design the part, analyze the part, interpret the analysis results, modify the part, and re-analyze the part. The inefficiency of this design practice and the frequently conservative result suggests the need for a more structured design methodology, which can simultaneously consider all of the design constraints. Numerical optimization is a structured design methodology whose maturity in development has allowed it to become a primary design tool in many industries. The purpose of this overview is twofold: first, to outline the theory and basic elements of numerical optimization; and second, to show how numerical optimization can be applied to the transportation packaging industry and used to increase efficiency and safety of radioactive and hazardous material transportation packages. A more extensive review of numerical optimization and its applications to radioactive material transportation package design was performed previously by the authors (Witkowski and Harding 1992). A proof-of-concept Type B package design is also presented as a simplified example of potential improvements achievable using numerical optimization in the design process.

  11. Product analysis illuminates the final steps of IES deletion in Tetrahymena thermophila.

    Science.gov (United States)

    Saveliev, S V; Cox, M M

    2001-06-15

    DNA sequences (IES elements) eliminated from the developing macronucleus in the ciliate Tetrahymena thermophila are released as linear fragments, which have now been detected and isolated. A PCR-mediated examination of fragment end structures reveals three types of strand scission events, reflecting three steps in the deletion process. New evidence is provided for two steps proposed previously: an initiating double-stranded cleavage, and strand transfer to create a branched deletion intermediate. The fragment ends provide evidence for a previously uncharacterized third step: the branched DNA strand is cleaved at one of several defined sites located within 15-16 nucleotides of the IES boundary, liberating the deleted DNA in a linear form.

  12. Some optimizations of the animal code

    International Nuclear Information System (INIS)

    Fletcher, W.T.

    1975-01-01

    Optimizing techniques were performed on a version of the ANIMAL code (MALAD1B) at the source-code (FORTRAN) level. Sample optimizing techniques and operations used in MALADOP--the optimized version of the code--are presented, along with a critique of some standard CDC 7600 optimizing techniques. The statistical analysis of total CPU time required for MALADOP and MALAD1B shows a run-time saving of 174 msec (almost 3 percent) in the code MALADOP during one time step

  13. Kalman Filtering for Discrete Stochastic Systems with Multiplicative Noises and Random Two-Step Sensor Delays

    Directory of Open Access Journals (Sweden)

    Dongyan Chen

    2015-01-01

    Full Text Available This paper is concerned with the optimal Kalman filtering problem for a class of discrete stochastic systems with multiplicative noises and random two-step sensor delays. Three Bernoulli distributed random variables with known conditional probabilities are introduced to characterize the phenomena of the random two-step sensor delays which may happen during the data transmission. By using the state augmentation approach and innovation analysis technique, an optimal Kalman filter is constructed for the augmented system in the sense of the minimum mean square error (MMSE. Subsequently, the optimal Kalman filtering is derived for corresponding augmented system in initial instants. Finally, a simulation example is provided to demonstrate the feasibility and effectiveness of the proposed filtering method.
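    The MMSE predict/update cycle that the paper builds on can be sketched for a scalar system. The paper's filter augments the state with Bernoulli delay indicators to handle random two-step sensor delays; this sketch shows only the standard no-delay core that it extends, with made-up noise parameters:

```python
def kalman_step(x, P, z, a=1.0, h=1.0, q=0.01, r=0.25):
    # One predict/update cycle of a scalar Kalman filter (MMSE estimate):
    # state model x' = a*x + w, Var(w) = q; measurement z = h*x + v, Var(v) = r.
    x_pred = a * x                            # state prediction
    P_pred = a * P * a + q                    # covariance prediction
    K = P_pred * h / (h * P_pred * h + r)     # Kalman gain
    x_new = x_pred + K * (z - h * x_pred)     # measurement update (innovation)
    P_new = (1.0 - K * h) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
x, P = kalman_step(x, P, z=1.0)  # estimate moves toward the measurement,
                                 # covariance shrinks
```
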

  14. Optimal planning of integrated multi-energy systems

    DEFF Research Database (Denmark)

    van Beuzekom, I.; Gibescu, M.; Pinson, Pierre

    2017-01-01

    In this paper, a mathematical approach for the optimal planning of integrated energy systems is proposed. In order to address the challenges of future, RES-dominated energy systems, the model deliberates between the expansion of traditional energy infrastructures, the integration...... and sustainability goals for 2030 and 2045. Optimal green- and brownfield designs for a district's future integrated energy system are compared using a one-step, as well as a two-step planning approach. As expected, the greenfield designs are more cost efficient, as their results are not constrained by the existing...

  15. Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, M.; Borm, P.E.M.; Quant, M.

    2014-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.

  16. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration

  17. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration

  18. Market-Based and System-Wide Fuel Cycle Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Paul Philip Hood [Univ. of Wisconsin, Madison, WI (United States); Scopatz, Anthony [Univ. of South Carolina, Columbia, SC (United States); Gidden, Matthew [Univ. of Wisconsin, Madison, WI (United States); Carlsen, Robert [Univ. of Wisconsin, Madison, WI (United States); Mouginot, Baptiste [Univ. of Wisconsin, Madison, WI (United States); Flanagan, Robert [Univ. of South Carolina, Columbia, SC (United States)

    2017-06-13

    This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.

  19. Market-Based and System-Wide Fuel Cycle Optimization

    International Nuclear Information System (INIS)

    Wilson, Paul Philip Hood; Scopatz, Anthony; Gidden, Matthew; Carlsen, Robert; Mouginot, Baptiste; Flanagan, Robert

    2017-01-01

    This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.

  20. Step out-step in sequencing games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2015-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,

  1. Optimal lag in dynamical investments

    OpenAIRE

    Serva, M.

    1998-01-01

    A portfolio of different stocks and a risk-less security whose composition is dynamically maintained stable by trading shares at any time step leads to a growth of the capital with a nonrandom rate. This is the key for the theory of optimal-growth investment formulated by Kelly. In presence of transaction costs, the optimal composition changes and, more important, it turns out that the frequency of transactions must be reduced. This simple observation leads to the definition of an optimal lag...
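    Kelly's optimal-growth rule referenced above has a closed form for the simplest case, a repeated binary bet; this frictionless baseline is what transaction costs perturb (the paper's point is that costs change the optimal composition and reduce the trading frequency):

```python
def kelly_fraction(p, b):
    # Kelly-optimal fraction of capital to wager each step on a bet paying
    # b-to-1 that wins with probability p; maximizes the long-run log-growth
    # rate of capital in the frictionless (no transaction cost) setting.
    return p - (1.0 - p) / b

f_star = kelly_fraction(0.6, 1.0)  # even-money bet, 60% win probability -> 0.2
```
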

  2. Stepped-to-dart Leaders in Cloud-to-ground Lightning

    Science.gov (United States)

    Stolzenburg, M.; Marshall, T. C.; Karunarathne, S.; Karunarathna, N.; Warner, T.; Orville, R. E.

    2013-12-01

Using time-correlated high-speed video (50,000 frames per second) and fast electric field change (5 MegaSamples per second) data for lightning flashes in East-central Florida, we describe an apparently rare type of subsequent leader: a stepped leader that finds and follows a previously used channel. The observed 'stepped-to-dart leaders' occur in three natural negative ground flashes. Stepped-to-dart leader connection altitudes are 3.3, 1.6 and 0.7 km above ground in the three cases. Prior to the stepped-to-dart connection, the advancing leaders have properties typical of stepped leaders. After the connection, the behavior changes almost immediately (within 40-60 μs) to dart or dart-stepped leader, with larger amplitude E-change pulses and faster average propagation speeds. In this presentation, we will also describe the upward luminosity after the connection in the prior return stroke channel and in the stepped leader path, along with properties of the return strokes and other leaders in the three flashes.

  3. Stepping motors a guide to theory and practice

    CERN Document Server

Acarnley, Paul

    2002-01-01

    This book provides an introductory text which will enable the reader to both appreciate the essential characteristics of stepping motor systems and understand how these characteristics are being exploited in the continuing development of new motors, drives and controllers. A basic theoretical approach relating to the more significant aspects of performance is presented, although it is assumed throughout that the reader has no previous experience of electrical machines and is primarily interested in the applications of stepping motors.

  4. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control flow during a single step of the application, independent of the other steps, on the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps.
In this dissertation, we present novel machine learning based optimization techniques to address

  5. Exploring chemical reaction mechanisms through harmonic Fourier beads path optimization.

    Science.gov (United States)

    Khavrutskii, Ilja V; Smith, Jason B; Wallqvist, Anders

    2013-10-28

Here, we apply the harmonic Fourier beads (HFB) path optimization method to study chemical reactions involving covalent bond breaking and forming on quantum mechanical (QM) and hybrid QM/molecular mechanical (QM/MM) potential energy surfaces. To improve efficiency of the path optimization on such computationally demanding potentials, we combined HFB with conjugate gradient (CG) optimization. The combined CG-HFB method was used to study two biologically relevant reactions, namely, L- to D-alanine amino acid inversion and alcohol acylation by amides. The optimized paths revealed several unexpected reaction steps in the gas phase. For example, on the B3LYP/6-31G(d,p) potential, we found that alanine inversion proceeded via previously unknown intermediates, 2-iminopropane-1,1-diol and 3-amino-3-methyloxiran-2-ol. The CG-HFB method accurately located transition states, aiding in the interpretation of complex reaction mechanisms. Thus, on the B3LYP/6-31G(d,p) potential, the gas phase activation barriers for the inversion and acylation reactions were 50.5 and 39.9 kcal/mol, respectively. These barriers determine the spontaneous loss of amino acid chirality and cleavage of peptide bonds in proteins. We conclude that the combined CG-HFB method further advances QM and QM/MM studies of reaction mechanisms.

  6. A two-step method for developing a control rod program for boiling water reactors

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1992-01-01

This paper reports on a two-step method that is established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, the BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated at step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method gained cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.

  7. Maximizing Efficiency in Two-step Solar-thermochemical Fuel Production

    Energy Technology Data Exchange (ETDEWEB)

    Ermanoski, I. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-05-01

Widespread solar fuel production depends on its economic viability, largely driven by the solar-to-fuel conversion efficiency. In this paper, the material and energy requirements in two-step solar-thermochemical cycles are considered. The need for advanced redox-active materials is demonstrated by considering the oxide mass flow requirements at large scale. Two approaches are also identified for maximizing the efficiency: optimizing reaction temperatures, and minimizing the pressure in the thermal reduction step by staged thermal reduction. The results show that each approach individually, and especially the two in conjunction, yields significant efficiency gains.

  8. Effects of walking speed on the step-by-step control of step width.

    Science.gov (United States)

    Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C

    2018-02-08

Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as the R² magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.
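The R²-based stabilization measure can be sketched as follows. This is a generic least-squares illustration with synthetic data, not the authors' processing pipeline; the predictor layout and effect sizes are invented.

```python
import numpy as np

def step_width_r2(pelvis_state, step_widths):
    """Proportion of step-width variance explained by pelvis mechanics.

    pelvis_state: (n_steps, k) array of predictors (e.g. mediolateral pelvis
    position and velocity at some phase of the step).
    step_widths: (n_steps,) array of the upcoming step widths.
    """
    X = np.column_stack([np.ones(len(step_widths)), pelvis_state])  # add intercept
    beta, *_ = np.linalg.lstsq(X, step_widths, rcond=None)
    residuals = step_widths - X @ beta
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((step_widths - step_widths.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic check: step width partly determined by pelvis velocity plus noise.
rng = np.random.default_rng(1)
state = rng.normal(size=(200, 2))
widths = 0.12 + 0.05 * state[:, 1] + 0.01 * rng.normal(size=200)
r2 = step_width_r2(state, widths)
```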

  9. Overcoming the hurdles of multi-step targeting (MST) for effective radioimmunotherapy of solid tumors

    International Nuclear Information System (INIS)

    Larson, Steven M.; Cheung, Nai-Kong

    2009-01-01

    The 4 specific aims of this project are: (1) Optimization of MST to increase tumor uptake; (2) Antigen heterogeneity; (3) Characterization and reduction of renal uptake; and (4) Validation in vivo of optimized MST targeted therapy. This proposal focussed upon optimizing multistep immune targeting strategies for the treatment of cancer. Two multi-step targeting constructs were explored during this funding period: (1) anti-Tag-72 and (2) anti-GD2.

  10. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    Science.gov (United States)

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human
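The three-step loop described above (initialize a swarm of weight vectors, evaluate each via an inner solver, then update from personal and global bests) can be sketched generically. Here a simple analytic function stands in for the plan-optimization solver and evaluation function; the inertia and acceleration constants are conventional PSO defaults, not the authors' settings.

```python
import numpy as np

def pso(evaluate, dim, n_particles=20, iters=60, bounds=(0.0, 1.0), seed=0):
    """Minimal particle swarm optimizer over a vector of weighting factors."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # step (i): random swarm
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([evaluate(p) for p in x])   # step (ii): score each particle
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)  # step (iii)
        x = np.clip(x + v, lo, hi)
        vals = np.array([evaluate(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# Toy surrogate for the plan evaluation: the "best plan" sits at weights (0.3, 0.7).
best_w, best_val = pso(lambda w: float(np.sum((w - np.array([0.3, 0.7])) ** 2)), dim=2)
```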

  11. Optimal Nonlinear Filter for INS Alignment

    Institute of Scientific and Technical Information of China (English)

    赵瑞; 顾启泰

    2002-01-01

In the past, all methods for handling inertial navigation system (INS) alignment were sub-optimal. In this paper, particle filtering (PF) is used as an optimal method for solving the problem of INS alignment. A sub-optimal two-step filtering algorithm is presented to improve the real-time performance of PF. The approach combines particle filtering with Kalman filtering (KF). Simulation results illustrate the superior performance of these approaches when compared with extended Kalman filtering (EKF).
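The PF component can be illustrated on a toy scalar model. This is a generic bootstrap particle filter, not the INS alignment model itself; the state equation and noise levels are invented.

```python
import numpy as np

def particle_filter(obs, n_particles=500, q=0.01, r=0.1, seed=0):
    """Bootstrap particle filter for a scalar random-walk state.

    Model (illustrative): x_t = x_{t-1} + w,  y_t = x_t + v,
    with process variance q and measurement variance r.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)  # predict
        weights = np.exp(-0.5 * (y - particles) ** 2 / r)                 # weight by likelihood
        weights /= weights.sum()
        idx = rng.choice(n_particles, n_particles, p=weights)             # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# A constant true state of 0.5 observed in noise: estimates should settle near 0.5.
rng = np.random.default_rng(1)
obs = 0.5 + rng.normal(0.0, np.sqrt(0.1), 100)
est = particle_filter(obs)
```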

  12. The importance of personality and parental styles on optimism in adolescents.

    Science.gov (United States)

    Zanon, Cristian; Bastianello, Micheline Roat; Pacico, Juliana Cerentini; Hutz, Claudio Simon

    2014-01-01

Some studies have suggested that personality factors are important to optimism development. Others have emphasized that family relations are relevant variables to optimism. This study aimed to evaluate the importance of parenting styles to optimism, controlling for the variance accounted for by personality factors. Participants were 344 Brazilian high school students (44% male) with a mean age of 16.2 years (SD = 1) who answered personality, optimism, responsiveness and demandingness scales. Hierarchical regression analyses were conducted with personality factors (in the first step) and maternal and paternal parenting styles, demandingness and responsiveness (in the second step) as predictive variables and optimism as the criterion. Personality factors, especially neuroticism (β = -.34), accounted for substantially more variance in optimism than parental styles (1%). These findings suggest that personality is more important to optimism development than parental styles.
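The hierarchical (two-step) regression design can be sketched with synthetic data. Everything below is illustrative: the predictor counts, effect sizes and seed are ours; only the sample size (344) and the idea of entering personality before parenting variables come from the abstract.

```python
import numpy as np

def r_squared(X, y):
    """R² of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    res = y - X1 @ beta
    return 1.0 - (res @ res) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 344
personality = rng.normal(size=(n, 5))   # step 1 predictors (five factors)
parenting = rng.normal(size=(n, 4))     # step 2 predictors (styles, demand., respons.)
optimism = -0.34 * personality[:, 0] + 0.1 * parenting[:, 0] + rng.normal(size=n)

r2_step1 = r_squared(personality, optimism)
r2_both = r_squared(np.column_stack([personality, parenting]), optimism)
delta_r2 = r2_both - r2_step1           # unique contribution of the step 2 block
```

The quantity `delta_r2` is the incremental variance attributed to parenting after personality has been entered, which is the comparison the abstract reports.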

  13. Risk Factors for Asthma Exacerbation and Treatment Failure in Adults and Adolescents with Well-Controlled Asthma during Continuation and Step Down Therapy.

    Science.gov (United States)

    DiMango, Emily; Rogers, Linda; Reibman, Joan; Gerald, Lynn B; Brown, Mark; Sugar, Elizabeth A; Henderson, Robert; Holbrook, Janet T

    2018-06-04

Although national and international guidelines recommend reduction of asthma controller therapy, or "step-down" therapy, in patients with well controlled asthma, it is expected that some individuals may experience worsening of asthma symptoms or asthma exacerbations during step-down. Characteristics associated with subsequent exacerbations during step-down therapy have not been well defined. The effect of environmental tobacco smoke (ETS) exposure on the risk of treatment failure during asthma step-down therapy has not been reported. To identify baseline characteristics associated with treatment failure and asthma exacerbation during maintenance and guideline-based step-down therapy. The present analysis uses data collected from a completed randomized controlled trial of optimal step-down therapy in patients with well controlled asthma taking moderate dose combination inhaled corticosteroids/long acting beta agonists. Participants were 12 years or older with physician diagnosed asthma and were enrolled between December 2011 and May 2014. An emergency room visit in the previous year was predictive of subsequent treatment failure (HR 1.53; 95% CI 1.06-2.21). For every 10% increase in baseline forced expiratory volume in one second percent predicted, the hazard for treatment failure was reduced by 14% (95% CI: 0.74-0.99). There was no difference in risk of treatment failure between adults and children, nor did duration of asthma increase risk of treatment failure. Age of asthma onset was not associated with increased risk of treatment failure. An unexpected emergency room visit in the previous year was the only risk factor significantly associated with subsequent asthma exacerbations requiring systemic corticosteroids. Time to treatment failure or exacerbation did not differ in participants with and without self-report of ETS exposure. The present findings can help clinicians identify patients more likely to develop treatment failures and exacerbations and who may therefore

  14. STRUCTURAL OPTIMIZATION OF FUNCTIONALLY GRADED MATERIALS WITH SMALL CONCENTRATION OF INCLUSIONS

    Directory of Open Access Journals (Sweden)

    DISKOVSKY A. A.

    2017-01-01

With the optimal design of the inner structure of a functionally graded material (FGM) based on the classical homogenization procedure, cases of low concentration of inclusions, when the size of the inclusions is essentially less than the distance between them, lead to computational difficulties. Purpose - to develop a homogenization procedure that effectively solves the problem of optimizing the internal structure of FGM at low concentrations of inclusions, and to illustrate it with specific examples. Conclusion - the proposed method allows calculation and optimal design of the internal structure of FGM structures with variable inclusions, and with a variable step between them, within a single methodology. The optimization is performed using two mechanisms. The first is the allocation of fixed border areas at the edges, in which inclusions are absent. The second is the distribution of inclusion sizes according to a law coinciding with the distribution law of the external load; likewise, the step between inclusions should be reduced in areas with greater intensity of the external load.

  15. Verification of a CT scanner using a miniature step gauge

    DEFF Research Database (Denmark)

    Cantatore, Angela; Andreasen, J.L.; Carmignato, S.

    2011-01-01

The work deals with performance verification of a CT scanner using a 42mm miniature replica step gauge developed for optical scanner verification. Error quantification and optimization of the CT system set-up in terms of resolution and measurement accuracy are fundamental for use of CT scanning...

  16. One-Step-Ahead Predictive Control for Hydroturbine Governor

    Directory of Open Access Journals (Sweden)

    Zhihuai Xiao

    2015-01-01

The hydroturbine generator regulating system can be considered as a single system that synthetically integrates water, machine, and electricity. It is a complex and nonlinear system, and its configuration and parameters are time-dependent. A one-step-ahead predictive control based on on-line trained neural networks (NNs) for a hydroturbine governor with variation in gate position is described in this paper. The proposed control algorithm consists of a one-step-ahead neuropredictor that tracks the dynamic characteristics of the plant and predicts its output, and a neurocontroller that generates the optimal control signal. The weights of the two NNs, initially trained off-line, are updated on-line according to the scalar error. The proposed controller can thus track operating conditions in real time and produce the optimal control signal over a wide operating range. Only the inputs and outputs of the generator are measured, and there is no need to determine the other states of the generator. Simulations have been performed with varying operating conditions and different disturbances to compare the performance of the proposed controller with that of a conventional PID controller and to validate the feasibility of the proposed approach.

  17. Free Modal Algebras Revisited: The Step-by-Step Method

    NARCIS (Netherlands)

    Bezhanishvili, N.; Ghilardi, Silvio; Jibladze, Mamuka

    2012-01-01

    We review the step-by-step method of constructing finitely generated free modal algebras. First we discuss the global step-by-step method, which works well for rank one modal logics. Next we refine the global step-by-step method to obtain the local step-by-step method, which is applicable beyond

  18. Optimization of cryoprotectant loading into murine and human oocytes.

    Science.gov (United States)

    Karlsson, Jens O M; Szurek, Edyta A; Higgins, Adam Z; Lee, Sang R; Eroglu, Ali

    2014-02-01

Loading of cryoprotectants into oocytes is an important step of the cryopreservation process, in which the cells are exposed to potentially damaging osmotic stresses and chemical toxicity. Thus, we investigated the use of physics-based mathematical optimization to guide design of cryoprotectant loading methods for mouse and human oocytes. We first examined loading of 1.5 M dimethyl sulfoxide (Me₂SO) into mouse oocytes at 23°C. Conventional one-step loading resulted in rates of fertilization (34%) and embryonic development (60%) that were significantly lower than those of untreated controls (95% and 94%, respectively). In contrast, the mathematically optimized two-step method yielded much higher rates of fertilization (85%) and development (87%). To examine the causes for oocyte damage, we performed experiments to separate the effects of cell shrinkage and Me₂SO exposure time, revealing that neither shrinkage nor Me₂SO exposure single-handedly impairs the fertilization and development rates. Thus, damage during one-step Me₂SO addition appears to result from interactions between the effects of Me₂SO toxicity and osmotic stress. We also investigated Me₂SO loading into mouse oocytes at 30°C. At this temperature, fertilization rates were again lower after one-step loading (8%) in comparison to mathematically optimized two-step loading (86%) and untreated controls (96%). Furthermore, our computer algorithm generated an effective strategy for reducing Me₂SO exposure time, using hypotonic diluents for cryoprotectant solutions. With this technique, 1.5 M Me₂SO was successfully loaded in only 2.5 min, with 92% fertilizability. Based on these promising results, we propose new methods to load cryoprotectants into human oocytes, designed using our mathematical optimization approach. Copyright © 2013 Elsevier Inc. All rights reserved.
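The benefit of two-step over one-step loading can be illustrated with a deliberately simplified water/cryoprotectant transport model. All constants below (permeabilities, concentrations, time scales) are made up for illustration and are not the fitted oocyte parameters used in the study.

```python
def min_water_volume(steps, lp=1.0, ps=0.5, dt=0.001, t_step=10.0):
    """Smallest normalized water volume reached while loading a cryoprotectant (CPA).

    Two-parameter toy model: water flux driven by the total osmotic gradient,
    CPA entering at permeability ps. `steps` lists the external CPA
    concentration (osm) held during each loading step; external salt is fixed
    at an isotonic 0.3 osm, and internal salt content is constant.
    """
    vw, n_salt, n_cpa = 1.0, 0.3, 0.0   # normalized water volume, solute contents
    v_min = vw
    for cpa_ext in steps:
        m_ext = 0.3 + cpa_ext
        for _ in range(int(t_step / dt)):
            m_int = (n_salt + n_cpa) / vw
            vw += dt * lp * (m_int - m_ext)            # water leaves when outside is stronger
            n_cpa += dt * ps * (cpa_ext - n_cpa / vw)  # CPA permeates down its gradient
            v_min = min(v_min, vw)
    return v_min

one_step = min_water_volume([1.5])        # 1.5 osm CPA added all at once
two_step = min_water_volume([0.75, 1.5])  # milder two-step schedule
```

In this toy model the two-step schedule keeps the minimum cell water volume higher (less osmotic shrinkage), which is the qualitative rationale for the optimized multi-step protocols in the abstract.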

  19. SYSTEMATIZATION OF THE BASIC STEPS OF THE STEP-AEROBICS

    Directory of Open Access Journals (Sweden)

    Darinka Korovljev

    2011-03-01

Following the development of the powerful sport industry, many new opportunities have appeared for creating new programmes of exercising with certain requisites. One such programme is certainly step-aerobics. Step-aerobics can be defined as a type of aerobics consisting of the basic aerobic steps (basic steps) applied in exercising on a stepper (step bench) whose height can be regulated. Step-aerobics itself can be divided into several groups, depending on the type of music, the working methods and the prior knowledge of the attendants. In this work, a systematization of the basic steps of step-aerobics was made on the basis of the following criteria: the origin of the step, the number of leg motions in stepping, and the body support at the end of the step. Systematization of the basic steps of step-aerobics is significant for giving a concrete review of the existing basic steps, thus making the creation of a step-aerobics lesson easier

  20. PERFORMANCE & ANALYSIS AND OPTIMIZATION OF STEPPED TYPE SOLAR STILL (A REVIEW)

    OpenAIRE

Mujahid Ahmed Khan Abdul Sayeed Khan, A. G. Bhuibhar & P. P. Pande

    2018-01-01

The availability of drinking water is decreasing day by day, whereas the demand for drinking water is increasing rapidly. To overcome this problem there is a need for a sustainable source for water distillation (purification). A solar still is a useful device that can be used for distilling brackish water for drinking purposes; it is a simple way of distilling water using the heat of the sun. The performance of a stepped type solar still with internal and external ref...

  1. The effect of step height on the performance of three-dimensional ac electro-osmotic microfluidic pumps.

    Science.gov (United States)

    Urbanski, John Paul; Levitan, Jeremy A; Burch, Damian N; Thorsen, Todd; Bazant, Martin Z

    2007-05-15

Recent numerical and experimental studies have investigated the increase in efficiency of microfluidic ac electro-osmotic pumps by introducing nonplanar geometries with raised steps on the electrodes. In this study, we analyze the effect of the step height on ac electro-osmotic pump performance. AC electro-osmotic pumps with three-dimensional electroplated steps are fabricated on glass substrates and pumping velocities of low ionic strength electrolyte solutions are measured systematically using a custom microfluidic device. Numerical simulations predict an improvement in pump performance with increasing step height, at a given frequency and voltage, up to an optimal step height, which qualitatively matches the trend observed in experiment. For a broad range of step heights near the optimum, the observed flow is much faster than with existing planar pumps (at the same voltage and minimum feature size) and in the theoretically predicted direction of the "fluid conveyor belt" mechanism. For small step heights, the experiments also exhibit significant flow reversal at the optimal frequency, which cannot be explained by the theory, although the simulations predict weak flow reversal at higher frequencies due to incomplete charging. These results provide insight into an important parameter for the design of nonplanar electro-osmotic pumps and clues to improve the fundamental theory of ACEO.

  2. Optimal placement of biomass fuelled gas turbines for reduced losses

    International Nuclear Information System (INIS)

    Jurado, Francisco; Cano, Antonio

    2006-01-01

    This paper presents a method for the optimal location and sizing of biomass fuelled gas turbine power plants. Both profitability in using biomass and power loss are considered in the cost function. The first step is to assess the plant size that maximizes the profitability of the project. The second step is to determine the optimal location of the gas turbines in the electric system to minimize the power loss of the system
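The two-step structure (size first, then site) can be shown in miniature; the profit curve and per-bus loss figures below are invented placeholders, not results from the paper.

```python
# Step 1 - choose the plant size that maximizes project profit.
sizes_mw = [5, 10, 15, 20]
profit = lambda s: 3.0 * s - 0.1 * s ** 2          # toy revenue-minus-cost curve
best_size = max(sizes_mw, key=profit)              # -> 15 MW for this curve

# Step 2 - choose the bus whose injection minimizes system power loss.
bus_loss_mw = {"bus A": 0.42, "bus B": 0.31, "bus C": 0.55}  # toy loss per candidate site
best_bus = min(bus_loss_mw, key=bus_loss_mw.get)   # -> "bus B"
```

In the paper the step-2 losses would come from a power-flow calculation of the electric system rather than a fixed table; the table simply stands in for that evaluation.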

  3. Optimization of an online heart-cutting multidimensional gas chromatography clean-up step for isotopic ratio mass spectrometry and simultaneous quadrupole mass spectrometry measurements of endogenous anabolic steroid in urine.

    Science.gov (United States)

    Casilli, Alessandro; Piper, Thomas; de Oliveira, Fábio Azamor; Padilha, Monica Costa; Pereira, Henrique Marcelo; Thevis, Mario; de Aquino Neto, Francisco Radler

    2016-11-01

Measuring carbon isotope ratios (CIRs) of urinary analytes represents a cornerstone of doping control analysis and has been particularly optimized for the detection of the misuse of endogenous steroids. Isotope ratio mass spectrometry (IRMS) of appropriate quality, however, necessitates adequate purities of the investigated steroids, which requires extensive pre-analytical sample clean-up steps due to both the natural presence of the target analytes and the high complexity of the matrix. In order to accelerate the sample preparation and increase the automation of the process, the use of multidimensional gas chromatography (MDGC) prior to IRMS experiments was investigated. A well-established instrumental configuration based on two independent GC ovens and one heart-cutting device was optimized. The first dimension (1D) separation was obtained by a non-polar column which assured high efficiency and good loading capacity, while the second dimension (2D), based on a mid-polar stationary phase, provided good selectivity. A flame ionization detector monitored the 1D, and the 2D was simultaneously recorded by isotope ratio and quadrupole mass spectrometry. The assembled MDGC set-up was applied for measuring testosterone, 5α- and 5β-androstanediol, androsterone, and etiocholanolone as target compounds and pregnanediol as endogenous reference compound. The urine samples were pretreated by conventional sample preparation steps comprising solid-phase extraction, hydrolysis, and liquid-liquid extraction. The extract obtained was acetylated and different aliquots were injected into the MDGC system. Two high performance liquid chromatography steps, conventionally adopted prior to CIR measurements, were replaced by the MDGC approach. The obtained values were consistent with the conventional ones. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    Science.gov (United States)

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept (what we think the probability now is) depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  5. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
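A scalar toy version of the prediction-correction idea can make the error comparison concrete: the correction is a single gradient step toward the current optimizer, and the prediction shifts the iterate by a finite-difference estimate of the optimizer's drift. This sketch is ours and far simpler than the paper's general algorithms; the objective, step sizes and horizon are illustrative.

```python
import numpy as np

def track(h=0.05, t_end=20.0, gamma=0.8, predict=True):
    """Mean steady-state tracking error for f_t(x) = 0.5 * (x - sin(t))**2."""
    ts = np.arange(0.0, t_end, h)
    c = np.sin(ts)                        # the time-varying optimizer
    x = 0.0
    errs = []
    for k in range(len(ts)):
        if predict and k >= 2:
            x += c[k - 1] - c[k - 2]      # predicted drift of the optimum
        x -= gamma * (x - c[k])           # gradient correction step
        errs.append(abs(x - c[k]))
    return float(np.mean(errs[len(errs) // 2:]))  # average over the second half
```

Running `track(predict=True)` versus `track(predict=False)` shows the prediction step shrinking the steady-state tracking error by roughly an order of magnitude in this toy setting, mirroring the $O(h^2)$-versus-$O(h)$ comparison in the abstract.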

  6. Echinocandin Failure Case Due to a Previously Unreported FKS1 Mutation in Candida krusei

    DEFF Research Database (Denmark)

    Jensen, Rasmus Hare; Justesen, Ulrik Stenz; Rewes, Annika

    2014-01-01

    Echinocandins are the preferred therapy for invasive infections due to Candida krusei. We present here a case of clinical failure involving C. krusei with a characteristic FKS1 hot spot mutation not previously reported in C. krusei that was isolated after 14 days of treatment. Anidulafungin MICs...... were elevated by ≥5 dilution steps above the clinical breakpoint but by only 1 step for a Candida albicans isolate harboring the corresponding mutation, suggesting a notable species-specific difference in the MIC increase conferred by this mutation....

  7. Lateral step initiation behavior in older adults.

    Science.gov (United States)

    Sparto, Patrick J; Jennings, J Richard; Furman, Joseph M; Redfern, Mark S

    2014-02-01

    Older adults have varied postural responses during induced and voluntary lateral stepping. The purpose of the research was to quantify the occurrence of different stepping strategies during lateral step initiation in older adults and to relate the stepping responses to retrospective history of falls. Seventy community-ambulating older adults (mean age 76 y, range 70-94 y) performed voluntary lateral steps as quickly as possible to the right or left in response to a visual cue, in a blocked design. Vertical ground reaction forces were measured using a forceplate, and the number and latency of postural adjustments were quantified. Subjects were assigned to groups based on their stepping strategy. The frequency of trials with one or two postural adjustments was compared with data from 20 younger adults (mean age 38 y, range 21-58 y). Logistic regression was used to relate presence of a fall in the previous year with the number and latency of postural adjustments. In comparison with younger adults, who almost always demonstrated one postural adjustment when stepping laterally, older adults exhibited a continuous distribution in the percentage of step trials made with one postural adjustment (from 0% to 100% of trials). Latencies of the initial postural adjustment and foot liftoff varied depending on the number of postural adjustments made. A history of falls was associated with a larger percentage of trials with two postural adjustments and a longer latency of foot liftoff. In conclusion, the number and latency of postural adjustments made during voluntary lateral stepping provide additional evidence that lateral control of posture may be a critical indicator of aging. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Optimization and quality control of genome-wide Hi-C library preparation.

    Science.gov (United States)

    Zhang, Xiang-Yuan; He, Chao; Ye, Bing-Yu; Xie, De-Jian; Shi, Ming-Lei; Zhang, Yan; Shen, Wen-Long; Li, Ping; Zhao, Zhi-Hu

    2017-09-20

    High-throughput chromosome conformation capture (Hi-C) is one of the key assays for genome-wide chromatin interaction studies. It is a time-consuming process that involves many steps and many different kinds of reagents, consumables, and equipment. At present, the reproducibility is unsatisfactory. By optimizing the key steps of the Hi-C experiment, such as crosslinking, pretreatment before digestion, inactivation of the restriction enzyme, and in situ ligation, we established a robust Hi-C procedure and prepared two biological replicates of Hi-C libraries from the GM12878 cells. After preliminary quality control by Sanger sequencing, the two replicates were high-throughput sequenced. The bioinformatics analysis of the raw sequencing data revealed that the mappability and pair-mate rate of the raw data were around 90% and 72%, respectively. Additionally, after removal of self-circular ligations and dangling-end products, a valid-pair rate of more than 96% was reached. Genome-wide interactome profiling shows clear topologically associating domains (TADs), consistent with previous reports. Further correlation analysis showed that the two biological replicates strongly correlate with each other in terms of both bin coverage and all bin pairs. All these results indicated that the optimized Hi-C procedure is robust and stable, which will be very helpful for the wide applications of the Hi-C assay.

  9. Optimal design of an alignment-free two-DOF rehabilitation robot for the shoulder complex.

    Science.gov (United States)

    Galinski, Daniel; Sapin, Julien; Dehez, Bruno

    2013-06-01

    This paper presents the optimal design of an alignment-free exoskeleton for the rehabilitation of the shoulder complex. This robot structure consists of two actuated joints and is linked to the arm through passive degrees of freedom (DOFs) to drive the flexion-extension and abduction-adduction movements of the upper arm. The optimal design of this structure is performed in two steps. The first step is a multi-objective optimization process aiming to find the best parameters characterizing the robot and its position relative to the patient. The second step is a comparison process aiming to select the best solution from the optimization results on the basis of several criteria related to practical considerations. The optimal design process leads to a solution outperforming an existing solution on aspects such as kinematics and ergonomics while being simpler.

  10. Optimal social insurance with linear income taxation

    DEFF Research Database (Denmark)

    Bovenberg, Lans; Sørensen, Peter Birch

    2009-01-01

    We study optimal social insurance aimed at insuring disability risk in the presence of linear income taxation. Optimal disability insurance benefits rise with previous earnings. Optimal insurance is incomplete even though disability risks are exogenous and verifiable, so that moral hazard in disability insurance is absent. Imperfect insurance is optimal because it encourages workers to insure themselves against disability by working and saving more, thereby alleviating the distortionary impact of the redistributive income tax on labor supply and savings.

  11. Typing DNA profiles from previously enhanced fingerprints using direct PCR.

    Science.gov (United States)

    Templeton, Jennifer E L; Taylor, Duncan; Handt, Oliva; Linacre, Adrian

    2017-07-01

    Fingermarks are a source of human identification both through the ridge patterns and DNA profiling. Typing nuclear STR DNA markers from previously enhanced fingermarks provides an alternative method of utilising the limited fingermark deposit that can be left behind during a criminal act. Dusting with fingerprint powders is a standard method used in classical fingermark enhancement and can affect DNA data. The ability to generate informative DNA profiles from powdered fingerprints using direct PCR swabs was investigated. Direct PCR was used because the opportunity to generate usable DNA profiles after performing any of the standard DNA extraction processes is minimal. Omitting the extraction step will, for many samples, be the key to success if there is limited sample DNA. DNA profiles were generated by direct PCR from 160 fingermarks after treatment with one of the following dactyloscopic fingerprint powders: white hadonite; silver aluminium; HiFi Volcano silk black; or black magnetic fingerprint powder. This was achieved by a combination of an optimised double-swabbing technique and swab media, omission of the extraction step to minimise loss of critical low-template DNA, and additional AmpliTaq Gold® DNA polymerase to boost the PCR. Ninety-eight out of 160 samples (61%) were considered 'up-loadable' to the Australian National Criminal Investigation DNA Database (NCIDD). The method described required a minimum of working steps, equipment and reagents, and was completed within 4 h. Direct PCR allows the generation of DNA profiles from enhanced prints without the need to increase PCR cycle numbers beyond the manufacturer's recommendations. Particular emphasis was placed on preventing contamination by applying strict protocols and avoiding the use of previously used fingerprint brushes. Based on this extensive survey, the data provided indicate minimal effects of any of these four powders on the chance of obtaining DNA profiles from enhanced fingermarks. Copyright © 2017

  12. Axial blanket enrichment optimization of the NPP Krsko fuel

    International Nuclear Information System (INIS)

    Kromar, M.; Kurincic, B.

    2001-01-01

    In this paper the optimal axial blanket enrichment of the NPP Krsko fuel is investigated. Since the optimization is dictated by economic categories that can vary significantly over time, a two-step approach is applied. In the first step, a simple relationship between the equivalent change in enrichment of the axial blankets and the central fuel region is established. The relationship is afterwards processed with economic criteria and constraints to obtain the optimal axial blanket enrichment. In the analysis, realistic NPP Krsko conditions are considered. Except for the enrichment, all other fuel characteristics are the same as in the fuel used in the most recent cycles. A typical reload cycle after the plant power uprate is examined. The analysis has shown that the current blanket enrichment is close to the optimal. A blanket enrichment reduction results in approximately 100 000 US$ in savings per fuel cycle. (author)

  13. Robust and optimal control a two-port framework approach

    CERN Document Server

    Tsai, Mi-Ching

    2014-01-01

    A Two-port Framework for Robust and Optimal Control introduces an alternative approach to robust and optimal controller synthesis procedures for linear, time-invariant systems, based on the two-port system widespread in electrical engineering. The novel use of the two-port system in this context allows straightforward engineering-oriented solution-finding procedures to be developed, requiring no mathematics beyond linear algebra. A chain-scattering description provides a unified framework for constructing the stabilizing controller set and for synthesizing H2 optimal and H∞ sub-optimal controllers. Simple yet illustrative examples explain each step. A Two-port Framework for Robust and Optimal Control features: · a hands-on, tutorial-style presentation giving the reader the opportunity to repeat the designs presented and easily to modify them for their own programs; · an abundance of examples illustrating the most important steps in robust and optimal design; and ...

  14. Effects of upper body parameters on biped walking efficiency studied by dynamic optimization

    Directory of Open Access Journals (Sweden)

    Kang An

    2016-12-01

    Full Text Available Walking efficiency is one of the considerations for designing biped robots. This article uses the dynamic optimization method to study the effects of upper body parameters, including upper body length and mass, on walking efficiency. Two minimal actuations, hip joint torque and push-off impulse, are used in the walking model, and minimal constraints are set in a free search using the dynamic optimization. Results show that there is an optimal upper body length for efficient walking within a range of walking speeds and step lengths. For short step lengths, walking with a lighter upper body mass is found to be more efficient, and vice versa. It is also found that for higher-speed locomotion, increasing the upper body length and mass can make the walking gait optimal rather than other kinds of gaits. In addition, the typical strategy of an optimal walking gait is to actuate the swing leg only at the beginning of the step.

  15. Non-dominated sorting binary differential evolution for the multi-objective optimization of cascading failures protection in complex networks

    International Nuclear Information System (INIS)

    Li, Y.F.; Sansavini, G.; Zio, E.

    2013-01-01

    A number of research works have been devoted to the optimization of protection strategies (e.g. transmission line switch off) of critical infrastructures (e.g. power grids, telecommunication networks, computer networks, etc) to avoid cascading failures. This work aims at improving a previous optimization approach proposed by some of the authors [1], based on the modified binary differential evolution (MBDE) algorithm. The improvements are three-fold: (1) in the optimization problem formulation, we introduce a third objective function to minimize the impacts of the switching off operations onto the existing network topology; (2) in the optimization problem formulation, we use the final results of cascades, rather than only a short horizon of one step cascading, to evaluate the effects of the switching off strategies; (3) in the optimization algorithm, the fast non-dominated sorting mechanisms are incorporated into the MBDE algorithm: a new algorithm, namely non-dominated sorting binary differential evolution algorithm (NSBDE) is then proposed. The numerical application to the topological structure of the 380 kV Italian power transmission network proves the benefits of the improvements.
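
    The fast non-dominated sorting mechanism that NSBDE borrows from the NSGA-II literature can be sketched as follows (a generic textbook version, not the authors' implementation; the sample objective vectors are made up):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs):
    """Sort objective vectors into successive Pareto fronts (Deb-style)."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    counts = [0] * n                       # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)            # non-dominated: first front
    f = 0
    while fronts[f]:
        nxt = []
        for i in fronts[f]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:         # only members of front f+1 remain
                    nxt.append(j)
        f += 1
        fronts.append(nxt)
    return fronts[:-1]

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(fast_non_dominated_sort(pts))  # prints [[0, 1, 2], [3], [4]]
```

    In NSBDE this ranking replaces scalar fitness comparison when selecting survivors among the binary differential-evolution trial vectors.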

  16. Spectrum analysis of the reduction degree of two-step reduced graphene oxide (GO) and the polymer/r-GO composites

    Energy Technology Data Exchange (ETDEWEB)

    She, Xilin, E-mail: xlshe@qdu.edu.cn [College of Chemical and Environmental Engineering, Qingdao University, Qingdao 266071 (China); Liu, Tongchao; Wu, Nan [College of Chemical and Environmental Engineering, Qingdao University, Qingdao 266071 (China); Xu, Xijin [School of Physics and Technology, University of Jinan, Jinan 250022 (China); Li, Jianjiang [College of Chemical and Environmental Engineering, Qingdao University, Qingdao 266071 (China); Yang, Dongjiang, E-mail: d.yang@qdu.edu.cn [College of Chemical and Environmental Engineering, Qingdao University, Qingdao 266071 (China); School of Chemistry, Physics and Mechanical Engineering, Science and Engineering Faculty, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia); Frost, Ray [School of Chemistry, Physics and Mechanical Engineering, Science and Engineering Faculty, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia)

    2013-12-16

    In this paper, the reduction degree of graphene oxide (GO) reduced using chemical reduction and thermal reduction methods was characterized by spectrum analysis. The optimized conditions for reducing GO were determined: hydrazine hydrate is the best reducing agent, and the appropriate thermal reduction temperature is 240 °C. The obtained GO solution was mixed with polystyrene (PS) solution to prepare PS/r-GO composites by using a two-step reduction technique under the optimized conditions. The structure and micro-morphology of GO, r-GO and PS/r-GO composites were characterized by Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). It is also observed that the two-step reduction pathway is more effective than one-step reduction for improving the reduction degree of GO. Accordingly, the electric conductivity of PS/r-GO composites prepared by the two-step reduction technique is as high as 21.45 S m⁻¹, which is much higher than that of composites fabricated by the one-step reduction method. The spectrum techniques will highlight new opportunities for investigating the reduction degree of GO in polymer composites. - Highlights: • Spectrum analysis on the reduction degree of GO reduced by different methods. • Determine the optimized reduction conditions of GO and polymer/r-GO composites. • The two-step reduction is more effective than one-step reduction.

  17. Spectrum analysis of the reduction degree of two-step reduced graphene oxide (GO) and the polymer/r-GO composites

    International Nuclear Information System (INIS)

    She, Xilin; Liu, Tongchao; Wu, Nan; Xu, Xijin; Li, Jianjiang; Yang, Dongjiang; Frost, Ray

    2013-01-01

    In this paper, the reduction degree of graphene oxide (GO) reduced using chemical reduction and thermal reduction methods was characterized by spectrum analysis. The optimized conditions for reducing GO were determined: hydrazine hydrate is the best reducing agent, and the appropriate thermal reduction temperature is 240 °C. The obtained GO solution was mixed with polystyrene (PS) solution to prepare PS/r-GO composites by using a two-step reduction technique under the optimized conditions. The structure and micro-morphology of GO, r-GO and PS/r-GO composites were characterized by Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). It is also observed that the two-step reduction pathway is more effective than one-step reduction for improving the reduction degree of GO. Accordingly, the electric conductivity of PS/r-GO composites prepared by the two-step reduction technique is as high as 21.45 S m⁻¹, which is much higher than that of composites fabricated by the one-step reduction method. The spectrum techniques will highlight new opportunities for investigating the reduction degree of GO in polymer composites. - Highlights: • Spectrum analysis on the reduction degree of GO reduced by different methods. • Determine the optimized reduction conditions of GO and polymer/r-GO composites. • The two-step reduction is more effective than one-step reduction

  18. Peyton’s four-step approach: differential effects of single instructional steps on procedural and memory performance – a clarification study

    Directory of Open Access Journals (Sweden)

    Krautter M

    2015-05-01

    free recall test. Conclusion: Our study identified Peyton’s Step 3 as being the most crucial part within Peyton’s four-step approach, contributing significantly more to learning success than the previous steps and reaching beyond the benefit of a mere repetition of skills demonstration. Keywords: Peyton’s four-step approach, skills-lab training, procedural skills 

  19. Hierarchical models and iterative optimization of hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Rasina, Irina V. [Ailamazyan Program Systems Institute, Russian Academy of Sciences, Peter One str. 4a, Pereslavl-Zalessky, 152021 (Russian Federation); Baturina, Olga V. [Trapeznikov Control Sciences Institute, Russian Academy of Sciences, Profsoyuznaya str. 65, 117997, Moscow (Russian Federation); Nasatueva, Soelma N. [Buryat State University, Smolina str.24a, Ulan-Ude, 670000 (Russian Federation)

    2016-06-08

    A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed on the basis of localization of the global optimality conditions via contraction of the control set.

  20. Single-step link of the superdeformed band in 143Eu

    International Nuclear Information System (INIS)

    Atac, A.; Bergstroem, M.H.; Nyberg, J.; Persson, J.; Herskind, B.; Joss, D.T.; Lipoglavsek, M.; Tucek, K.

    1996-01-01

    A discrete γ-ray transition with an energy of 3360.6 keV deexciting the second lowest SD state in 143Eu has been discovered. It carries 3.2% of the full intensity of the band and feeds into a nearly spherical state above the I = 35/2⁺, E_x = 4947 keV level. The exact placement of the single-step link is, however, not established due to the particularly complicated level scheme in the region of interest. The energy of the single-step link agrees well with the previously determined two-step links. (orig.)

  1. Optimal stability polynomials for numerical integration of initial value problems

    KAUST Repository

    Ketcheson, David I.; Ahmadia, Aron

    2013-01-01

    We consider the problem of finding optimally stable polynomial approximations to the exponential for application to one-step integration of initial value ordinary and partial differential equations. The objective is to find the largest stable step

  2. Controller tuning with evolutionary multiobjective optimization a holistic multiobjective optimization design procedure

    CERN Document Server

    Reynoso Meza, Gilberto; Sanchis Saez, Javier; Herrero Durá, Juan Manuel

    2017-01-01

    This book is devoted to Multiobjective Optimization Design (MOOD) procedures for controller tuning applications, by means of Evolutionary Multiobjective Optimization (EMO). It presents developments in tools, procedures and guidelines to facilitate this process, covering the three fundamental steps in the procedure: problem definition, optimization and decision-making. The book is divided into four parts. The first part, Fundamentals, focuses on the necessary theoretical background and provides specific tools for practitioners. The second part, Basics, examines a range of basic examples regarding the MOOD procedure for controller tuning, while the third part, Benchmarking, demonstrates how the MOOD procedure can be employed in several control engineering problems. The fourth part, Applications, is dedicated to implementing the MOOD procedure for controller tuning in real processes.

  3. Long-term pain relief with optimized medical treatment including antioxidants and step-up interventional therapy in patients with chronic pancreatitis.

    Science.gov (United States)

    Shalimar; Midha, Shallu; Hasan, Ajmal; Dhingra, Rajan; Garg, Pramod Kumar

    2017-01-01

    Abdominal pain is difficult to treat in patients with chronic pancreatitis (CP). Medical therapy including antioxidants has been shown to relieve pain of CP in the short-term. Our aim was to study the long-term results of optimized medical and interventional therapy for pain relief in patients with CP with a step-up approach. All consecutive patients with CP were included prospectively in the study. They were treated medically with a well-balanced diet, pancreatic enzymes, and antioxidants (9000 IU beta-carotene, 0.54 g vitamin C, 270 IU vitamin E, 600 µg organic selenium, and 2 g methionine). Endoscopic therapy and/or surgery were offered if medical therapy failed. Pain relief was the primary outcome measure. A total of 313 patients (mean age 26.16 ± 12.17; 244 males) with CP were included; 288 (92%) patients had abdominal pain. The etiology of CP was idiopathic in 224 (71.6%) and alcohol in 82 (26.2%). At 1-year follow-up, significant pain relief was achieved in 84.7% of patients: 52.1% with medical therapy, 16.7% with endoscopic therapy, 7.6% with surgery, and 8.3% spontaneously. The mean pain score decreased from 6.36 ± 1.92 to 1.62 ± 2.10 (P pain free at those follow-up periods. Significant pain relief is achieved in the majority of patients with optimized medical and interventional treatment. © 2016 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

  4. Optimizing queries in distributed systems

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2006-01-01

    Full Text Available This research presents the main elements of query optimization in distributed systems. First, the data architecture is presented according to the system-level architecture of a distributed environment. Then the architecture of a distributed database management system (DDBMS) is described at the conceptual level, followed by a presentation of the distributed query execution steps in these information systems. The research ends with a presentation of some aspects of distributed database query optimization and the strategies used for it.

  5. The stepping behavior analysis of pedestrians from different age groups via a single-file experiment

    Science.gov (United States)

    Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang

    2018-03-01

    The stepping behavior of pedestrians with different age compositions in a single-file experiment is investigated in this paper. The relations between step length, step width and stepping time are analyzed by using a step measurement method based on the calculation of the curvature of the trajectory. The relations of velocity-step width, velocity-step length and velocity-stepping time for different age groups are discussed and compared with previous studies. Finally, the effects of pedestrian gender and height on stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding the dynamics of pedestrian movement. Meanwhile, it offers experimental data to develop a microscopic model of pedestrian movement by considering stepping behavior.
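
    The curvature-based step measurement can be sketched as follows (an assumed reconstruction of the general idea, not the authors' method; the synthetic swaying trajectory is invented): lateral turning points of the walking trajectory, one per step, appear as local maxima of its discrete curvature.

```python
import math

def curvature(traj):
    """Discrete curvature at the interior points of a 2-D trajectory,
    using central first and second finite differences."""
    ks = []
    for i in range(1, len(traj) - 1):
        (x0, y0), (x1, y1), (x2, y2) = traj[i - 1], traj[i], traj[i + 1]
        dx, dy = (x2 - x0) / 2.0, (y2 - y0) / 2.0       # first derivative
        ddx, ddy = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0   # second derivative
        denom = (dx * dx + dy * dy) ** 1.5
        ks.append(abs(dx * ddy - dy * ddx) / denom if denom else 0.0)
    return ks

# Synthetic walk: steady forward progression plus lateral sway; curvature
# peaks at the lateral turning points, roughly one per step.
traj = [(0.05 * i, 0.1 * math.sin(0.5 * i)) for i in range(80)]
ks = curvature(traj)
peaks = [i + 1 for i in range(1, len(ks) - 1)
         if ks[i] > ks[i - 1] and ks[i] > ks[i + 1]]
print(len(peaks))
```

    With real head trajectories the curvature signal is noisier, so some smoothing and a peak-prominence threshold would be needed before counting steps.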

  6. Optimization of control bars patterns and fuel recharges of coupled form

    International Nuclear Information System (INIS)

    Mejia S, D.M.; Ortiz S, J.J.

    2006-01-01

    In this work, a coupled system for the optimization of fuel recharges and control bar patterns in boiling water reactors (BWRs) is presented. A multi-state recurrent neural net was used as the optimization technique. This type of neural net has been used in the solution of diverse problems, in particular the design of control bar patterns and the design of fuel recharges. However, these problems have previously been solved independently with different optimization techniques. The system, called OCORN (Optimization of Cycles of Operation using Neural Nets), was developed in FORTRAN 77 and solves both combinatorial optimization problems in a coupled way. OCORN begins by creating a seed recharge by means of an optimization through the Haling principle. Later on, a pattern of control bars for this seed recharge is proposed. Then a new fuel recharge is designed using the control bar patterns previously found. In this way an iterative process alternates between the optimization of control bar patterns and the fuel recharge until a stopping criterion is met. The stopping criterion is met when the fuel recharge and the control bar patterns do not vary over several successive iterations. The final result is an optimal fuel recharge and its respective control bar pattern. In this work the results obtained by this system for an 18-month equilibrium cycle divided into 12 burnup steps are presented. The results are very encouraging, since the fuel recharge and the control bar pattern fulfill the restrictions imposed in each of the problems. (Author)
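
    The coupled iteration described above, alternately optimizing one subproblem while the other is held fixed until neither changes, has the structure of block-coordinate descent. A generic sketch with assumed quadratic subproblems (illustrative only, unrelated to the actual reactor models):

```python
def f(x, y):
    """Shared objective coupling the two design variables."""
    return (x - y) ** 2 + (x - 1) ** 2 + (y + 1) ** 2

def best_x(y):
    """Exact argmin over x with y fixed: 2(x - y) + 2(x - 1) = 0."""
    return (y + 1) / 2

def best_y(x):
    """Exact argmin over y with x fixed: -2(x - y) + 2(y + 1) = 0."""
    return (x - 1) / 2

x, y = 0.0, 0.0
for _ in range(100):
    x_new = best_x(y)        # e.g. redesign one subproblem (pattern) ...
    y_new = best_y(x_new)    # ... then the other (recharge), given it
    if abs(x_new - x) < 1e-12 and abs(y_new - y) < 1e-12:
        break                # stop criterion: neither design changes
    x, y = x_new, y_new

print(round(x, 6), round(y, 6))  # prints 0.333333 -0.333333
```

    For convex subproblems this alternation converges to a joint optimum; with combinatorial subproblems such as recharge and bar-pattern design it only guarantees a mutually consistent pair, which is why the stopping criterion checks that neither solution changes over several iterations.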

  7. Stepping strategies for regulating gait adaptability and stability.

    Science.gov (United States)

    Hak, Laura; Houdijk, Han; Steenbrink, Frans; Mert, Agali; van der Wurff, Peter; Beek, Peter J; van Dieën, Jaap H

    2013-03-15

    Besides a stable gait pattern, gait in daily life requires the capability to adapt this pattern in response to environmental conditions. The purpose of this study was to elucidate the anticipatory strategies used by able-bodied people to attain an adaptive gait pattern, and how these strategies interact with strategies used to maintain gait stability. Ten healthy subjects walked in a Computer Assisted Rehabilitation ENvironment (CAREN). To provoke an adaptive gait pattern, subjects had to hit virtual targets, with markers guided by their knees, while walking on a self-paced treadmill. The effects of walking with and without this task on walking speed, step length, step frequency, step width and the margins of stability (MoS) were assessed. Furthermore, these trials were performed with and without additional continuous ML platform translations. When an adaptive gait pattern was required, subjects decreased step length. These adaptations resulted in the preservation of equal MoS between trials, despite the disturbing influence of the gait adaptability task. When the gait adaptability task was combined with the balance perturbation subjects further decreased step length, as evidenced by a significant interaction between both manipulations (p=0.012). In conclusion, able-bodied people reduce step length and increase step width during walking conditions requiring a high level of both stability and adaptability. Although an increase in step frequency has previously been found to enhance stability, a faster movement, which would coincide with a higher step frequency, hampers accuracy and may consequently limit gait adaptability. Copyright © 2012 Elsevier Ltd. All rights reserved.
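
    The margins of stability (MoS) referred to above are commonly computed from the extrapolated centre of mass (Hof's inverted-pendulum construction); a minimal sketch, with invented numbers and an assumed pendulum length:

```python
import math

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length=0.9, g=9.81):
    """Mediolateral margin of stability: MoS = BoS edge - XcoM, where the
    extrapolated centre of mass is XcoM = CoM position + CoM velocity / omega0
    and omega0 = sqrt(g / l) is the inverted-pendulum eigenfrequency."""
    omega0 = math.sqrt(g / leg_length)
    xcom = com_pos + com_vel / omega0
    return bos_edge - xcom

# Example: CoM at 0.05 m, moving laterally at 0.10 m/s towards the
# base-of-support edge located at 0.12 m.
mos = margin_of_stability(com_pos=0.05, com_vel=0.10, bos_edge=0.12)
print(round(mos, 3))  # prints 0.04
```

    A positive MoS means the extrapolated centre of mass still lies inside the base of support; wider steps enlarge the lateral BoS edge, which is one way the subjects above preserved equal MoS under the adaptability task.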

  8. Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders

    International Nuclear Information System (INIS)

    Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.; Shwetha, B.; Arunkumar, T.; Sathiyan, S.; Ganesh, K.M.; Ravikumar, M.

    2009-01-01

    Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experiences with LDR techniques can be used in guiding optimal use of HDR techniques. The aim of this study was to optimize dose distribution for Gammamed plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on optimized dose distributions. Two different dose optimization point models were used in this study, namely non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length. Thus 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameters and were chosen for each cylinder so that four points were distributed evenly in the curvature portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. An iterative optimization routine was utilized for all optimizations. The effects of various optimization routines (iterative, geometric, equal times) were studied for the 3.0-cm diameter vaginal cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points. 
For both non-apex and apex models of vaginal cylinders, doses for apex point and three dome

  9. Supply chain optimization of sugarcane first generation and eucalyptus second generation ethanol production in Brazil

    International Nuclear Information System (INIS)

    Jonker, J.G.G.; Junginger, H.M.; Verstegen, J.A.; Lin, T.; Rodríguez, L.F.; Ting, K.C.; Faaij, A.P.C.; Hilst, F. van der

    2016-01-01

    Highlights: • Optimal location & scale of ethanol plants for expansion in Goiás until 2030. • Ethanol costs from sugarcane vary between 710 and 752 US$/m³ in 2030. • For eucalyptus-based ethanol production costs vary between 543 and 560 US$/m³ in 2030. • System-wide optimization has a marginal impact on overall production costs. • The overall GHG emission intensity is mainly impacted by former land use. - Abstract: The expansion of the ethanol industry in Brazil faces two important challenges: to reduce total ethanol production costs and to limit the greenhouse gas (GHG) emission intensity of the ethanol produced. The objective of this study is to economically optimize the scale and location of ethanol production plants given the expected expansion of biomass supply regions. A linear optimization model is utilized to determine the optimal location and scale of sugarcane and eucalyptus industrial processing plants given the projected spatial distribution of the expansion of biomass production in the state of Goiás between 2012 and 2030. Three expansion approaches evaluated the impact on ethanol production costs of expanding an existing industry in one time step (one-step), or multiple time steps (multi-step), or constructing a newly emerging ethanol industry in Goiás (greenfield). In addition, the GHG emission intensity of the optimized ethanol supply chains is calculated. Under the three expansion approaches, the total ethanol production costs of sugarcane ethanol decrease from 894 US$/m³ ethanol in 2015 to 752, 715, and 710 US$/m³ ethanol in 2030 for the multi-step, one-step and greenfield expansion respectively. For eucalyptus, ethanol production costs decrease from 635 US$/m³ in 2015 to 560 and 543 US$/m³ in 2030 for the multi-step and one-step approach. A general trend is the use of large scale industrial processing plants, especially towards 2030 due to increased biomass supply. We conclude that a system-wide optimization as a marginal

  10. Comparing Multi-Step IMAC and Multi-Step TiO2 Methods for Phosphopeptide Enrichment

    Science.gov (United States)

    Yue, Xiaoshan; Schunter, Alissa; Hummon, Amanda B.

    2016-01-01

    Phosphopeptide enrichment from complicated peptide mixtures is an essential step for mass spectrometry-based phosphoproteomic studies to reduce sample complexity and ionization suppression effects. Typical methods for enriching phosphopeptides include immobilized metal affinity chromatography (IMAC) or titanium dioxide (TiO2) beads, which have selective affinity and can interact with phosphopeptides. In this study, the IMAC enrichment method was compared with the TiO2 enrichment method, using a multi-step enrichment strategy from whole cell lysate, to evaluate their abilities to enrich for different types of phosphopeptides. The peptide-to-beads ratios were optimized for both IMAC and TiO2 beads. Both IMAC and TiO2 enrichments were performed for three rounds to enable the maximum extraction of phosphopeptides from the whole cell lysates. The phosphopeptides that are unique to IMAC enrichment, unique to TiO2 enrichment, and identified with both IMAC and TiO2 enrichment were analyzed for their characteristics. Both IMAC and TiO2 enriched similar amounts of phosphopeptides with comparable enrichment efficiency. However, phosphopeptides that are unique to IMAC enrichment showed a higher percentage of multi-phosphopeptides, as well as a higher percentage of longer, basic, and hydrophilic phosphopeptides. Also, the IMAC and TiO2 procedures clearly enriched phosphopeptides with different motifs. Finally, further enriching with two rounds of TiO2 from the supernatant after IMAC enrichment, or further enriching with two rounds of IMAC from the supernatant after TiO2 enrichment, does not fully recover the phosphopeptides that are not identified with the corresponding multi-step enrichment. PMID:26237447

  11. STEP: Self-supporting tailored k-space estimation for parallel imaging reconstruction.

    Science.gov (United States)

    Zhou, Zechen; Wang, Jinnan; Balu, Niranjan; Li, Rui; Yuan, Chun

    2016-02-01

    A new subspace-based iterative reconstruction method, termed Self-supporting Tailored k-space Estimation for Parallel imaging reconstruction (STEP), is presented and evaluated in comparison to the existing autocalibrating method SPIRiT and the calibrationless method SAKE. In STEP, two tailored schemes, k-space partition and basis selection, are proposed to promote a spatially variant signal subspace and incorporated into a self-supporting structured low-rank model to enforce properties of locality, sparsity, and rank deficiency, which can be formulated as a constrained optimization problem and solved by an iterative algorithm. Simulated and in vivo datasets were used to investigate the performance of STEP in terms of overall image quality and detail structure preservation. The advantage of STEP in image quality is demonstrated on retrospectively undersampled multichannel Cartesian data with various patterns. Compared with SPIRiT and SAKE, STEP can provide more accurate reconstructed images with fewer residual aliasing artifacts and reduced noise amplification in simulation and in vivo experiments. In addition, STEP has the capability of combining compressed sensing with an arbitrary sampling trajectory. Using k-space partition and basis selection can further improve the performance of parallel imaging reconstruction with or without calibration signals. © 2015 Wiley Periodicals, Inc.

  12. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with a variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
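    The robustness property the abstract relies on can be illustrated with a minimal MCC-based adaptive FIR filter. This is a sketch, not the paper's algorithm: the step size here is fixed (the MSD-minimizing variable step size rule is not reproduced), and the system, signal lengths, and kernel width are made-up toy values.

```python
import math
import random

def mcc_lms(x, d, n_taps=4, mu=0.05, sigma=1.0):
    # MCC-based adaptive FIR filter: an LMS-style update scaled by the
    # Gaussian correntropy kernel exp(-e^2 / (2*sigma^2)). Large (impulsive)
    # errors get a near-zero kernel weight, which is what makes the filter
    # robust against impulsive interference.
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]        # regressor, most recent first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        g = math.exp(-e * e / (2 * sigma ** 2))  # correntropy kernel weight
        for i in range(n_taps):
            w[i] += mu * g * e * u[i]
    return w

# Toy system identification with sparse impulsive interference
random.seed(0)
h = [0.5, -0.3, 0.2, 0.1]                        # unknown system to identify
x = [random.gauss(0, 1) for _ in range(5000)]
d = [sum(h[i] * x[n - i] for i in range(4)) if n >= 3 else 0.0
     for n in range(len(x))]
for n in range(0, len(d), 250):                  # occasional large outliers
    d[n] += random.choice([-50.0, 50.0])
w = mcc_lms(x, d)
print([round(wi, 2) for wi in w])
```

    Despite the ±50 outliers, the estimated taps land close to the true system, because the kernel weight for an outlier sample is effectively zero.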

  13. Galerkin v. discrete-optimal projection in nonlinear model reduction

    Energy Technology Data Exchange (ETDEWEB)

    Carlberg, Kevin Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Barone, Matthew Franklin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Antil, Harbir [George Mason Univ., Fairfax, VA (United States)

    2015-04-01

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.

  14. Time-optimal control of reactor power

    International Nuclear Information System (INIS)

    Bernard, J.A.

    1987-01-01

    Control laws that permit adjustments in reactor power to be made in minimum time and without overshoot have been formulated and demonstrated. These control laws, which are derived from the standard and alternate dynamic period equations, are closed-form expressions of general applicability. These laws were deduced by noting that if a system is subject to one or more operating constraints, then the time-optimal response is to move the system along these constraints. Given that nuclear reactors are subject to limitations on the allowed reactor period, a time-optimal control law would step the period from infinity to the minimum allowed value, hold the period at that value for the duration of the transient, and then step the period back to infinity. The change in reactor power would therefore be accomplished in minimum time. The resulting control laws are superior to other forms of time-optimal control because they are general-purpose, closed-form expressions that are both mathematically tractable and readily implemented. Moreover, these laws include provisions for the use of feedback. The results of simulation studies and actual experiments on the 5 MWt MIT Research Reactor, in which these time-optimal control laws were used successfully to adjust the reactor power, are presented

  15. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    Science.gov (United States)

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
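    The iterative least-squares fitting that the abstract attributes to Excel's SOLVER can be mimicked in a few lines of code. The sketch below uses Gauss-Newton iteration (one common iterative least-squares routine; SOLVER's internals are not reproduced) on a Michaelis-Menten curve, a typical non-linear function in biology. The data and parameter values are invented for illustration.

```python
def gauss_newton_fit(f, jac, xs, ys, p0, iters=50):
    # Iteratively minimize the sum of squared residuals, as an Excel
    # Solver-style fitting routine does. For a two-parameter model the
    # normal equations J^T J dp = J^T r can be solved in closed form.
    p = list(p0)
    for _ in range(iters):
        r = [y - f(x, p) for x, y in zip(xs, ys)]   # residuals
        J = [jac(x, p) for x in xs]                 # Jacobian rows
        a11 = sum(j[0] * j[0] for j in J)
        a12 = sum(j[0] * j[1] for j in J)
        a22 = sum(j[1] * j[1] for j in J)
        b1 = sum(j[0] * ri for j, ri in zip(J, r))
        b2 = sum(j[1] * ri for j, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        p[0] += (a22 * b1 - a12 * b2) / det         # Cramer's rule for the
        p[1] += (a11 * b2 - a12 * b1) / det         # Gauss-Newton step
    return p

# Michaelis-Menten model y = Vmax * x / (Km + x)
f = lambda x, p: p[0] * x / (p[1] + x)
jac = lambda x, p: (x / (p[1] + x),                 # df/dVmax
                    -p[0] * x / (p[1] + x) ** 2)    # df/dKm
xs = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
ys = [f(x, [10.0, 3.0]) for x in xs]                # noise-free demo data
vmax, km = gauss_newton_fit(f, jac, xs, ys, p0=[8.0, 2.0])
print(round(vmax, 3), round(km, 3))   # -> 10.0 3.0
```

    With noise-free data the iteration recovers the generating parameters exactly; with real experimental data it converges to the least-squares estimate instead.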

  16. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-11-01

    The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called the analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, which is then used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles and often poorly known model error statistics, leading to a crude approximation of the forecast covariance. This strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information to the estimation process in order to mitigate the sub-optimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background. In this work, we propose a deterministic variant of the EnKF-OSA, based on the Singular Evolutive Interpolated Ensemble Kalman (SEIK) filter. The motivation behind this is to avoid the observation perturbations of the EnKF in order to improve the scheme's behavior when assimilating big data sets with small ensembles.
The new SEIK-OSA scheme is implemented and its efficiency is demonstrated
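    The alternating forecast/analysis structure described in the abstract can be illustrated with a scalar stochastic EnKF. This is only a sketch of the generic cycle: the OSA-smoothing and deterministic SEIK variants are not reproduced, and the dynamics, ensemble size, and noise levels are toy values.

```python
import random
import statistics

def enkf_cycle(ensemble, model, obs, obs_err_sd):
    # One forecast + analysis cycle of a stochastic scalar EnKF (observation
    # operator H = 1). Forecast: propagate each member through the model.
    forecast = [model(x) for x in ensemble]
    # Analysis: Kalman-like correction, with the forecast error variance
    # estimated from the ensemble and observations perturbed per member.
    var_f = statistics.variance(forecast)
    gain = var_f / (var_f + obs_err_sd ** 2)     # scalar Kalman gain
    return [xf + gain * (obs + random.gauss(0, obs_err_sd) - xf)
            for xf in forecast]

random.seed(1)
model = lambda x: 0.9 * x + 1.0                  # toy linear dynamics
ens = [random.gauss(0.0, 2.0) for _ in range(100)]
truth = 0.0
for _ in range(20):
    truth = model(truth)
    y = truth + random.gauss(0, 0.5)             # synthetic noisy observation
    ens = enkf_cycle(ens, model, y, 0.5)
print(round(statistics.mean(ens), 2), round(truth, 2))
```

    After a few cycles the analysis ensemble mean tracks the true state to within the observation error, which is the behavior the sub-optimal small-ensemble variants discussed above try to preserve at scale.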

  17. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    International Nuclear Information System (INIS)

    Chen, Bo; Chen, Chen; Wang, Jianhui; Butler-Purry, Karen L.

    2017-01-01

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.

  18. Efficacy, Beliefs, and Investment in Step-Level Public Goods

    NARCIS (Netherlands)

    Dijkstra, Jacob; Mulders, Jaap Oude

    2014-01-01

    A central concept for understanding social dilemma behavior is the efficacy of an actor's cooperative behavior in terms of increasing group well-being. We report a decision and game theoretical analysis of efficacy in step-level public goods (SPGs). Previous research shows a positive relation between efficacy and contributions to SPGs and explains this relation by a purely motivational account.

  19. Computational and experimental optimization of the exhaust air energy recovery wind turbine generator

    International Nuclear Information System (INIS)

    Tabatabaeikia, Seyedsaeed; Ghazali, Nik Nazri Bin Nik; Chong, Wen Tong; Shahizare, Behzad; Izadyar, Nima; Esmaeilzadeh, Alireza; Fazlizan, Ahmad

    2016-01-01

    Highlights: • Studying the viability of harvesting wasted energy by an exhaust air recovery generator. • Optimizing the design using response surface methodology. • Validation of optimization and computational results by performing experimental tests. • Investigation of flow behaviour using computational fluid dynamic simulations. • Performing a technical and economic study of the exhaust air recovery generator. - Abstract: This paper studies the optimization of an innovative exhaust air recovery wind turbine generator through computational fluid dynamic (CFD) simulations. The optimization strategy aims to optimize the overall system energy generation while simultaneously guaranteeing that it does not degrade the cooling tower performance by decreasing airflow intake or increasing fan motor power consumption. The wind turbine rotor position, modified diffuser plates, and the introduction of separator plates to the design are considered as the variable factors for the optimization. The generated power coefficient is selected as the optimization objective. Unlike most previous optimizations in the field of wind turbines, this study utilises response surface methodology (RSM), an analytical optimization procedure based on multivariate statistical techniques. A comprehensive study of CFD parameters including the mesh resolution, the turbulence model, and transient time step values is presented. The system is simulated using the SST k-ω turbulence model, and both the computational and optimization results are validated against experimental data obtained in the laboratory. Results show that the optimization strategy can improve the wind turbine generated power by 48.6% compared to the baseline design. Meanwhile, it is able to enhance the fan intake airflow rate and decrease fan motor power consumption. The obtained optimization equations are also validated by both CFD and experimental results, and a negligible deviation in the range of 6–8.5% is observed.

  20. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that the RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often results in more accurate edge ranks than the RE_RNN using the regularized RNN on short simulated time series. Combining, by a weighted majority voting rule, the networks derived by the RE_RMLP-RNN using different numbers of latent nodes in step one to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. 
The framework of two-step
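    The PSO used above to train the RNN/RMLP parameters is a standard population-based optimizer. The sketch below shows the generic algorithm on a stand-in objective (a sphere function, not an RNN training loss); all hyperparameter values are conventional defaults, not taken from the paper.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    # Minimal particle swarm optimizer: each particle remembers its personal
    # best, the swarm shares a global best, and velocities blend inertia,
    # cognitive, and social pulls.
    random.seed(42)
    pos = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function as a stand-in for the network training loss
best, val = pso(lambda p: sum(x * x for x in p), dim=4)
print(round(val, 6))
```

    In the paper's setting, `f` would evaluate how well the RNN/RMLP with the candidate weight vector reproduces the gene expression time series.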

  1. Contribution to the optimal sizing of the hybrid photovoltaic systems

    International Nuclear Information System (INIS)

    Dimitrov, Dimitar

    2009-01-01

    In this thesis, hybrid photovoltaic (HPV) systems are considered, in which electricity is generated by a photovoltaic generator and additionally by a diesel genset. Within this work, a software tool for optimal sizing and design was developed and used for the optimization of HPV systems aimed at supplying a small rural village. For the optimization, genetic algorithms were used, optimizing 10 HPV system parameters (rated power of the components, battery capacity, dispatching strategy parameters, etc.). The optimization objective is to size and design systems that continuously supply the load at the lowest net electricity cost. In order to speed up the optimization process, the most suitable genetic algorithm settings were chosen through an in-depth preliminary analysis. Using measurements, the characteristics of a PV generator working in real conditions were obtained, and the input values for the PV generator simulation model were adapted accordingly. A quasi-steady battery simulation model is introduced, which avoids the voltage and state-of-charge variation problems that arise when constant-current charging/discharging is assumed within a time step interval. This model takes into account the influence of the battery temperature on its operational characteristics. Simulation model improvements were also introduced for the other components of the HPV systems. Using long-term measurement records, the validity of the solar radiation and air temperature data was checked. The sensitivity of the obtained optimized HPV systems to variations in component prices, fuel prices, and economic rates was also analyzed. Based on multi-decade records for several locations in the Balkan region, the occurrence probability of solar radiation values was estimated. This was used for analysing the sensitivity of some HPV performances to the expected stochastic variations of the solar radiation values. (Author)
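    The genetic-algorithm machinery behind such sizing tools can be sketched compactly. This is a generic real-coded GA, not the thesis's 10-parameter formulation: the "cost" below is a hypothetical stand-in whose optimum (12 kW PV, 30 kWh battery) is made up, and the GA settings are illustrative defaults.

```python
import random

def genetic_minimize(cost, bounds, pop_size=40, gens=120, mut_sd=0.3):
    # Tiny real-coded genetic algorithm: truncation selection keeps the
    # better half (elitism), and children are produced by uniform crossover
    # of two elite parents plus Gaussian mutation, clipped to the bounds.
    random.seed(7)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=cost)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [ai if random.random() < 0.5 else bi
                     for ai, bi in zip(a, b)]
            child = [min(max(c + random.gauss(0, mut_sd), lo), hi)
                     for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# Hypothetical 2-parameter sizing problem: cost is lowest at a made-up
# optimum of 12 kW PV rated power and 30 kWh battery capacity.
cost = lambda p: (p[0] - 12.0) ** 2 + (p[1] - 30.0) ** 2
best = genetic_minimize(cost, [(0.0, 50.0), (0.0, 100.0)])
print([round(v, 1) for v in best])
```

    In a real sizing tool the cost function would run a year-long system simulation per candidate and return the net electricity cost, with load-coverage violations penalized.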

  2. Programming for Sparse Minimax Optimization

    DEFF Research Database (Denmark)

    Jonasson, K.; Madsen, Kaj

    1994-01-01

    We present an algorithm for nonlinear minimax optimization which is well suited for large and sparse problems. The method is based on trust regions and sequential linear programming. On each iteration, a linear minimax problem is solved for a basic step. If necessary, this is followed...... by the determination of a minimum norm corrective step based on a first-order Taylor approximation. No Hessian information needs to be stored. Global convergence is proved. This new method has been extensively tested and compared with other methods, including two well known codes for nonlinear programming...

  3. Enhanced capillary electrophoretic screening of Alzheimer based on direct apolipoprotein E genotyping and one-step multiplex PCR.

    Science.gov (United States)

    Woo, Nain; Kim, Su-Kang; Sun, Yucheng; Kang, Seong Ho

    2018-01-01

    Human apolipoprotein E (ApoE) is associated with high cholesterol levels, coronary artery disease, and especially Alzheimer's disease. In this study, we developed an ApoE genotyping and one-step multiplex polymerase chain reaction (PCR) based capillary electrophoresis (CE) method for the enhanced diagnosis of Alzheimer's. The primer mixture of ApoE genes enabled direct one-step multiplex PCR from whole blood without DNA purification. The combination of direct ApoE genotyping and one-step multiplex PCR minimized the risk of DNA loss or contamination due to the process of DNA purification. All amplified PCR products with different DNA lengths (112-, 253-, 308-, 444-, and 514-bp DNA) of the ApoE genes were analyzed within 2 min by an extended voltage programming (VP)-based CE under the optimal conditions. The extended VP-based CE method was at least 120-180 times faster than conventional slab gel electrophoresis methods. In particular, all amplified DNA fragments were detected in less than 10 PCR cycles using a laser-induced fluorescence detector. The detection limits of the ApoE genes were 6.4-62.0 pM, approximately 100-100,000 times more sensitive than previous Alzheimer's diagnosis methods. In addition, the combined one-step multiplex PCR and extended VP-based CE method was also successfully applied to the analysis of ApoE genotypes in Alzheimer's patients and normal samples and confirmed the distribution probability of allele frequencies. This combination of direct one-step multiplex PCR and an extended VP-based CE method should increase the diagnostic reliability of Alzheimer's with high sensitivity and short analysis time, even with direct use of whole blood. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Aerial robot intelligent control method based on back-stepping

    Science.gov (United States)

    Zhou, Jian; Xue, Qian

    2018-05-01

    The aerial robot is characterized by strong nonlinearity, high coupling, and parameter uncertainty; a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellar Model Articulation Controller neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and fix parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law has the desired attitude tracking performance and good robustness in the case of uncertainties and large errors in the model parameters.

  5. Optimization programs for reactor core fuel loading exhibiting reduced neutron leakage

    International Nuclear Information System (INIS)

    Darilek, P.

    1991-01-01

    The program MAXIM was developed for the optimization of the fuel loading of WWER-440 reactors. It enables the reactor core reactivity to be maximized by modifying the arrangement of the fuel assemblies. The procedure is divided into three steps. The first step includes the passage from the three-dimensional model of the reactor core to the two-dimensional model. In the second step, the solution to the problem is sought assuming that the multiplying properties, or the reactivity in the zones of the core, vary continuously. In the third step, parameters of actual fuel assemblies are inserted in the ''continuous'' solution obtained. Combined with the program PROPAL for a detailed refinement of the loading, the program MAXIM forms a basis for the development of programs for the optimization of fuel loading with burnable poisons. (Z.M.). 16 refs

  6. A framework for simultaneous aerodynamic design optimization in the presence of chaos

    Energy Technology Data Exchange (ETDEWEB)

    Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de [TU Kaiserslautern, Chair for Scientific Computing, Paul-Ehrlich-Straße 34, 67663 Kaiserslautern (Germany); Gauger, Nicolas R. [TU Kaiserslautern, Chair for Scientific Computing, Paul-Ehrlich-Straße 34, 67663 Kaiserslautern (Germany); Wang, Qiqi [Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States)

    2017-01-01

    Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.

  7. The Theory of Optimal Taxation

    DEFF Research Database (Denmark)

    Sørensen, Peter Birch

    The paper discusses the implications of optimal tax theory for the debates on uniform commodity taxation and neutral capital income taxation. While strong administrative and political economy arguments in favor of uniform and neutral taxation remain, recent advances in optimal tax theory suggest...... that the information needed to implement the differentiated taxation prescribed by optimal tax theory may be easier to obtain than previously believed. The paper also points to the strong similarity between optimal commodity tax rules and the rules for optimal source-based capital income taxation...

  8. The theory of optimal taxation

    DEFF Research Database (Denmark)

    Sørensen, Peter Birch

    2007-01-01

    The paper discusses the implications of optimal tax theory for the debates on uniform commodity taxation and neutral capital income taxation. While strong administrative and political economy arguments in favor of uniform and neutral taxation remain, recent advances in optimal tax theory suggest...... that the information needed to implement the differentiated taxation prescribed by optimal tax theory may be easier to obtain than previously believed. The paper also points to the strong similarity between optimal commodity tax rules and the rules for optimal source-based capital income taxation...

  9. System floorplanning optimization

    KAUST Repository

    Browning, David W.

    2012-12-01

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in', with the focus on how the system is perceived by the end-user. As a consequence, very little consideration is given to the interconnections or power of the devices within the system, with a mentality of 'just make it fit'. In this paper we discuss the challenges of Notebook system design and the steps by which system floor-planning tools and algorithms can be used to provide an automated method to optimize this process to ensure all required components most optimally fit inside the Notebook system. © 2012 IEEE.

  10. System floorplanning optimization

    KAUST Repository

    Browning, David W.

    2013-01-10

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in', with the focus on how the system is perceived by the end-user. As a consequence, very little consideration is given to the interconnections or power of the devices within the system, with a mentality of 'just make it fit'. In this paper we discuss the challenges of Notebook system design and the steps by which system floor-planning tools and algorithms can be used to provide an automated method to optimize this process to ensure all required components most optimally fit inside the Notebook system.

  11. Efficacy, Beliefs, and Investment in Step-Level Public Goods

    NARCIS (Netherlands)

    Dijkstra, J.; Oude Mulders, J.

    2014-01-01

    A central concept for understanding social dilemma behavior is the efficacy of an actor's cooperative behavior in terms of increasing group well-being. We report a decision and game theoretical analysis of efficacy in step-level public goods (SPGs). Previous research shows a positive relation between efficacy and contributions to SPGs and explains this relation by a purely motivational account.

  12. Well Field Management Using Multi-Objective Optimization

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine; Hendricks Franssen, H. J.; Bauer-Gottwein, Peter

    2013-01-01

    with infiltration basins, injection wells and abstraction wells. The two management objectives are to minimize the amount of water needed for infiltration and to minimize the risk of getting contaminated water into the drinking water wells. The management is subject to a daily demand fulfilment constraint. Two...... different optimization methods are tested. Constant scheduling where decision variables are held constant during the time of optimization, and sequential scheduling where the optimization is performed stepwise for daily time steps. The latter is developed to work in a real-time situation. Case study...

  13. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim

    2015-10-01

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.

  14. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    Science.gov (United States)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that strikes a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations) and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases the frequency with which processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes. This is because various computer architectures process commands differently. The test grid was 512×512. Using a 540×540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256×256 grid worked best. A Core2Duo computer preferred either a 1040×1040 (15 percent faster) or a 1008×1008 (30 percent faster) grid. 
There are many industries that
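    The pad-size search described above can be sketched as a hunt for the next "smooth" number, i.e. the smallest size at or above the raw grid size whose prime factors are all small. The helper below is a hypothetical illustration assuming a 7-smooth criterion; the paper's actual algorithm additionally trades small against large prime factors using measured run times:

```python
def next_smooth(n, primes=(2, 3, 5, 7)):
    """Smallest integer >= n whose prime factors all lie in `primes`.

    FFT libraries run fastest on such "smooth" sizes, so padding a grid
    up to the next smooth size can beat padding all the way to the next
    power of two.
    """
    while True:
        m = n
        for p in primes:
            while m % p == 0:
                m //= p
        if m == 1:
            return n
        n += 1

# A 511-row grid pads to 512 (a power of two), but a 513-row grid pads
# only to 525 = 3 * 5^2 * 7 instead of jumping all the way to 1024.
pad_a = next_smooth(511)   # -> 512
pad_b = next_smooth(513)   # -> 525
```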

  15. Finding optimal exact reducts

    KAUST Repository

    AbouEisha, Hassan M.

    2014-01-01

    The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts with minimum cardinality. This algorithm transforms the initial table to a decision table of a special kind, applies a set of simplification steps to this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. I present results of computer experiments for a collection of decision tables from the UCI ML Repository. For many of the tested tables, the simplification steps solved the problem.

  16. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    International Nuclear Information System (INIS)

    Tian, Zhen; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B.; Peng, Fei

    2015-01-01

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is

  17. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun; Jia, Xun, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Jiang, Steve B., E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu [Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States); Peng, Fei [Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)

    2015-06-15

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is
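    The CPU-side data layout described in both records above (full DDC matrix in COO format, split row-wise into per-angle-group blocks stored in compressed sparse row format) can be sketched in plain NumPy. This is a hypothetical illustration of the bookkeeping only, not the authors' CUDA implementation, and the even row-block split by beam angle is an assumption:

```python
import numpy as np

def coo_to_csr_blocks(rows, cols, vals, n_rows, n_groups):
    """Split a COO matrix row-wise into `n_groups` CSR blocks.

    Mimics, schematically, storing the full dose-deposition matrix on
    the CPU in coordinate-list form and handing each contiguous block
    of rows (e.g. one group of beam angles per GPU) over as a
    compressed-sparse-row submatrix (indptr, indices, data).
    """
    block = n_rows // n_groups
    blocks = []
    for g in range(n_groups):
        lo = g * block
        hi = (g + 1) * block if g < n_groups - 1 else n_rows
        mask = (rows >= lo) & (rows < hi)
        r, c, v = rows[mask] - lo, cols[mask], vals[mask]
        order = np.argsort(r, kind="stable")   # CSR needs row-sorted entries
        r, c, v = r[order], c[order], v[order]
        indptr = np.zeros(hi - lo + 1, dtype=int)
        np.add.at(indptr, r + 1, 1)            # count entries per local row
        indptr = np.cumsum(indptr)             # prefix-sum into row pointers
        blocks.append((indptr, c, v))
    return blocks

# Tiny 4x3 example split into two row blocks.
rows = np.array([0, 1, 2, 3])
cols = np.array([0, 1, 2, 0])
vals = np.array([1.0, 2.0, 3.0, 4.0])
blocks = coo_to_csr_blocks(rows, cols, vals, n_rows=4, n_groups=2)
```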

  18. Optimization with Extremal Dynamics

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Percus, Allon G.

    2001-01-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard discrete optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. Extremal optimization successively updates extremely undesirable variables of a single suboptimal solution, assigning them new, random values. Large fluctuations ensue, efficiently exploring many local optima. We use extremal optimization to elucidate the phase transition in the 3-coloring problem, and we provide independent confirmation of previously reported extrapolations for the ground-state energy of ±J spin glasses in d=3 and 4
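    The core update rule of extremal optimization, repeatedly replacing the most undesirable variable with a random value, can be sketched for the 3-coloring problem mentioned above. This is a toy illustration of the heuristic, not the authors' tuned tau-EO variant:

```python
import random

def eo_three_coloring(edges, n, steps=20000, seed=0):
    """Extremal-optimization sketch for graph 3-coloring: repeatedly pick
    a vertex with the most conflicting edges (the 'extremely undesirable'
    variable) and assign it a new random colour, remembering the best
    configuration seen so far."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    colour = [rng.randrange(3) for _ in range(n)]

    def conflicts(v):
        return sum(colour[v] == colour[u] for u in adj[v])

    best = list(colour)
    best_total = sum(conflicts(v) for v in range(n)) // 2
    for _ in range(steps):
        worst = max(range(n), key=conflicts)
        if conflicts(worst) == 0:
            break                         # proper colouring reached
        colour[worst] = rng.randrange(3)  # large fluctuations, no cooling
        total = sum(conflicts(v) for v in range(n)) // 2
        if total < best_total:
            best, best_total = list(colour), total
    return best, best_total

# A five-cycle is 3-colourable, so EO should drive the conflicts to zero.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
best, conflicts_left = eo_three_coloring(c5, 5)
```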

  19. Direct-aperture optimization applied to selection of beam orientations in intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Bedford, J L; Webb, S

    2007-01-01

    Direct-aperture optimization (DAO) was applied to iterative beam-orientation selection in intensity-modulated radiation therapy (IMRT), so as to ensure a realistic segmental treatment plan at each iteration. Nested optimization engines dealt separately with gantry angles, couch angles, collimator angles, segment shapes, segment weights and wedge angles. Each optimization engine performed a random search with successively narrowing step sizes. For optimization of segment shapes, the filtered backprojection (FBP) method was first used to determine desired fluence, the fluence map was segmented, and then constrained direct-aperture optimization was used thereafter. Segment shapes were fully optimized when a beam angle was perturbed, and minimally re-optimized otherwise. The algorithm was compared with a previously reported method using FBP alone at each orientation iteration. An example case consisting of a cylindrical phantom with a hemi-annular planning target volume (PTV) showed that for three-field plans, the method performed better than when using FBP alone, but for five or more fields, neither method provided much benefit over equally spaced beams. For a prostate case, improved bladder sparing was achieved through the use of the new algorithm. A plan for partial scalp treatment showed slightly improved PTV coverage and lower irradiated volume of brain with the new method compared to FBP alone. It is concluded that, although the method is computationally intensive and not suitable for searching large unconstrained regions of beam space, it can be used effectively in conjunction with prior class solutions to provide individually optimized IMRT treatment plans

  20. Dynamic optimization and robust explicit model predictive control of hydrogen storage tank

    KAUST Repository

    Panos, C.

    2010-09-01

    We present a general framework for the optimal design and control of a metal-hydride bed under hydrogen desorption operation. The framework features: (i) a detailed two-dimension dynamic process model, (ii) a design and operational dynamic optimization step, and (iii) an explicit/multi-parametric model predictive controller design step. For the controller design, a reduced order approximate model is obtained, based on which nominal and robust multi-parametric controllers are designed. © 2010 Elsevier Ltd.

  1. Dynamic optimization and robust explicit model predictive control of hydrogen storage tank

    KAUST Repository

    Panos, C.; Kouramas, K.I.; Georgiadis, M.C.; Pistikopoulos, E.N.

    2010-01-01

    We present a general framework for the optimal design and control of a metal-hydride bed under hydrogen desorption operation. The framework features: (i) a detailed two-dimension dynamic process model, (ii) a design and operational dynamic optimization step, and (iii) an explicit/multi-parametric model predictive controller design step. For the controller design, a reduced order approximate model is obtained, based on which nominal and robust multi-parametric controllers are designed. © 2010 Elsevier Ltd.

  2. Optimization of high free fatty acid reduction in mixed crude palm oils using circulation process through static mixer reactor and pilot-scale of two-step process

    International Nuclear Information System (INIS)

    Somnuk, Krit; Niseng, Suhdee; Prateepchaikul, Gumpon

    2014-01-01

    Highlights: • Reducing FFA in MCPO was circulated through a static mixer alone in the lab-scale. • Methanol and sulfuric acid were varied in the esterification reaction. • RSM was employed to optimize the acid-catalyzed esterification in lab-scale. • A 60 L pilot-scale was designed on the basis of simple operation and maintenance. - Abstract: High free fatty acid (FFA) reduction in mixed crude palm oil (MCPO) was performed with methanol (MeOH) and sulfuric acid (H₂SO₄) as acid catalyst using a circulation process through a static mixer reactor. In this study, response surface methodology (RSM) was adopted to optimize the acid value in esterified oil after the esterification process (first step) in lab-scale. The results showed that the acid value was reduced from 30 mg KOH g⁻¹ to 2 mg KOH g⁻¹ when 19.8 vol.% MeOH, 2.0 vol.% H₂SO₄, a reaction temperature of 60 °C, 40 L h⁻¹ of MCPO, a 50 min reaction time, and a 5 m long static mixer were used in the lab-scale. This recommended condition was used to develop the pilot-scale process, scaling up the FFA reduction from 5 L of MCPO in the lab-scale to 60 L of MCPO in the pilot-scale, designed on the basis of simple operation and maintenance. In the pilot-scale process, an acid value below 1 mg KOH g⁻¹ was achieved at a reaction time of 50 min. In the base-catalyzed transesterification (second step) of the pilot-scale process, 98.65 wt.% methyl ester purity was achieved using 20 vol.% MeOH, 8 g KOH L⁻¹ of oil, and a 60 min reaction time at 60 °C to produce biodiesel

  3. Multigrid technique and Optimized Schwarz method on block-structured grids with discontinuous interfaces

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Sørensen, Niels N.; Shen, Wen Zhong

    2013-01-01

    An Optimized Schwarz method using Robin boundary conditions for relaxation scheme is presented in the frame of Multigrid method on discontinuous grids. At each iteration the relaxation scheme is performed in two steps: one step with Dirichlet and another step with Robin boundary conditions at inn...

  4. Disturbance by optimal discrimination

    Science.gov (United States)

    Kawakubo, Ryûitirô; Koike, Tatsuhiko

    2018-03-01

    We discuss the disturbance caused by measurements which unambiguously discriminate between given candidate states. We prove that such an optimal measurement necessarily makes distinguishable states indistinguishable when the inconclusive outcome is obtained. The result was previously shown by Chefles [Phys. Lett. A 239, 339 (1998), 10.1016/S0375-9601(98)00064-4] under restrictions on the class of quantum measurements and on the definition of optimality. Our theorems remove these restrictions and are also applicable to infinitely many candidate states. Combining with our previous results, one can obtain concrete mathematical conditions for the resulting states. The method may have a wide variety of applications in contexts other than state discrimination.

  5. Iterative regularization in intensity-modulated radiation therapy optimization

    International Nuclear Information System (INIS)

    Carlsson, Fredrik; Forsgren, Anders

    2006-01-01

    A common way to solve intensity-modulated radiation therapy (IMRT) optimization problems is to use a beamlet-based approach. The approach is usually employed in a three-step manner: first a beamlet-weight optimization problem is solved, then the fluence profiles are converted into step-and-shoot segments, and finally postoptimization of the segment weights is performed. A drawback of beamlet-based approaches is that beamlet-weight optimization problems are ill-conditioned and have to be regularized in order to produce smooth fluence profiles that are suitable for conversion. The purpose of this paper is twofold: first, to explain the suitability of solving beamlet-based IMRT problems by a BFGS quasi-Newton sequential quadratic programming method with diagonal initial Hessian estimate, and second, to empirically show that beamlet-weight optimization problems should be solved in relatively few iterations when using this optimization method. The explanation of the suitability is based on viewing the optimization method as an iterative regularization method. In iterative regularization, the optimization problem is solved approximately by iterating long enough to obtain a solution close to the optimal one, but terminating before too much noise occurs. Iterative regularization requires an optimization method that initially proceeds in smooth directions and makes rapid initial progress. Solving ten beamlet-based IMRT problems with dose-volume objectives and bounds on the beamlet-weights, we find that the considered optimization method fulfills the requirements for performing iterative regularization. After segment-weight optimization, the treatments obtained using 35 beamlet-weight iterations outperform the treatments obtained using 100 beamlet-weight iterations, both in terms of objective value and of target uniformity. We conclude that iterating too long may in fact deteriorate the quality of the deliverable plan
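    The regularizing effect of terminating the optimization early can be illustrated with plain gradient (Landweber) iterations on a synthetic ill-conditioned least-squares problem. This is a hypothetical stand-in, not the BFGS quasi-Newton SQP method the paper analyses; the problem sizes and noise level are arbitrary:

```python
import numpy as np

# Ill-conditioned toy system standing in for a beamlet-weight problem:
# geometrically decaying singular values and noisy "dose" data.
rng = np.random.default_rng(0)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)                  # decaying singular spectrum
A = U @ (s[:, None] * V.T)               # A = U diag(s) V^T
x_true = V[:, 0]                         # smooth solution in the leading mode
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# Landweber iteration: an archetypal iterative regularization method.
# With step <= 1/sigma_max^2 the residual decreases monotonically, while
# the error to x_true typically dips early and then picks up noise, so
# stopping after relatively few iterations acts as regularization.
step = 1.0 / s.max() ** 2
x = np.zeros(n)
residuals = []
for _ in range(200):
    r = A @ x - b
    residuals.append(float(np.linalg.norm(r)))
    x -= step * (A.T @ r)
```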

  6. 3D Model Optimization of Four-Facet Drill for 3D Drilling Simulation

    Directory of Open Access Journals (Sweden)

    Buranský Ivan

    2016-09-01

    The article is focused on optimization of a four-facet drill for 3D drilling numerical modelling. For the optimization, a reverse engineering process using PowerShape software was employed. The design of the four-facet drill was created in NumrotoPlus software. The modified 3D model of the drill was used in the numerical analysis of cutting forces. Verification of the accuracy of the 3D models for reverse engineering was implemented using colour deviation maps. The CAD model was in the STEP format, which is ideal for simulation software because STEP describes a solid model: the simulation software automatically splits the 3D model into finite elements. The STEP model was therefore more suitable than the STL model.

  7. Aspects of the Optimization on the In-Service Inspection

    International Nuclear Information System (INIS)

    Korosec, D.; Vojvodic Tuma, J.

    2002-01-01

    In the present paper, aspects of optimizing In-Service Inspection (ISI) are discussed. The Slovenian Nuclear Safety Administration (SNSA) and its authorized organization for ISI activities, the Institute of Metals and Technologies, are permanently involved in the ISI processes of the Krsko nuclear power plant (NPP). Based on previous experience with ISI activities, evaluation of the results and review of the new ISI program, the decision was made to improve current regulatory and professional practice, that is, to optimize the ISI evaluation process as a whole. Traditional criteria, standards and practice give a good foundation for implementing improvements. Improvements can be made by adding broader knowledge about the safety-important components of the systems to the basic practice. It is necessary to identify the conditions of the safety-important components, such as realistic stress and fatigue conditions, changes in material properties due to ageing processes, temperature cycling effects, characterization of existing flaws in the light of previous detection and the examination technique used, and the effect of measurement accuracy on the results. In addition, a risk assessment and evaluation of the whole ISI should be performed, based on a prior assessment of structural element failure probability. Probabilistic risk assessment is one of the most powerful tools in ISI optimization. Some basic work in the field of risk-informed methods related to nuclear safety components has already been done. Based on reference documentation, the most important steps in risk-informed ISI are discussed: scope definition, consequence evaluation, failure probability estimation, risk evaluation, non-destructive examination method selection and possibilities of implementation, monitoring and feedback.
Recent experience on the ISI

  8. First steps towards geometry optimization for Spectrometer Straw Tracker of SHiP detector

    CERN Document Server

    Solovev, Vladimir

    2017-01-01

    This report contains details of CERN Summer Student project which was performed for SHiP experiment (Search for Hidden Particles). The main aim of the project is optimization of Spectrometer Straw Tracker (SST) geometry implemented in FairSHiP simulation program.

  9. Topology optimized permanent magnet systems

    Science.gov (United States)

    Bjørk, R.; Bahl, C. R. H.; Insinga, A. R.

    2017-09-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0.472 is reached, which is an increase of 100% compared to a previous optimized design.

  10. Step dynamics and terrace-width distribution on flame-annealed gold films: The effect of step-step interaction

    International Nuclear Information System (INIS)

    Shimoni, Nira; Ayal, Shai; Millo, Oded

    2000-01-01

    Dynamics of atomic steps and the terrace-width distribution within step bunches on flame-annealed gold films are studied using scanning tunneling microscopy. The distribution is narrower than commonly observed for vicinal planes and has a Gaussian shape, indicating a short-range repulsive interaction between the steps, with an apparently large interaction constant. The dynamics of the atomic steps, on the other hand, appear to be influenced, in addition to these short-range interactions, also by a longer-range attraction of steps towards step bunches. Both types of interactions promote self-ordering of terrace structures on the surface. When current is driven through the films a step-fingering instability sets in, reminiscent of the Bales-Zangwill instability

  11. Traffic safety and step-by-step driving licence for young people

    DEFF Research Database (Denmark)

    Tønning, Charlotte; Agerholm, Niels

    2017-01-01

    Young novice car drivers are much more accident-prone than other drivers - up to 10 times that of their parents' generation. A central solution to improve traffic safety for this group is implementation of a step-by-step driving licence. A number of countries have introduced such schemes, and this study presents a review of the safety effects of step-by-step driving licence schemes. Most of the investigated schemes consist of a step-by-step driving licence with Step 1) various tests and education, Step 2) a period where driving is only allowed together with an experienced driver, and Step 3) driving without a companion allowed but with various restrictions and, in some cases, additional driving education and tests. In general, a step-by-step driving licence improves traffic safety even though the young people are permitted to drive a car earlier on. The effects from driving with an experienced driver vary…

  12. Mechanical and histological characterization of the abdominal muscle. A previous step to modelling hernia surgery.

    Science.gov (United States)

    Hernández, B; Peña, E; Pascual, G; Rodríguez, M; Calvo, B; Doblaré, M; Bellón, J M

    2011-04-01

    The aims of this study are to experimentally characterize the passive elastic behaviour of the rabbit abdominal wall and to develop a mechanical constitutive law which accurately reproduces the obtained experimental results. For this purpose, tissue samples from New Zealand White rabbits (2150 ± 50 g) were mechanically tested in vitro. Mechanical tests, consisting of uniaxial loading on tissue samples oriented along the cranio-caudal and the perpendicular directions, respectively, revealed the anisotropic non-linear mechanical behaviour of the abdominal tissues. Experiments were performed considering the composite muscle (including the external oblique (EO), internal oblique (IO) and transverse abdominis (TA) muscle layers), as well as separated muscle layers (i.e., the external oblique, and the bilayer formed by the internal oblique and transverse abdominis). Both the EO muscle layer and the IO-TA bilayer demonstrated a stiffer behaviour along the direction transverse to the muscle fibres than along the longitudinal one. The fibre arrangement was measured by means of a histological study which confirmed that collagen fibres are mainly responsible for the passive mechanical strength and stiffness. Furthermore, the degree of anisotropy of the abdominal composite muscle turned out to be less pronounced than that obtained while studying the EO and IO-TA separately. Moreover, a phenomenological constitutive law was used to capture the measured experimental curves. A Levenberg-Marquardt optimization algorithm was used to fit the model constants to reproduce the experimental curves. Copyright © 2010 Elsevier Ltd. All rights reserved.
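    The Levenberg-Marquardt fit mentioned above can be sketched with a minimal hand-rolled loop. Everything here is hypothetical: the one-term exponential stress-stretch law, the synthetic data, and the starting point are illustrative assumptions, not the paper's constitutive model or data:

```python
import numpy as np

def lm_fit(f, jac, p0, x, y, iters=200, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: accept a trial step only if the
    sum-of-squares cost improves, otherwise raise the damping `lam`."""
    p = np.asarray(p0, dtype=float)
    cost = float(np.sum((f(x, p) - y) ** 2))
    for _ in range(iters):
        r = f(x, p) - y
        J = jac(x, p)
        g = J.T @ r                       # gradient of the cost (up to 2x)
        H = J.T @ J                       # Gauss-Newton Hessian approximation
        dp = np.linalg.solve(H + lam * np.eye(len(p)), -g)
        new_cost = float(np.sum((f(x, p + dp) - y) ** 2))
        if new_cost < cost:
            p, cost, lam = p + dp, new_cost, lam * 0.5
        else:
            lam *= 10.0
    return p, cost

# Hypothetical exponential law sigma = a * (exp(b * strain) - 1).
f = lambda x, p: p[0] * (np.exp(p[1] * x) - 1.0)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x) - 1.0,
                                    p[0] * x * np.exp(p[1] * x)])
x = np.linspace(0.0, 1.0, 30)
y = f(x, np.array([1.0, 2.0]))            # noise-free synthetic "experiment"
p, cost = lm_fit(f, jac, [0.5, 1.0], x, y)
```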

  13. Computer program for optimal BWR control rod programming

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Carmody, J.M.

    1995-01-01

    A fully automated computer program has been developed for designing optimal control rod (CR) patterns for boiling water reactors (BWRs). The new program, called OCTOPUS-3, is based on the OCTOPUS code and employs SIMULATE-3 (Ref. 2) for the analysis. Three aspects of OCTOPUS-3 make it successful for use at PECO Energy: it incorporates a new feasibility algorithm that makes the CR design meet all constraints, it has been coupled to a Bourne shell program (Ref. 3) to allow the user to run the code interactively without the need for a manual, and it develops a low axial peak to extend the cycle. For PECO Energy Co.'s Limerick units it increased the energy output by 1 to 2% over the traditional PECO Energy design. The objective of the optimization in OCTOPUS-3 is to approximate a very low axially peaked target power distribution while maintaining criticality, keeping the nodal and assembly peaks below the allowed maximum, and meeting the other constraints. The user-specified input for each exposure point includes: the CR groups allowed to move, the target k_eff, and the amount of core flow. The OCTOPUS-3 code uses the CR pattern from the previous step as the initial guess unless indicated otherwise

  14. Low-dose computed tomography image restoration using previous normal-dose scan

    International Nuclear Information System (INIS)

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-01-01

    Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of a significant concern to patients and operators. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (or delivering less x-ray energy to the body) as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and the noise would propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a normal-dose high diagnostic CT image scanned previously may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further exploits ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. 
The gain by the use
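    The prior-induced nonlocal means idea described above (weights computed from the previous normal-dose scan, applied to the current low-dose data) can be sketched in one dimension. This is a toy illustration under assumed parameters, not the paper's ndiNLM algorithm with its adaptive smoothing parameter:

```python
import numpy as np

def ndi_nlm_1d(low_dose, prior, h=0.3, patch=3, search=10):
    """Toy 1-D sketch of prior-induced nonlocal means: similarity weights
    are computed from patches of the previous normal-dose signal `prior`
    and used to average samples of the current low-dose signal."""
    n = len(low_dose)
    pad = patch // 2
    prior_p = np.pad(prior, pad, mode="edge")
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        pi = prior_p[i:i + patch]               # prior patch around i
        ws, vs = [], []
        for j in range(lo, hi):
            pj = prior_p[j:j + patch]           # candidate prior patch
            d2 = float(np.mean((pi - pj) ** 2))
            ws.append(np.exp(-d2 / h ** 2))     # similarity from the prior
            vs.append(low_dose[j])              # value from the low-dose scan
        ws = np.array(ws)
        out[i] = float(np.dot(ws, vs) / ws.sum())
    return out

rng = np.random.default_rng(1)
clean = np.where(np.arange(64) < 32, 0.0, 1.0)   # piecewise-constant "anatomy"
prior = clean + 0.02 * rng.standard_normal(64)   # normal-dose scan: low noise
noisy = clean + 0.30 * rng.standard_normal(64)   # low-dose scan: heavy noise
restored = ndi_nlm_1d(noisy, prior)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_restored = float(np.mean((restored - clean) ** 2))
```

Because the weights come from the low-noise prior, samples across the edge get nearly zero weight, so the edge survives the averaging.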

  15. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented and applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the workload of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large
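    The discard-and-retry idea, take an aggressive Δt, throw the step away if the estimated error is too large, and grow Δt again after successes, can be sketched on a scalar ODE. This is a hypothetical illustration with explicit Euler and step doubling, not the report's ADI scheme for the 2-D Fokker-Planck equation:

```python
import math

def integrate_adaptive(f, y0, t_end, dt0=0.5, tol=1e-4):
    """Aggressive time-step selection by step doubling: compare one full
    explicit Euler step against two half steps; if they disagree by more
    than `tol`, discard the step and halve dt, otherwise accept the more
    accurate two-half-step value and try a larger dt next time."""
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        full = y + dt * f(t, y)
        half = y + 0.5 * dt * f(t, y)
        two = half + 0.5 * dt * f(t + 0.5 * dt, half)
        if abs(two - full) > tol:
            dt *= 0.5            # dt too large: discard the step and retry
            continue
        t, y = t + dt, two       # accept
        dt *= 2.0                # aggressive growth after a success
    return y

# Decaying test problem y' = -y, y(0) = 1, integrated to t = 1.
y = integrate_adaptive(lambda t, y: -y, 1.0, 1.0)
```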

  16. A step-defined sedentary lifestyle index: <5000 steps/day.

    Science.gov (United States)

    Tudor-Locke, Catrine; Craig, Cora L; Thyfault, John P; Spence, John C

    2013-02-01

    Step counting (using pedometers or accelerometers) is widely accepted by researchers, practitioners, and the general public. Given the mounting evidence of the link between low steps/day and time spent in sedentary behaviours, how few steps/day some populations actually perform, and the growing interest in the potentially deleterious effects of excessive sedentary behaviours on health, an emerging question is "How many steps/day are too few?" This review examines the utility, appropriateness, and limitations of a reoccurring candidate for a step-defined sedentary lifestyle index: <5000 steps/day. Moving from higher (10 000 steps/day) to lower (<5000 steps/day) thresholds, a step-defined sedentary lifestyle index for adults is appropriate for researchers and practitioners and for communicating with the general public. There is little evidence to advocate any specific value indicative of a step-defined sedentary lifestyle index in children and adolescents.

  17. Optimization procedures in mammography: First results

    International Nuclear Information System (INIS)

    Espana Lopez, M. L.; Marcos de Paz, L.; Martin Rincon, C.; Jerez Sainz, I.; Lopez Franco, M. P.

    2001-01-01

    Optimization procedures in mammography using equipment with a single target/filter combination can address such diverse factors as target optical density, exposure technique factors, screen-film combination or processing cycle, in order to obtain an image adequate for diagnosis with an acceptable risk-benefit balance. Several studies show an increase in the standardised detection rate of invasive carcinomas with an increase in optical density, among other factors. In our hospital an optimization process has been established and, as a previous step, the target optical density has been increased to 1.4 OD. The aim of this paper is to assess the impact of the optical density variation both on image quality and on the entrance surface dose and the average glandular dose, comparing them with the results obtained in a previous study. The study was carried out on a sample of 106 patients, with an average age of 53.4 years, considering 212 clinical images corresponding to the two projections of the same breast with an average compressed thickness of 4.86 cm. An increase of 16.6% in the entrance surface dose and 18% in the average glandular dose was recorded. All the clinical images were evaluated by the physician as adequate for diagnosis. (Author) 16 refs

  18. Control parameter optimization for AP1000 reactor using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Wang, Pengfei; Wan, Jiashuang; Luo, Run; Zhao, Fuyu; Wei, Xinyu

    2016-01-01

    Highlights: • The PSO algorithm is applied for control parameter optimization of AP1000 reactor. • Key parameters of the MSHIM control system are optimized. • Optimization results are evaluated through simulations and quantitative analysis. - Abstract: The advanced mechanical shim (MSHIM) core control strategy is implemented in the AP1000 reactor for core reactivity and axial power distribution control simultaneously. The MSHIM core control system can provide superior reactor control capabilities via automatic rod control only. This enables the AP1000 to perform power change operations automatically without soluble boron concentration adjustments. In this paper, the Particle Swarm Optimization (PSO) algorithm has been applied for the parameter optimization of the MSHIM control system to acquire better reactor control performance for AP1000. System requirements such as power control performance, control bank movement and AO control constraints are reflected in the objective function. Dynamic simulations are performed based on an AP1000 reactor simulation platform in each iteration of the optimization process to calculate the fitness values of particles in the swarm. The simulation platform is developed in the Matlab/Simulink environment with implementation of a nodal core model and the MSHIM control strategy. Based on the simulation platform, the typical 10% step load decrease transient from 100% to 90% full power is simulated and the objective function used for control parameter tuning is directly incorporated in the simulation results. With successful implementation of the PSO algorithm in the control parameter optimization of the AP1000 reactor, four key parameters of the MSHIM control system are optimized. It has been demonstrated by the calculation results that the optimized MSHIM control system parameters can improve the reactor power control capability and reduce the control rod movement without compromising AO control. Therefore, the PSO based optimization
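    The PSO loop used for the parameter tuning above can be sketched generically. In the paper the objective wraps a full MSHIM closed-loop simulation; here a cheap stand-in benchmark function is used, and the swarm parameters (inertia, acceleration constants, bounds) are conventional assumptions, not the paper's settings:

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Plain global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    g_f = float(pbest_f.min())
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.min() < g_f:
            g_f = float(pbest_f.min())
            g = pbest[pbest_f.argmin()].copy()
    return g, g_f

# Stand-in objective: squared distance from a hypothetical "ideal"
# two-gain vector (1, 1), playing the role of the simulation-based fitness.
best, best_f = pso(lambda p: float(np.sum((p - 1.0) ** 2)))
```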

  19. Topology optimization of radio frequency and microwave structures

    DEFF Research Database (Denmark)

    Aage, Niels

    in this thesis, concerns the optimization of devices for wireless energy transfer via strongly coupled magnetic resonators. A single design problem is considered to demonstrate proof of concept. The resulting design illustrates the possibilities of the optimization method, but also reveals its numerical...... of efficient antennas and power supplies. A topology optimization methodology is proposed based on a design parameterization which incorporates the skin effect. The numerical optimization procedure is implemented in Matlab, for 2D problems, and in a parallel C++ optimization framework, for 3D design problems...... formalism, a two step optimization procedure is presented. This scheme is applied to the design and optimization of a hemispherical sub-wavelength antenna. The optimized antenna configuration displayed a ratio of radiated power to input power in excess of 99 %. The third, and last, design problem considered...

  20. Gradient Optimization for Analytic conTrols - GOAT

    Science.gov (United States)

    Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank

    Quantum optimal control has become a necessary step in a number of studies in the quantum realm. Recent experimental advances have shown that superconducting qubits can be controlled with impressive accuracy. However, most of the standard optimal control algorithms are not designed to manage such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise-constant approximation of the control pulse used by standard algorithms, which allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g. the absence of backpropagation and a natural route to optimizing the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.

  1. Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.

    Science.gov (United States)

    Kopp, O; Markert, S; Tornow, R P

    2002-01-01

    To develop and test a procedure to measure and compare the light sensitivity, linearity and step response of electronic cameras. The pixel value (PV) of digitized images as a function of light intensity (I) was measured. The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There are only small differences in linearity. The step response depends on the procedure of integration and readout.
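The sensitivity and linearity measures described above reduce to a least-squares slope and a Pearson correlation coefficient of the PV(I) data. A minimal sketch with synthetic data (the function and variable names are our own, not the paper's):

```python
import math

def sensitivity_and_linearity(intensity, pixel_value):
    """Least-squares slope (sensitivity) and Pearson r (linearity) of PV(I)."""
    n = len(intensity)
    mi = sum(intensity) / n
    mp = sum(pixel_value) / n
    sxy = sum((i - mi) * (p - mp) for i, p in zip(intensity, pixel_value))
    sxx = sum((i - mi) ** 2 for i in intensity)
    syy = sum((p - mp) ** 2 for p in pixel_value)
    slope = sxy / sxx                    # sensitivity: pixel value per unit intensity
    r = sxy / math.sqrt(sxx * syy)       # linearity: 1.0 means perfectly linear
    return slope, r

# Synthetic, perfectly linear camera response: PV = 2.5 * I + 10
I = [0, 10, 20, 30, 40, 50]
PV = [2.5 * i + 10 for i in I]
slope, r = sensitivity_and_linearity(I, PV)  # slope -> 2.5, r -> 1.0
```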

  2. Initiating statistical maintenance optimization

    International Nuclear Information System (INIS)

    Doyle, E. Kevin; Tuomi, Vesa; Rowley, Ian

    2007-01-01

    Since the 1980s, maintenance optimization has been centered around various formulations of Reliability Centered Maintenance (RCM). Several such optimization techniques have been implemented at the Bruce Nuclear Station. Further cost refinement of the station's preventive maintenance strategy includes evaluation of statistical optimization techniques. A review of successful pilot efforts in this direction is provided, as well as initial work with graphical analysis. The present situation regarding data sourcing, the principal impediment to the use of stochastic methods in previous years, is discussed. The use of Crow/AMSAA (Army Materiel Systems Analysis Activity) plots is demonstrated from the point of view of justifying expenditures in optimization efforts. (author)

  3. A simple method to optimize HMC performance

    CERN Document Server

    Bussone, Andrea; Drach, Vincent; Hansen, Martin; Hietanen, Ari; Rantaharju, Jarno; Pica, Claudio

    2016-01-01

    We present a practical strategy to optimize a set of Hybrid Monte Carlo parameters in simulations of QCD and QCD-like theories. We specialize to the case of mass-preconditioning, with multiple time-step Omelyan integrators. Starting from properties of the shadow Hamiltonian we show how the optimal setup for the integrator can be chosen once the forces and their variances are measured, assuming that those only depend on the mass-preconditioning parameter.

  4. Astronomical sketching a step-by-step introduction

    CERN Document Server

    Handy, Richard; Perez, Jeremy; Rix, Erika; Robbins, Sol

    2007-01-01

    This book presents the amateur with fine examples of astronomical sketches and step-by-step tutorials in each medium, from pencil to computer graphics programs. This unique book can teach almost anyone to create beautiful sketches of celestial objects.

  5. Optimization problem in quantum cryptography

    International Nuclear Information System (INIS)

    Brandt, Howard E

    2003-01-01

    A complete optimization was recently performed, yielding the maximum information gain by a general unitary entangling probe in the four-state protocol of quantum cryptography. A larger set of optimum probe parameters was found than was known previously from an incomplete optimization. In the present work, a detailed comparison is made between the complete and incomplete optimizations. Also, a new set of optimum probe parameters is identified for the four-state protocol

  6. Control Software for Piezo Stepping Actuators

    Science.gov (United States)

    Shields, Joel F.

    2013-01-01

    A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence time for small- and medium-sized reference positions (less than 200 microns) to within 10 nanometers can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.
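The combination of feedback regulation with on-line estimation of the scale factor and brake disturbance can be illustrated with a toy one-dimensional model. This is a hedged sketch, not the SIM flight code: the plant model and all numbers are invented, and the estimator is a simple normalized-LMS update over the regressor (steps, 1):

```python
def regulate(ref, true_scale=0.9, true_brake=0.05, tol=1e-3, max_moves=200):
    """Drive the runner to `ref` while estimating the step scale factor and
    the per-move brake disturbance on-line (toy plant, invented numbers)."""
    pos = 0.0
    scale_est, dist_est = 1.0, 0.0        # initial parameter estimates
    for _ in range(max_moves):
        err = ref - pos
        if abs(err) < tol:
            break
        # feedforward: command the step count that should realize `err`
        # given the current estimates, pre-canceling the brake disturbance
        steps = (err - dist_est) / scale_est
        moved = true_scale * steps + true_brake  # what the plant actually does
        pos += moved
        # normalized-LMS update of (scale_est, dist_est) from the observed move
        pred = scale_est * steps + dist_est
        e = moved - pred
        norm = steps * steps + 1.0
        scale_est += 0.8 * e * steps / norm
        dist_est += 0.8 * e / norm
    return pos

final_pos = regulate(1.0)
```

The estimate pair converges enough within a handful of moves that the residual error drops below the tolerance, mirroring the real controller's active cancellation of the brake disturbance.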

  7. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    OpenAIRE

    Zhang, Xiangsheng; Pan, Feng

    2015-01-01

    Aimed at parameter optimization in support vector machines (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm which has better global searching ability. The algorithm includes detecting and handling local convergence and exhibits a strong ability to avoid being trapped in local minima. The essential steps of the method are shown. Simulation experiments demonstrate the effective...

  8. Self-triggered assistive stimulus training improves step initiation in persons with Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Creath Robert A

    2013-01-01

    Background: Prior studies demonstrated that hesitation-prone persons with Parkinson's disease (PDs) acutely improve step initiation using a novel self-triggered stimulus that enhances lateral weight shift prior to step onset. PDs showed reduced anticipatory postural adjustment (APA) durations, earlier step onsets, and faster first-step speed immediately following stimulus exposure. Objective: This study investigated the effects of long-term stimulus exposure. Methods: Two groups of hesitation-prone subjects with Parkinson's disease participated in a 6-week step-initiation training program involving one of two stimulus conditions: (1) Drop: the stance-side support surface was lowered quickly (1.5 cm); (2) Vibration: a short vibration (100 ms) was applied beneath the stance-side support surface. Stimuli were self-triggered by a 5% reduction in vertical force under the stance foot during the APA. Testing was at baseline, immediately post-training, and 6 weeks post-training. Measurements included timing and magnitude of ground reaction forces, and step speed and length. Results: Both groups improved their APA force modulation after training. Contrary to previous results, neither group showed reduced APA durations or earlier step onset times. The vibration group showed a 55% increase in step speed and a 39% increase in step length, which were retained 6 weeks post-training. The drop group showed no stepping-performance improvements. Conclusions: The acute sensitivity to the quickness-enhancing effects of stimulus exposure demonstrated in previous studies was supplanted by improved force modulation following prolonged stimulus exposure. The results suggest a potential approach to reduce the severity of start hesitation in PDs, but further study is needed to understand the relationship between short- and long-term effects of stimulus exposure.

  9. Studying the varied shapes of gold clusters by an elegant optimization algorithm that hybridizes the density functional tight-binding theory and the density functional theory

    Science.gov (United States)

    Yen, Tsung-Wen; Lim, Thong-Leng; Yoon, Tiem-Leong; Lai, S. K.

    2017-11-01

    We combined a new parametrized density functional tight-binding (DFTB) theory (Fihey et al. 2015) with an unbiased modified basin hopping (MBH) optimization algorithm (Yen and Lai 2015) and applied it to calculate the lowest energy structures of Au clusters. From the calculated topologies and their conformational changes, we find that this DFTB/MBH method is a necessary procedure for a systematic study of the structural development of Au clusters but is somewhat insufficient for a quantitative study. As a result, we propose an extended hybridized algorithm that proceeds in two steps. In the first step, the DFTB theory is employed to calculate the total energy of the cluster; this step (running DFTB/MBH optimization for a given number of Monte Carlo steps) is meant to efficiently bring the Au cluster near the region of the lowest energy minimum, since the cluster as a whole has explicitly considered the interactions of valence electrons with ions, albeit semi-quantitatively. Then, in the second step, the energy-minimum search continues with the energy function of the DFTB theory replaced by one calculated in the full density functional theory (DFT). In these subsequent calculations, we couple the DFT energy with the MBH strategy and proceed with the DFT/MBH optimization until the lowest energy value is found. We checked that this extended hybridized algorithm successfully predicts the twisted pyramidal structure for the Au40 cluster and correctly confirms the linear shape of C8, which our previous DFTB/MBH method failed to do. Perhaps more remarkable is the topological growth of Aun: it changes from planar (n = 3-11) → an oblate-like cage (n = 12-15) → a hollow cage (n = 16-18) and finally a pyramidal-like cage (n = 19, 20). These varied forms of the clusters' shapes are consistent with those reported in the literature.

  10. Application of stepping motor

    International Nuclear Information System (INIS)

    1980-10-01

    This book, divided into three parts, is about the practical use of stepping motors. The first part has six chapters, covering the stepping motor, classification of stepping motors, basic theory of stepping motors, characteristics and basic terminology, types and characteristics of hybrid stepping motors, and basic control of stepping motors. The second part deals with the application of stepping motors: control hardware, control by microcomputer, and control software. The last part covers choice of a stepping motor system, examples of stepping motors, measurement of stepping motors, and practical cases of stepping motor application.

  11. Internship guide : Work placements step by step

    NARCIS (Netherlands)

    Haag, Esther

    2013-01-01

    Internship Guide: Work Placements Step by Step has been written from the practical perspective of a placement coordinator. This book addresses the following questions : what problems do students encounter when they start thinking about the jobs their degree programme prepares them for? How do you

  12. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The challenge in a multimodal optimization problem is to find the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it suffers from premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares favourably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the shortcomings of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
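The two-step scheme is easy to sketch. Below, a bare-bones random sampling phase stands in for the full ABC food-source search (a deliberate simplification), and SciPy's BFGS implementation handles the local refinement; the 1-D test function has many local minima, with its global minimum near x ≈ -1.306:

```python
import math
import random
from scipy.optimize import minimize

def multimodal(x):
    # 1-D test function with many local minima; global minimum near x = -1.306
    return x[0] ** 2 + 10.0 * math.sin(x[0])

# Step 1: global exploration (a crude stand-in for the ABC search phase)
rng = random.Random(42)
candidates = [[rng.uniform(-10.0, 10.0)] for _ in range(200)]
x0 = min(candidates, key=multimodal)

# Step 2: local refinement of the best point found so far with BFGS
res = minimize(multimodal, x0, method="BFGS")
```

Because the global phase only has to land in the right basin, the expensive high-accuracy work is left entirely to the local BFGS step.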

  13. Short-Term Wind Speed Forecasting Using Support Vector Regression Optimized by Cuckoo Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2015-01-01

    This paper develops an effective intelligent model to forecast short-term wind speed series. A hybrid forecasting technique is proposed based on recurrence plot (RP) analysis and optimized support vector regression (SVR). Wind, caused by the interaction of meteorological systems, is extremely unsteady and difficult to forecast. To understand the wind system, the wind speed series is analyzed using RP. Then the SVR model is employed to forecast wind speed, in which the input variables are selected by RP, and two crucial parameters, the penalty factor and the gamma of the RBF kernel function, are optimized by various optimization algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), and the cuckoo optimization algorithm (COA). Finally, the optimized SVR models, COA-SVR, PSO-SVR, and GA-SVR, are evaluated based on several criteria and a hypothesis test. The experimental results show that (1) RP analysis reveals that wind speed is predictable on a short-term time scale, (2) the performance of the COA-SVR model is superior to that of the PSO-SVR and GA-SVR methods, especially for the jumping samplings, and (3) the COA-SVR method is statistically robust in multi-step-ahead prediction and can be applied to practical wind farm applications.

  14. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    Science.gov (United States)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
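The payoff of hierarchical (block) time stepping can be illustrated by counting force evaluations. In the sketch below, each particle is assigned the largest power-of-two subdivision of the base step that does not exceed its required time step; the particle counts and time-step values are illustrative, not GOTHIC's actual scheme:

```python
def block_timesteps(required_dt, dt_max=1.0, levels=8):
    """Assign each particle the largest power-of-two subdivision of dt_max
    that does not exceed its required time step (block/hierarchical scheme)."""
    assigned = []
    for dt in required_dt:
        level = 0
        while dt_max / (2 ** level) > dt and level < levels - 1:
            level += 1
        assigned.append(level)
    return assigned

def force_evaluations(levels_assigned):
    """Force evaluations over one base step: a level-L particle is kicked 2**L times."""
    return sum(2 ** lvl for lvl in levels_assigned)

# Three slow particles and one fast one (illustrative numbers).
req = [1.0, 0.9, 0.6, 0.07]
levels = block_timesteps(req)            # -> [0, 1, 1, 4]
hier = force_evaluations(levels)         # 1 + 2 + 2 + 16 = 21
shared = len(req) * 2 ** max(levels)     # everyone at the smallest step: 4 * 16 = 64
```

The shared scheme forces every particle onto the smallest step, so the saving grows with the spread of required time steps, which is why realistic galaxy models benefit so strongly.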

  15. Two-step sintering of ultrafine-grained barium cerate proton conducting ceramics

    International Nuclear Information System (INIS)

    Wang, Siwei; Zhang, Lei; Zhang, Lingling; Brinkman, Kyle; Chen, Fanglin

    2013-01-01

    Ultrafine-grained dense BaZr0.1Ce0.7Y0.1Yb0.1O3−δ (BZCYYb) ceramics have been successfully prepared via a two-step sintering method. A co-precipitation method was adopted to prepare nano-sized BZCYYb precursors with an average particle size of 30 nm. By controlling the sintering profile, an average grain size of 184 nm was obtained for dense BZCYYb ceramics via the two-step sintering method, compared to 445 nm for the conventionally sintered samples. The two-step sintered BZCYYb samples showed fewer impurities and an enhanced electrical conductivity compared with the conventionally sintered ones. Further, the two-step sintering method was applied to fabricate anode-supported solid oxide fuel cells (SOFCs) using BZCYYb as the electrolyte, resulting in dense ultrafine-grained electrolyte membranes and porous anode substrates with fine particles. Due to the reduced ohmic as well as polarization resistances, the maximum power output of the cells fabricated by the two-step sintering method reached 349 mW cm−2 at 700 °C, significantly improved from 172 mW cm−2 for the conventionally sintered cells, suggesting that the two-step sintering method is very promising for optimizing the microstructure and thus enhancing the electrochemical performance of barium cerate-based proton-conducting SOFCs.

  16. Microsoft Office professional 2010 step by step

    CERN Document Server

    Cox, Joyce; Frye, Curtis

    2011-01-01

    Teach yourself exactly what you need to know about using Office Professional 2010-one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: Create attractive documents, publications, and spreadsheetsManage your e-mail, calendar, meetings, and communicationsPut your business data to workDevelop and deliver great presentationsOrganize your ideas and notes in one placeConnect, share, and accom

  17. Design and Optimization of Tube Type Interior Permanent Magnets Generator for Free Piston Applications

    Directory of Open Access Journals (Sweden)

    Serdal ARSLAN

    2017-05-01

    In this study, the design and optimization of a generator to be used in free piston applications was carried out. In order to supply the required initial force, an IPM (interior permanent magnet) cavity tube type linear generator was selected. Basic dimensioning of the generator was performed using analytical equations; dimensioning, analysis and optimization of the generator were then realized using Ansys-Maxwell. The effects of the basic design variables (pole step ratio, cavity step ratio, inner-to-outer diameter ratio, primary final length, air gap) on pinking force were also examined by parametric analyses. Among these variables, the cavity step ratio, inner-to-outer diameter ratio and primary final length were determined optimally by an optimization algorithm and by sequential nonlinear programming, and the two methods were compared in terms of the pinking-force calculation problem. A preliminary application of the linear generator was performed for the free piston application.

  18. An examination of the number of required apertures for step-and-shoot IMRT

    International Nuclear Information System (INIS)

    Jiang, Z; Earl, M A; Zhang, G W; Yu, C X; Shepard, D M

    2005-01-01

    We have examined the degree to which step-and-shoot IMRT treatment plans can be simplified (using a small number of apertures) without sacrificing the dosimetric quality of the plans. A key element of this study was the use of direct aperture optimization (DAO), an inverse planning technique where all of the multi-leaf collimator constraints are incorporated into the optimization. For seven cases (1 phantom, 1 prostate, 3 head-and-neck and 2 lung), DAO was used to perform a series of optimizations where the number of apertures per beam direction varied from 1 to 15. In this work, we attempt to provide general guidelines for how many apertures per beam direction are sufficient for various clinical cases using DAO. Analysis of the optimized treatment plans reveals that for most cases, only modest improvements in the objective function and the corresponding DVHs are seen beyond 5 apertures per beam direction. However, for more complex cases, some dosimetric gain can be achieved by increasing the number of apertures per beam direction beyond 5. Even in these cases, however, only modest improvements are observed beyond 9 apertures per beam direction. In our clinical experience, 38 out of the first 40 patients treated using IMRT plans produced using DAO were treated with 9 or fewer apertures per beam direction. The results indicate that many step-and-shoot IMRT treatment plans delivered today are more complex than necessary and can be simplified without sacrificing plan quality

  19. Reliability-Based Structural Optimization of Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kramer, Morten; Sørensen, John Dalsgaard

    2014-01-01

    More and more wave energy converter (WEC) concepts are reaching prototype level. Once the prototype level is reached, the next step in order to further decrease the levelized cost of energy (LCOE) is optimizing the overall system with a focus on structural and maintenance (inspection) costs......, as well as on the harvested power from the waves. The target of a fully-developed WEC technology is not maximizing its power output, but minimizing the resulting LCOE. This paper presents a methodology to optimize the structural design of WECs based on a reliability-based optimization problem...

  20. Densities of accessible final states for multi-step compound reactions

    International Nuclear Information System (INIS)

    Maoming De; Guo Hua

    1993-01-01

    The densities of accessible final states for calculations of multi-step compound reactions are derived. The Pauli exclusion principle is taken into account in the calculations. The results are compared with a previous author's results and the effect of the Pauli exclusion principle is investigated. (Author)

  1. The Optimal Performance of Employees

    Directory of Open Access Journals (Sweden)

    Marta Pureber

    2000-12-01

    The Revoz company set itself the following task: to enable its blue-collar workers, too, to improve their skills and be promoted. So we started implementing a project of step-by-step education, The Optimal Performance of Employees. Improving the workers' knowledge and skills guarantees greater independence and responsibility, faster development of the organisation structure, and more trust between the employees because of better communication in basic working units. The Optimal Performance program offers blue-collar workers a possibility to improve their professional skills and to adapt themselves to changes in management, organisation, technology and new approaches to their tasks. The program is based on the following principles: • voluntariness - every worker can participate; • an adapted pedagogical approach - based on routine workers' activities, the rhythm of education is adapted to their ability to absorb new knowledge; • inclusion of the managerial structure - before, during and after education; • connection with the working environment - the contents of education are linked to a specific working environment.

  2. A Statistical-Probabilistic Pattern for Determination of Tunnel Advance Step by Quantitative Risk Analysis

    Directory of Open Access Journals (Sweden)

    Sasan Ghorbani

    2017-12-01

    One of the main challenges faced in the design and construction phases of tunneling projects is the determination of the maximum allowable advance step, so as to maximize the excavation rate and reduce project delivery time. Considering the complexity of determining this factor and the unexpected risks associated with its inappropriate determination, it is necessary to employ a method capable of accounting for interactions among uncertain geotechnical parameters and the advance step. The main objective of the present research is to undertake optimization and risk management of the advance step length in the water diversion tunnel at Shahriar Dam based on the uncertainty of geotechnical parameters, following a statistical-probabilistic approach. In order to determine the optimum advance step for the excavation operation, two hybrid methods were used: strength reduction method - discrete element method - Monte Carlo simulation (SRM/DEM/MCS) and strength reduction method - discrete element method - point estimate method (SRM/DEM/PEM). Moreover, Taguchi analysis was used to investigate the sensitivity of the advance step to changes in the statistical distribution functions of the input parameters under three tunneling scenarios at sections of poor to good quality (as per the RMR classification system). The final results implied the optimality of the advance step defined in Scenario 2, where a 2 m advance per excavation round was proposed according to the shear strain criterion and SRM/DEM/MCS, with a minimum failure probability of 8.05% and a risk of $75,281.56 at the 95% confidence level. Moreover, under each of the normal, lognormal and gamma distributions, as the advance step increased from Scenario 1 to 2, the failure probability increased at a lower rate than when the advance step was increased from Scenario 2 to Scenario 3.
In addition, Taguchi tests were subjected to signal-to-noise analysis and the results indicated that, considering the three statistical
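The quantitative risk figure reported above is the product of a failure probability and a consequence cost, and the probability itself can be estimated by Monte Carlo simulation over uncertain parameters. A minimal sketch follows, with invented capacity/demand distributions and an invented consequence cost rather than the paper's calibrated geotechnical inputs:

```python
import random

def mc_failure_risk(n=100_000, consequence_cost=935_000.0, seed=1):
    """Monte Carlo estimate of failure probability and risk for one advance step.

    The capacity/demand distributions and the consequence cost are illustrative
    assumptions, not the paper's calibrated geotechnical inputs.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        capacity = rng.gauss(1.0, 0.15)  # e.g. normalized shear strain capacity
        demand = rng.gauss(0.7, 0.10)    # e.g. normalized shear strain demand
        if demand >= capacity:           # failure: demand exceeds capacity
            failures += 1
    p_f = failures / n
    return p_f, p_f * consequence_cost   # risk = probability x consequence

p_f, risk = mc_failure_risk()
```

Re-running this over several candidate advance steps (each changing the demand distribution) and picking the step with acceptable risk mirrors the SRM/DEM/MCS workflow at a cartoon level.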

  3. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip.

    Science.gov (United States)

    Davison, James A

    2015-01-01

    To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer.

  4. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    Science.gov (United States)

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would make both run time and fit aspects comparable. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols.
The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of

  5. A QFD-based optimization method for a scalable product platform

    Science.gov (United States)

    Luo, Xinggang; Tang, Jiafu; Kwong, C. K.

    2010-02-01

    In order to incorporate the customer into the early phase of the product development cycle and to better satisfy customers' requirements, this article adopts quality function deployment (QFD) for optimal design of a scalable product platform. A five-step QFD-based method is proposed to determine the optimal values for platform engineering characteristics (ECs) and non-platform ECs of the products within a product family. First of all, the houses of quality (HoQs) for all product variants are developed and a QFD-based optimization approach is used to determine the optimal ECs for each product variant. Sensitivity analysis is performed for each EC with respect to overall customer satisfaction (OCS). Based on the obtained sensitivity indices of ECs, a mathematical model is established to simultaneously optimize the values of the platform and the non-platform ECs. Finally, by comparing and analysing the optimal solutions with different numbers of platform ECs, the ECs with which the worst OCS loss can be avoided are selected as platform ECs. An illustrative example is used to demonstrate the feasibility of this method. A comparison between the proposed method and a two-step approach is conducted on the example. The comparison shows that, as a kind of single-stage approach, the proposed method yields a better average degree of customer satisfaction due to the simultaneous optimization of platform and non-platform ECs.
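    The sensitivity-analysis step described above can be sketched numerically: estimate how OCS responds to a small change in each EC, then treat the least-sensitive ECs as candidates to freeze platform-wide. The OCS model, weights, and EC values below are hypothetical, not taken from the article.

```python
# Illustrative sketch of EC sensitivity indices for platform selection.
# The OCS function and all numbers are hypothetical.

def ocs(ec):
    # Toy OCS: weighted, saturating response to three ECs (hypothetical weights).
    w = [0.5, 0.3, 0.2]
    return sum(wi * (1.0 - 1.0 / (1.0 + x)) for wi, x in zip(w, ec))

def sensitivities(ec, h=1e-6):
    """Finite-difference sensitivity of OCS with respect to each EC."""
    base = ocs(ec)
    out = []
    for i in range(len(ec)):
        bumped = list(ec)
        bumped[i] += h
        out.append((ocs(bumped) - base) / h)
    return out

ec = [1.0, 2.0, 0.5]
s = sensitivities(ec)
# ECs whose changes move OCS the least are the cheapest to share platform-wide.
platform = sorted(range(len(s)), key=lambda i: s[i])[:2]
```

    Freezing low-sensitivity ECs as the platform is the intuition behind avoiding the worst OCS loss; the actual article optimizes platform and non-platform ECs simultaneously rather than by this greedy ranking.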

  6. Simulation and Optimization of Control of Selected Phases of Gyroplane Flight

    Directory of Open Access Journals (Sweden)

    Wienczyslaw Stalewski

    2018-02-01

    Full Text Available Optimization methods are increasingly used to solve problems in aeronautical engineering. Typically, optimization methods are utilized in the design of an aircraft airframe or its structure. The presented study is focused on improvement of aircraft flight control procedures through numerical optimization. The optimization problems concern selected phases of flight of a light gyroplane—a rotorcraft using an unpowered rotor in autorotation to develop lift and an engine-powered propeller to provide thrust. An original methodology of computational simulation of rotorcraft flight was developed and implemented. In this approach the aircraft motion equations are solved step-by-step, simultaneously with the solution of the Unsteady Reynolds-Averaged Navier–Stokes equations, which is conducted to assess aerodynamic forces acting on the aircraft. As a numerical optimization method, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm was adapted. The developed methodology was applied to optimize the flight control procedures in selected stages of gyroplane flight in direct proximity to the ground, where proper control of the aircraft is critical to ensure flight safety and performance. The results of conducted computational optimizations proved the qualitative correctness of the developed methodology. The research results can be helpful in the design of easy-to-control gyroplanes and also in the training of pilots for this type of rotorcraft.

  7. Multi-step wind speed forecasting based on a hybrid forecasting architecture and an improved bat algorithm

    International Nuclear Information System (INIS)

    Xiao, Liye; Qian, Feng; Shao, Wei

    2017-01-01

    Highlights: • Propose a hybrid architecture based on a modified bat algorithm for multi-step wind speed forecasting. • Improve the accuracy of multi-step wind speed forecasting. • Modify the bat algorithm with CG to improve optimization performance. - Abstract: As one of the most promising sustainable energy sources, wind energy plays an important role in energy development because it is clean and non-polluting. Generally, wind speed forecasting, which has an essential influence on wind power systems, is regarded as a challenging task. Analyses based on single-step wind speed forecasting have been widely used, but their results are insufficient for ensuring the reliability and controllability of wind power systems. In this paper, a new forecasting architecture based on decomposing algorithms and modified neural networks is successfully developed for multi-step wind speed forecasting. Four different hybrid models are contained in this architecture, and to further improve the forecasting performance, a modified bat algorithm (BA) with the conjugate gradient (CG) method is developed to optimize the initial weights between layers and the thresholds of the hidden layer of the neural networks. To investigate the forecasting abilities of the four models, wind speed data collected from four different wind power stations in Penglai, China, were used as a case study. The numerical experiments showed that the hybrid model including singular spectrum analysis and a general regression neural network with CG-BA (SSA-CG-BA-GRNN) achieved the most accurate forecasting results in one-step to three-step wind speed forecasting.
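    The bat algorithm underlying the paper's CG-modified optimizer can be sketched in its basic form. The version below is deliberately simplified (greedy acceptance instead of loudness-gated acceptance, fixed pulse parameters), and the sphere test function plus all parameter values are assumptions of this sketch, not the paper's variant.

```python
import random

# Minimal bat-algorithm sketch (simplified: greedy acceptance, fixed pulse
# parameters). All parameters and the test function are illustrative.
random.seed(42)

def bat_optimize(f, dim=2, n_bats=20, iters=100, lo=-5.0, hi=5.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    V = [[0.0] * dim for _ in range(n_bats)]
    fit = [f(x) for x in X]
    g = min(range(n_bats), key=lambda i: fit[i])
    best, best_val = list(X[g]), fit[g]
    for _ in range(iters):
        for i in range(n_bats):
            freq = random.uniform(0.0, 2.0)  # random pulse frequency
            V[i] = [v + (x - b) * freq for v, x, b in zip(V[i], X[i], best)]
            cand = [x + v for x, v in zip(X[i], V[i])]
            if random.random() > 0.5:        # local random walk around the best bat
                cand = [b + 0.1 * random.gauss(0.0, 1.0) for b in best]
            cand = [min(hi, max(lo, c)) for c in cand]
            fc = f(cand)
            if fc < fit[i]:                  # greedy acceptance (loudness omitted)
                X[i], fit[i] = cand, fc
            if fc < best_val:
                best, best_val = list(cand), fc
    return best, best_val

best, best_val = bat_optimize(lambda x: sum(v * v for v in x))
```

    The paper's contribution replaces part of this stochastic search with conjugate-gradient refinement; the sketch shows only the baseline BA population dynamics.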

  8. Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio

    International Nuclear Information System (INIS)

    Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik; Suzuki, Mitsutoshi

    2014-01-01

    The criticality condition of a reactor is one of the important factors in evaluating reactor operation, and the nuclear fuel breeding ratio (BR) is another factor that indicates nuclear fuel sustainability. This study analyzes the effect of the burnup step and the cycle operation step on the evaluated criticality condition of the reactor as well as on the nuclear fuel breeding performance. The burnup step is varied on a day basis from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 cycles. In addition, the calculation efficiency for different numbers of computer processors (time efficiency of the calculation) has also been investigated. The optimization method for reactor design analysis, which used a large fast breeder reactor type as a reference case, was performed by adopting the established reactor design code JOINT-FR. The results show that the evaluated criticality becomes higher, and the breeding ratio lower, for smaller burnup steps (in days). Some nuclides contribute to a better criticality estimate at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps correlates with the additional effort required for more detailed step calculations, although it is not directly proportional to the number of divisions of the burnup time step.
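    The burnup-step effect described above can be illustrated with a toy depletion calculation: integrating a single-nuclide equation dN/dt = -λN with explicit steps, coarser steps (more "days" per step) drift further from the exact solution. The decay constant and time span below are illustrative, not values from the JOINT-FR study.

```python
import math

# Toy illustration of burnup-step sensitivity: explicit Euler depletion of one
# nuclide. Numbers are illustrative only.

def deplete(n0, lam, t_end, dt):
    n, t = n0, 0.0
    while t < t_end - 1e-12:
        n += -lam * n * dt          # explicit Euler burnup step
        t += dt
    return n

lam, t_end = 1e-3, 800.0            # decay constant [1/day], 800-day operation
exact = math.exp(-lam * t_end)
err_fine = abs(deplete(1.0, lam, t_end, 10.0) - exact)     # 10-day steps
err_coarse = abs(deplete(1.0, lam, t_end, 800.0) - exact)  # one 800-day step
```

    In a real burnup code the same mechanism acts on every nuclide in the chain at once, which is why criticality and breeding ratio both shift with the chosen step length.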

  9. Structural Optimization of non-Newtonian Microfluidics

    DEFF Research Database (Denmark)

    Jensen, Kristian Ejlebjærg; Okkels, Fridolin

    2011-01-01

    We present results for topology optimization of a non-Newtonian rectifier described with a differential constitutive model. The results are novel in the sense that a differential constitutive model has not been combined with topology optimization previously. We find that it is necessary to apply...... optimization of fluids. We test the method on a microfluidic rectifier and find solutions topologically different from experimentally realized designs....

  10. Optimal choice of basis functions in the linear regression analysis

    International Nuclear Information System (INIS)

    Khotinskij, A.M.

    1988-01-01

    The problem of the optimal choice of basis functions in linear regression analysis is investigated. A step algorithm is suggested, together with an estimate of its efficiency that holds for a finite number of measurements. Conditions ensuring a probability of correct choice close to 1 are formulated. The application of the step algorithm to the analysis of decay curves is substantiated. 8 refs
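    A step algorithm of this kind can be sketched as greedy forward selection: at each step, add the candidate basis function that most reduces the residual sum of squares. The candidate basis (two decay exponentials plus background terms) and the noiseless data below are hypothetical, chosen to mimic the decay-curve application mentioned in the abstract.

```python
import math

# Forward stepwise selection of basis functions for linear regression,
# illustrated on a synthetic decay curve. Basis set and data are hypothetical.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rss(cols, y):
    """Residual sum of squares of the least-squares fit on the given columns."""
    k, m = len(cols), len(y)
    G = [[sum(cols[i][t] * cols[j][t] for t in range(m)) for j in range(k)] for i in range(k)]
    c = [sum(cols[i][t] * y[t] for t in range(m)) for i in range(k)]
    w = solve(G, c)
    return sum((y[t] - sum(w[i] * cols[i][t] for i in range(k))) ** 2 for t in range(m))

ts = [0.1 * i for i in range(50)]
basis = [[math.exp(-t) for t in ts],        # fast decay component
         [math.exp(-0.2 * t) for t in ts],  # slow decay component
         [1.0] * len(ts),                   # constant background
         [t for t in ts]]                   # linear drift (spurious)
y = [2.0 * math.exp(-t) + 0.5 * math.exp(-0.2 * t) for t in ts]  # true curve

chosen, remaining = [], list(range(len(basis)))
for _ in range(2):  # greedily add the basis function that most reduces the RSS
    best_j = min(remaining, key=lambda j: rss([basis[i] for i in chosen + [j]], y))
    chosen.append(best_j)
    remaining.remove(best_j)
```

    On this noiseless example the two true decay components are recovered; the abstract's contribution is the stopping conditions under which such a choice is correct with probability close to 1 at a finite number of noisy measurements.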

  11. Two-step flash light sintering process for crack-free inkjet-printed Ag films

    International Nuclear Information System (INIS)

    Park, Sung-Hyeon; Kim, Hak-Sung; Jang, Shin; Lee, Dong-Jun; Oh, Jehoon

    2013-01-01

    In this paper, a two-step flash light sintering process for inkjet-printed Ag films is investigated with the aim of improving the quality of sintered Ag films. The flash light sintering process is divided into two steps: a preheating step and a main sintering step. The preheating step is used to remove the organic binder without abrupt vaporization. The main sintering step is used to complete the necking connections among the silver nanoparticles and achieve high electrical conductivity. The process minimizes the damage to the polymer substrate and to the interface between the sintered Ag film and the polymer substrate. The electrical conductivity is calculated by measuring the resistance and cross-sectional area with an LCR meter and 3D optical profiler, respectively. It is found that the resistivity of the optimal flash light-sintered Ag films (36.32 nΩ m), which is 228.86% of that of bulk silver, is lower than that of thermally sintered ones (40.84 nΩ m). Additionally, the polyimide film used as the substrate is preserved with the inkjet-printed pattern shape during the flash light sintering process without delamination or defects. (paper)

  12. A Statistical Approach to Optimizing Concrete Mixture Design

    OpenAIRE

    Ahmad, Shamsad; Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (33). A total of 27 concrete mixtures with three replicate...

  13. Numerical optimization using flow equations

    Science.gov (United States)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
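    The continuation idea described above can be shown schematically: follow the minimizer of an interpolated objective F(x, s) = (1 - s)·F_prior(x) + s·F_target(x) as the homotopy parameter s runs from 0 to 1, relaxing by gradient descent at each stage so the prior enters only as the initial condition. The 1-D objectives below are toy assumptions, not the paper's entropy functional.

```python
# Schematic homotopy flow: track the minimizer of an interpolated objective
# from a simple "prior" functional to the target. Toy 1-D objectives only.

def grad(f, x, h=1e-6):
    """Central-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

f_prior = lambda x: x ** 2                        # simple convex "prior" functional
f_target = lambda x: (x - 3.0) ** 2 + 0.5 * (x - 3.0) ** 4

x = 0.0                                           # start at the prior's minimum
steps = 50
for k in range(steps + 1):
    s = k / steps                                 # homotopy parameter 0 -> 1
    F = lambda z, s=s: (1 - s) * f_prior(z) + s * f_target(z)
    for _ in range(100):                          # relax toward the current minimum
        x -= 0.05 * grad(F, x)
```

    Because each stage starts from the previous stage's minimizer, the flow never needs a good global initial guess for the target objective, which is the practical appeal of continuation.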

  14. Divertor design through shape optimization

    International Nuclear Information System (INIS)

    Dekeyser, W.; Baelmans, M.; Reiter, D.

    2012-01-01

    Due to the conflicting requirements, complex physical processes and large number of design variables, divertor design for next step fusion reactors is a challenging problem, often relying on large numbers of computationally expensive numerical simulations. In this paper, we attempt to partially automate the design process by solving an appropriate shape optimization problem. Design requirements are incorporated in a cost functional which measures the performance of a certain design. By means of changes in the divertor shape, which in turn lead to changes in the plasma state, this cost functional can be minimized. Using advanced adjoint methods, optimal solutions are computed very efficiently. The approach is illustrated by designing divertor targets for optimal power load spreading, using a simplified edge plasma model (copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  15. Honing process optimization algorithms

    Science.gov (United States)

    Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.

    2018-03-01

    This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.

  16. Required Steps of Managing International Equity Placement Strategic Alliance

    Directory of Open Access Journals (Sweden)

    Harimukti Wandebori

    2012-01-01

    Full Text Available The purpose of the research is to unravel the steps of managing an international equity placement strategic alliance (IEPSA). The steps of managing an IEPSA are obtained by conducting a theoretical review. The review covers the theory of strategic alliances; the definition and classification of an IEPSA; political and analytical considerations; and the necessary steps. These steps of managing an IEPSA can be classified into analysis of macro considerations, micro considerations, the domestic company's stakeholder support, cultural understanding, strategic planning, internal support, human resource management, organizational arrangement, management control systems, evolved cultural understanding, and evaluation of results. In this research, the domestic partners who formed the IEPSAs are limited to State-Owned Enterprises (SOEs). The IEPSA was one of the means of privatization. The research will be beneficial for both foreign and domestic partners who form an IEPSA in the previous SOEs. By knowing the steps of managing the IEPSA, both partners will be able to secure a successful implementation. By identifying these steps, stakeholders will not see an IEPSA as a threat but rather as an opportunity to improve performance, create synergy, and generate benefits for both partners and stakeholders. By knowing the necessary steps, stakeholders, including society and politicians, will envisage the IEPSA as a means of effectively improving the SOEs' performances. The research was expected to contribute to research on strategic alliances, since there appears to be no literature discussing the IEPSA in the domain of strategic alliances. Keywords: strategic alliance, equity placement, international equity placement strategic alliance, privatization, steps of international equity placement strategic alliance, state-owned enterprises

  17. Optimal fringe angle selection for digital fringe projection technique.

    Science.gov (United States)

    Wang, Yajun; Zhang, Song

    2013-10-10

    Existing digital fringe projection (DFP) systems mainly use either horizontal or vertical fringe patterns for three-dimensional shape measurement. This paper reveals that these two fringe directions are usually not optimal, i.e., not the directions for which the phase change is largest for a given depth variation. We propose a novel and efficient method to determine the optimal fringe angle by projecting a set of horizontal and vertical fringe patterns onto a step-height object and by further analyzing the two resultant phase maps. Experiments demonstrate the existence of the optimal angle and the success of the proposed optimal angle determination method.
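    The idea can be sketched as follows: if a unit depth change produces phase responses d_h (horizontal fringes) and d_v (vertical fringes), combining them as a vector gives the direction of largest phase sensitivity. Both the combination rule and the numbers below are assumptions of this sketch, not the paper's full calibration procedure.

```python
import math

# Sketch of optimal fringe-angle selection from the two measured phase
# responses. Illustrative numbers; the vector-combination rule is an
# assumption of this sketch.

def optimal_fringe_angle(d_h, d_v):
    """Angle (degrees from horizontal) maximizing phase change per unit depth."""
    return math.degrees(math.atan2(d_v, d_h))

# Example: a step-height target yields a larger response for vertical fringes.
angle = optimal_fringe_angle(0.4, 1.2)
```

    With equal sensitivities the sketch reduces to 45°, and as one response dominates, the optimal angle approaches that fringe direction, consistent with the paper's observation that purely horizontal or vertical fringes are usually suboptimal.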

  18. Modified Two-Step Dimethyl Ether (DME) Synthesis Simulation from Indonesian Brown Coal

    Directory of Open Access Journals (Sweden)

    Dwiwahju Sasongko

    2016-08-01

    Full Text Available A theoretical study was conducted to investigate the performance of dimethyl ether (DME) synthesis from coal. This paper presents a model for two-step DME synthesis from brown coal represented by the following processes: drying, gasification, water-gas reaction, acid gas removal, and DME synthesis reactions. The results of the simulation suggest that a feedstock ratio of coal : oxygen : steam of 1 : 0.13 : 0.821 produces the highest DME concentration. The water-gas reactor simulation at a temperature of 400°C and a pressure of 20 bar gave the ratio of H2/CO closest to 2, the optimal value for two-step DME synthesis. As for the DME synthesis reactor simulation, high pressure and low temperature promote a high DME concentration. It is predicted that a temperature of 300°C and a pressure of 140 bar are the optimum conditions for the DME synthesis reaction. This study also showed that the DME concentration produced by the two-step route is higher than that produced by one-step DME synthesis, implying that further improvement and research are needed to apply two-step DME synthesis to production of this liquid fuel.

  19. Retrofitting of heat exchanger networks involving streams with variable heat capacity: Application of single and multi-objective optimization

    International Nuclear Information System (INIS)

    Sreepathi, Bhargava Krishna; Rangaiah, G.P.

    2015-01-01

    Heat exchanger network (HEN) retrofitting improves the energy efficiency of the current process by reducing external utilities. In this work, HEN retrofitting involving streams having variable heat capacity is studied. For this, enthalpy values of a stream are fitted to a continuous cubic polynomial instead of a stepwise approach employed in the previous studies [1,2]. The former methodology is closer to reality as enthalpy or heat capacity changes gradually instead of step changes. Using the polynomial fitting formulation, single objective optimization (SOO) and multi-objective optimization (MOO) of a HEN retrofit problem are investigated. The results obtained show an improvement in the utility savings, and MOO provides many Pareto-optimal solutions to choose from. Also, Pareto-optimal solutions involving area addition in existing heat exchangers only (but no new exchangers and no structural modifications) are found and provided for comparison with those involving new exchangers and structural modifications as well. - Highlights: • HEN retrofitting involving streams with variable heat capacities is studied. • A continuous approach to handle variable heat capacity is proposed and tested. • Better and practical solutions are obtained for HEN retrofitting in process plants. • Pareto-optimal solutions provide many alternate choices for HEN retrofitting

  20. Algorithms for optimal dyadic decision trees

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don [Los Alamos National Laboratory; Porter, Reid [Los Alamos National Laboratory

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant trees sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  1. The Relationship Between Functional Movement, Balance Deficits, and Previous Injury History in Deploying Marine Warfighters.

    Science.gov (United States)

    de la Motte, Sarah J; Lisman, Peter; Sabatino, Marc; Beutler, Anthony I; O'Connor, Francis G; Deuster, Patricia A

    2016-06-01

    Screening for primary musculoskeletal injury (MSK-I) is costly and time-consuming. Both the Functional Movement Screen (FMS) and the Y-Balance Test (YBT) have been shown to predict future MSK-I. With a goal of optimizing the efficiency of primary MSK-I screening, we studied associations between performance on the FMS and YBT and whether history of MSK-I influenced FMS and YBT scores. In total, 365 deploying Marines performed the FMS and YBT as prescribed. Composite and individual scores were each categorized as high risk or low risk using published injury thresholds: High-risk FMS included composite scores ≤14 and right-to-left (R/L) asymmetry for Shoulder Mobility, In-Line Lunge, Straight Leg Raise, Hurdle Step, or Rotary Stability. High-risk YBT consisted of anterior, posteromedial, and/or posterolateral R/L differences >4 cm and/or composite differences ≥12 cm. Pearson's χ² tests evaluated associations between: (a) all FMS and YBT risk groups and (b) previous MSK-I and all FMS and YBT risk groups. Marines with high-risk FMS were twice as likely to have high-risk YBT posteromedial scores (χ² = 10.2, p = 0.001; odds ratio [OR] = 2.1, 95% confidence interval [CI] = 1.3-3.2). History of any MSK-I was not associated with high-risk FMS or high-risk YBT. However, previous lower extremity MSK-I was associated with In-Line Lunge asymmetries (χ² = 9.8, p = 0.002, OR = 2.2, 95% CI = 1.3-3.6). Overall, we found limited overlap in FMS and YBT risk. Because both methods seem to assess different risk factors for injury, we recommend FMS and YBT continue to be used together in combination with a thorough injury history until their predictive capacities are further established.
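    The association statistics reported above (Pearson chi-square and odds ratio for a 2x2 table) are straightforward to compute; the table counts below are hypothetical, not the Marines data.

```python
# Pearson chi-square statistic and odds ratio for a 2x2 contingency table.
# Counts are hypothetical, for illustration only.

def chi2_and_or(a, b, c, d):
    """a, b, c, d: 2x2 table counts (rows: exposed +/-, cols: outcome +/-)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    odds_ratio = (a * d) / (b * c)
    return chi2, odds_ratio

chi2, odds_ratio = chi2_and_or(30, 20, 40, 60)
```

    With 1 degree of freedom, a chi-square value above 3.84 corresponds to p < 0.05, which is how thresholds like those reported above are judged significant.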

  2. Portfolio optimization and performance evaluation

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn; Christensen, Michael

    2013-01-01

    Based on an exclusive business-to-business database comprising nearly 1,000 customers, the applicability of portfolio analysis is documented, and it is examined how such an optimization analysis can be used to explore the growth potential of a company. As opposed to any previous analyses, optimal...... customer portfolios are determined, and it is shown how marketing decision-makers can use this information in their marketing strategies to optimize the revenue growth of the company. Finally, our analysis is the first analysis which applies portfolio based methods to measure customer performance......, and it is shown how these performance measures complement the optimization analysis....

  3. Required Steps of Managing International Equity Placement Strategic Alliance

    Directory of Open Access Journals (Sweden)

    Harimukti Wandebori

    2011-12-01

    Full Text Available The purpose of the research is to unravel the steps of managing an international equity placement strategic alliance (IEPSA). The steps of managing an IEPSA are obtained by conducting a theoretical review. The review covers the theory of strategic alliances; the definition and classification of an IEPSA; political and analytical considerations; and the necessary steps. These steps of managing an IEPSA can be classified into analysis of macro considerations, micro considerations, the domestic company's stakeholder support, cultural understanding, strategic planning, internal support, human resource management, organizational arrangement, management control systems, evolved cultural understanding, and evaluation of results. In this research, the domestic partners who formed the IEPSAs are limited to State-Owned Enterprises (SOEs). The IEPSA was one of the means of privatization. The research will be beneficial for both foreign and domestic partners who form an IEPSA in the previous SOEs. By knowing the steps of managing the IEPSA, both partners will be able to secure a successful implementation. By identifying these steps, stakeholders will not see an IEPSA as a threat but rather as an opportunity to improve performance, create synergy, and generate benefits for both partners and stakeholders. By knowing the necessary steps, stakeholders, including society and politicians, will envisage the IEPSA as a means of effectively improving the SOEs' performances. The research was expected to contribute to research on strategic alliances, since there appears to be no literature discussing the IEPSA in the domain of strategic alliances.

  4. Optimal pattern synthesis for speech recognition based on principal component analysis

    Science.gov (United States)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    The algorithm for building an optimal pattern for the purpose of automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern forming is based on the decomposition of an initial pattern into principal components, which makes it possible to reduce the dimension of the multi-parameter optimization problem. At the next step the training samples are introduced and the optimal estimates for the principal-component decomposition coefficients are obtained by a numeric parameter optimization algorithm. Finally, we consider the experiment results that show the improvement in speech recognition introduced by the proposed optimization algorithm.

  5. Robust Trajectory Optimization of a Ski Jumper for Uncertainty Influence and Safety Quantification

    Directory of Open Access Journals (Sweden)

    Patrick Piprek

    2018-02-01

    Full Text Available This paper deals with the development of a robust optimal control framework for a previously developed multi-body ski jumper simulation model by the authors. This framework is used to model uncertainties acting on the jumper during his jump, e.g., wind or mass, to enhance the performance, but also to increase the fairness and safety of the competition. For the uncertainty modeling the method of generalized polynomial chaos together with the discrete expansion by stochastic collocation is applied: This methodology offers a very flexible framework to model multiple uncertainties using a small number of required optimizations to calculate an uncertain trajectory. The results are then compared to the results of the Latin-Hypercube sampling method to show the correctness of the applied methods. Finally, the results are examined with respect to two major metrics: First, the influence of the uncertainties on the jumper, his positioning with respect to the air, and his maximal achievable flight distance are examined. Then, the results are used in a further step to quantify the safety of the jumper.

  6. Physical optimization of afterloading techniques

    International Nuclear Information System (INIS)

    Anderson, L.L.

    1985-01-01

    Physical optimization in brachytherapy refers to the process of determining the radioactive-source configuration which yields a desired dose distribution. In manually afterloaded intracavitary therapy for cervix cancer, discrete source strengths are selected iteratively to minimize the sum of squares of differences between trial and target doses. For remote afterloading with a stepping-source device, optimized (continuously variable) dwell times are obtained, either iteratively or analytically, to give least squares approximations to dose at an arbitrary number of points; in vaginal irradiation for endometrial cancer, the objective has included dose uniformity at applicator surface points in addition to a tapered contour of target dose at depth. For template-guided interstitial implants, seed placement at rectangular-grid mesh points may be least squares optimized within target volumes defined by computerized tomography; effective optimization is possible only for (uniform) seed strength high enough that the desired average peripheral dose is achieved with a significant fraction of empty seed locations. (orig.) [de
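    The dwell-time optimization described above is a constrained least-squares problem: minimize the squared mismatch between computed and target dose at a set of points, subject to nonnegative dwell times. The sketch below uses projected gradient descent with an assumed inverse-square dose kernel; geometry and prescription are toy values, not a clinical configuration.

```python
# Least-squares dwell-time optimization for a stepping source: minimize
# ||K t - d||^2 over nonnegative dwell times t, where K[i][j] is the dose
# rate at point i from dwell position j (inverse-square kernel assumed).
# Toy geometry and prescription, for illustration only.

dwell_pos = [0.0, 1.0, 2.0, 3.0]               # source stops along the applicator [cm]
points = [(0.0, 1.0), (1.5, 1.0), (3.0, 1.0)]  # dose points 1 cm off-axis
K = [[1.0 / ((px - s) ** 2 + py ** 2) for s in dwell_pos] for (px, py) in points]
d = [1.0, 1.0, 1.0]                            # target (relative) dose at each point

t = [0.0] * len(dwell_pos)
for _ in range(5000):                          # projected gradient descent
    r = [sum(K[i][j] * t[j] for j in range(len(t))) - d[i] for i in range(len(d))]
    g = [sum(K[i][j] * r[i] for i in range(len(d))) for j in range(len(t))]
    t = [max(0.0, tj - 0.05 * gj) for tj, gj in zip(t, g)]

final_r = [sum(K[i][j] * t[j] for j in range(len(t))) - d[i] for i in range(len(d))]
residual = sum(v * v for v in final_r)
```

    The nonnegativity projection is the continuous-dwell-time analogue of the abstract's point that discrete-seed optimization only works well when many candidate positions can remain empty.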

  7. Stimulated Brillouin scattering continuous wave phase conjugation in step-index fiber optics.

    Science.gov (United States)

    Massey, Steven M; Spring, Justin B; Russell, Timothy H

    2008-07-21

    Continuous wave (CW) stimulated Brillouin scattering (SBS) phase conjugation in step-index optical fibers was studied experimentally and modeled as a function of fiber length. A phase conjugate fidelity over 80% was measured from SBS in a 40 m fiber using a pinhole technique. Fidelity decreases with fiber length, and a fiber with a numerical aperture (NA) of 0.06 was found to generate good phase conjugation fidelity over longer lengths than a fiber with 0.13 NA. Modeling and experiment support previous work showing that the maximum interaction length which yields a high fidelity phase conjugate beam is inversely proportional to the fiber NA², but find that fidelity remains high over much longer fiber lengths than previous models calculated. Conditions for SBS beam cleanup in step-index fibers are discussed.

  8. Discrepancies between selected Pareto optimal plans and final deliverable plans in radiotherapy multi-criteria optimization.

    Science.gov (United States)

    Kyroudi, Archonteia; Petersson, Kristoffer; Ghandour, Sarah; Pachoud, Marc; Matzinger, Oscar; Ozsahin, Mahmut; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-08-01

    Multi-criteria optimization provides decision makers with a range of clinical choices through Pareto plans that can be explored during real time navigation and then converted into deliverable plans. Our study shows that dosimetric differences can arise between the two steps, which could compromise the clinical choices made during navigation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. The way to collisions, step by step

    CERN Multimedia

    2009-01-01

    While the LHC sectors cool down and reach the cryogenic operating temperature, spirits are warming up as we all eagerly await the first collisions. No reason to hurry, though. Making particles collide involves the complex manoeuvring of thousands of delicate components. The experts will make it happen using a step-by-step approach.

  10. Coastal aquifer management based on surrogate models and multi-objective optimization

    Science.gov (United States)

    Mantoglou, A.; Kourakos, G.

    2011-12-01

    The demand for fresh water in coastal areas and islands can be very high, especially in the summer months, due to increased local needs and tourism. In order to satisfy demand, a combined management plan is proposed which involves: i) desalination (if needed) of pumped water to a potable level using reverse osmosis and ii) injection of biologically treated waste water into the aquifer. The management plan is formulated as a multiobjective optimization framework in which simultaneous minimization of economic and environmental costs is sought, subject to a constraint that demand be satisfied. The method requires modeling tools able to predict the salinity levels of the aquifer in response to different alternative management scenarios. Variable-density models can simulate the interaction between fresh and saltwater; however, they are computationally intractable when integrated in optimization algorithms. To alleviate this problem, a multiobjective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNN)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step of the genetic algorithm, each pair of parents generates a pool of offspring. All offspring are evaluated with the fast surrogate model; then only the most promising offspring are evaluated with the exact numerical model. This eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. Three new criteria for selecting the most promising offspring are proposed, which improve the Pareto set and maintain the diversity of the optimum solutions. The method has important advancements compared to previous methods, e.g., alleviation of the propagation of errors due to surrogate model approximations. The method is applied to a real coastal aquifer on the island of Santorini, a popular tourist island with high water demands. The results show that the algorithm
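    The surrogate-screening step in the crossover described above can be sketched as follows; the uniform crossover, surrogate, and exact model here are illustrative stand-ins (the paper uses modular neural network surrogates and its own three selection criteria):

```python
import random

def crossover(a, b):
    # Uniform crossover on real-valued decision vectors (illustrative operator).
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def screen_offspring(parents, n_pool, surrogate, exact_model, n_keep):
    """Generate a pool of offspring per parent pair, rank them with the cheap
    surrogate, and evaluate only the most promising ones with the exact model."""
    pool = [crossover(*parents) for _ in range(n_pool)]
    pool.sort(key=surrogate)                       # cheap screening
    survivors = pool[:n_keep]                      # most promising offspring
    return [(child, exact_model(child)) for child in survivors]
```

    Only `n_keep` exact-model evaluations are spent per parent pair, which is the source of the computational savings described in the abstract.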

  11. Theory and applications for optimization of every part of a photovoltaic system

    Science.gov (United States)

    Redfield, D.

    1978-01-01

    A general method is presented for quantitatively optimizing the design of every part and fabrication step of an entire photovoltaic system, based on the criterion of minimum cost/Watt for the system output power. It is shown that no element or process step can be optimized properly by considering only its own cost and performance. Moreover, a fractional performance loss at any fabrication step within the cell or array produces the same fractional increase in the cost/Watt of the entire array, but not of the full system. One general equation is found to be capable of optimizing all parts of a system, although the cell and array steps are basically different from the power-handling elements. Applications of this analysis are given to show (1) when Si wafers should be cut to increase their packing fraction; and (2) what the optimum dimensions for solar cell metallizations are. The optimum shadow fraction of the fine grid is shown to be independent of metal cost and resistivity as well as cell size. The optimum thicknesses of both the fine grid and the bus bar are substantially greater than the values in general use, and the total array cost has a major effect on these values. By analogy, this analysis is adaptable to other solar energy systems.
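    The proportionality between a step's fractional performance loss and the array's cost/Watt can be checked with hypothetical numbers (the cost, power, and loss values below are invented for illustration; the exact relative increase is f/(1-f), which reduces to f for small losses):

```python
# Cost per watt of an array: total cost C divided by output power P.
def cost_per_watt(cost, power):
    return cost / power

C, P = 1000.0, 500.0          # hypothetical array cost ($) and output power (W)
base = cost_per_watt(C, P)    # baseline $/W

f = 0.05                      # 5% performance loss at some fabrication step
lossy = cost_per_watt(C, P * (1 - f))

# Relative increase in $/W equals f/(1-f), i.e. approximately f for small f.
increase = lossy / base - 1
```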

  12. Multi-step ahead nonlinear identification of Lorenz's chaotic system using radial basis neural network with learning by clustering and particle swarm optimization

    International Nuclear Information System (INIS)

    Guerra, Fabio A.; Coelho, Leandro dos S.

    2008-01-01

    An important problem in engineering is the identification of nonlinear systems, among them radial basis function neural networks (RBF-NN) using Gaussian activation function models, which have received particular attention due to their potential to approximate nonlinear behavior. Several design methods have been proposed for choosing the centers and spreads of the Gaussian functions and training the RBF-NN. The selection of RBF-NN parameters such as centers, spreads, and weights can be understood as a system identification problem. This paper presents a hybrid training approach based on clustering methods (k-means and c-means) to tune the centers of the Gaussian functions used in the hidden layer of RBF-NNs. This design also uses particle swarm optimization (PSO) for center (local clustering search) and spread tuning, and the Moore-Penrose pseudoinverse for the adjustment of the RBF-NN output weights. Simulations involving this RBF-NN design to identify Lorenz's chaotic system indicate that the performance of the proposed method is superior to that of a conventional RBF-NN trained with k-means and the Moore-Penrose pseudoinverse for multi-step ahead forecasting.

  13. Invited Review Article: Measurement uncertainty of linear phase-stepping algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Erwin [EMPA, Laboratory Electronics/Metrology/Reliability, Ueberlandstrasse 129, CH-8600 Duebendorf (Switzerland); Burke, Jan [Australian Centre for Precision Optics, CSIRO (Commonwealth Scientific and Industrial Research Organisation) Materials Science and Engineering, P.O. Box 218, Lindfield, NSW 2070 (Australia)

    2011-06-15

    Phase retrieval techniques are widely used in optics, imaging, and electronics. Originating in signal theory, they were introduced to interferometry around 1970. Over the years, many robust phase-stepping techniques have been developed that minimize specific experimental influence quantities such as phase step errors or higher harmonic components of the signal. However, optimizing a technique for a specific influence quantity can compromise its performance with regard to others. We present a consistent quantitative analysis of phase measurement uncertainty for the generalized linear phase-stepping algorithm with nominally equal phase-stepping angles, thereby reviewing and generalizing several results that have been reported in the literature. All influence quantities are treated on an equal footing, and correlations between them are described in a consistent way. For the special case of classical N-bucket algorithms, we present analytical formulae that describe the combined variance as a function of the phase angle values. For the general arctan algorithms, we derive expressions for the measurement uncertainty averaged over the full 2π-range of phase angles. We also give an upper bound for the measurement uncertainty, which can be expressed as being proportional to an algorithm-specific factor. Tabular compilations help the reader to quickly assess the uncertainties that are involved with his or her technique.
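    For the classical N-bucket case with nominal steps of 2π/N, the textbook phase estimate is the arctangent of two weighted frame sums; a minimal sketch (not the paper's generalized algorithm):

```python
import math

def n_bucket_phase(intensities):
    """Classical N-bucket phase estimate for frames I_n = A + B*cos(phi + 2*pi*n/N)
    recorded at nominal phase steps 2*pi*n/N."""
    N = len(intensities)
    num = -sum(I * math.sin(2 * math.pi * n / N) for n, I in enumerate(intensities))
    den = sum(I * math.cos(2 * math.pi * n / N) for n, I in enumerate(intensities))
    return math.atan2(num, den)
```

    The paper's analysis quantifies how errors in the individual steps and harmonics of the signal propagate through exactly this kind of arctan combination.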

  14. Optimizing How We Teach Research Methods

    Science.gov (United States)

    Cvancara, Kristen E.

    2017-01-01

    Courses: Research Methods (undergraduate or graduate level). Objective: The aim of this exercise is to optimize students' ability to integrate an understanding of various methodologies across research paradigms within a 15-week semester, including a review of procedural steps and experiential learning activities to practice each method, a…

  15. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    Directory of Open Access Journals (Sweden)

    Xiangsheng Zhang

    2015-01-01

    Full Text Available Aimed at the parameter optimization in support vector machines (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm which has better global searching ability. The algorithm detects and handles local convergence and exhibits a strong ability to avoid being trapped in local minima. The main steps of the method are shown. Simulation experiments demonstrate the effectiveness of the proposed algorithm.

  16. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  17. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-25

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
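    A minimal sketch of such a two-step search, a coarse grid traverse that picks the most promising cell followed by PSO confined to that neighbourhood (the 2-D objective, grid resolution, and PSO constants below are illustrative, not the paper's SVR setup):

```python
import random

def grid_then_pso(objective, bounds, grid_n=5, swarm=12, iters=40, seed=1):
    """Step 1: coarse grid traverse narrows the search to the best cell.
       Step 2: PSO refines the parameters inside that local box."""
    rng = random.Random(seed)
    (lo1, hi1), (lo2, hi2) = bounds
    # -- grid traverse: evaluate the grid, keep the cell around the best node
    xs = [lo1 + i * (hi1 - lo1) / (grid_n - 1) for i in range(grid_n)]
    ys = [lo2 + j * (hi2 - lo2) / (grid_n - 1) for j in range(grid_n)]
    bx, by = min(((x, y) for x in xs for y in ys), key=lambda p: objective(*p))
    w1 = (hi1 - lo1) / (grid_n - 1)
    w2 = (hi2 - lo2) / (grid_n - 1)
    box = [(bx - w1, bx + w1), (by - w2, by + w2)]
    # -- PSO restricted to the local box
    pos = [[rng.uniform(*box[0]), rng.uniform(*box[1])] for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: objective(*p))[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(*pos[i]) < objective(*pbest[i]):
                pbest[i] = pos[i][:]
            if objective(*pos[i]) < objective(*gbest):
                gbest = pos[i][:]
    return gbest
```

    The grid step keeps PSO from wasting its budget on unpromising regions, which is the motivation given for the hybrid in the abstract.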

  18. Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres

    Energy Technology Data Exchange (ETDEWEB)

    Popeski-Dimovski, Riste [Department of physic, Faculty of Natural Sciences and Mathematics, “ss. Cyril and Methodius” University, Arhimedova 3, 1000 Skopje, R. Macedonia (Macedonia, The Former Yugoslav Republic of)

    2016-03-25

    Calcium-alginate micro particles have been used extensively in drug delivery systems. Therefore we establish a one-step method for preparation of internally gelated micro particles with spherical shape and narrow size distribution. We use four types of alginate with different G/M ratio and molar weight. The size of the particles is measured using light diffraction and scanning electron microscopy. Measurements showed that with this method, micro particles with size distribution around 4 micrometers can be prepared, and SEM imaging showed that those particles are spherical in shape.

  19. Overcoming double-step CO2 adsorption and minimizing water co-adsorption in bulky diamine-appended variants of Mg2(dobpdc).

    Science.gov (United States)

    Milner, Phillip J; Martell, Jeffrey D; Siegelman, Rebecca L; Gygi, David; Weston, Simon C; Long, Jeffrey R

    2018-01-07

    Alkyldiamine-functionalized variants of the metal-organic framework Mg2(dobpdc) (dobpdc4- = 4,4'-dioxidobiphenyl-3,3'-dicarboxylate) are promising for CO2 capture applications owing to their unique step-shaped CO2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary, secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO2 adsorption/desorption profiles. This two-step behavior likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg2(dobpdc) and leads to decreased CO2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg2(dotpdc) (dotpdc4- = 4,4''-dioxido-[1,1':4',1''-terphenyl]-3,3''-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg2(pc-dobpdc) (pc-dobpdc4- = 3,3'-dioxidobiphenyl-4,4'-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg2(dobpdc), possesses uniformly hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of

  20. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    Science.gov (United States)

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  1. Optimal placement of capacitors

    Directory of Open Access Journals (Sweden)

    N. Gnanasekaran

    2016-06-01

    Full Text Available Optimal size and location of shunt capacitors in the distribution system plays a significant role in minimizing the energy loss and the cost of reactive power compensation. This paper presents a new efficient technique to find the optimal size and location of shunt capacitors with the objective of minimizing the cost due to energy loss and reactive power compensation of the distribution system. A new Shark Smell Optimization (SSO) algorithm is proposed to solve the optimal capacitor placement problem while satisfying the operating constraints. The SSO algorithm is a recently developed metaheuristic optimization algorithm conceptualized using the shark’s hunting ability. It uses a momentum-incorporated gradient search and a rotational-movement-based local search for optimization. To demonstrate the applicability of the proposed method, it is tested on IEEE 34-bus and 118-bus radial distribution systems. The simulation results obtained are compared with previous methods reported in the literature and found to be encouraging.

  2. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Science.gov (United States)

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to accommodate the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure; problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  3. Fast exploration of an optimal path on the multidimensional free energy surface

    Science.gov (United States)

    Chen, Changjun

    2017-01-01

    In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can build an initial path in the collective variable space by an interpolation method first and then update the whole path constantly during the optimization. However, such an interpolation method can be risky in a high-dimensional space for large molecules: on the path, steric clashes between neighboring atoms can cause extremely high energy barriers and thus make the optimization fail. Moreover, performing simulations for all the snapshots on the path is also time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states in the growing process and saves simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either two-dimensional or twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
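    The growing rule, mixing the downhill free-energy direction with the vector toward the product, can be sketched on a toy surface (the gradient callback, mixing weight, and step length below are assumptions for illustration, not the paper's values):

```python
import numpy as np

def _unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def grow_path(grad, start, product, step=0.05, mix=0.5, max_steps=500, tol=0.05):
    """Grow a path from the reactant toward the product; each extension mixes
    the downhill free-energy direction (-grad) with the vector pointing at
    the product, so the path can skirt high-energy states while still growing."""
    product = np.asarray(product, float)
    path = [np.asarray(start, float)]
    for _ in range(max_steps):
        x = path[-1]
        to_product = product - x
        if np.linalg.norm(to_product) < tol:
            break
        d = mix * _unit(-np.asarray(grad(x), float)) + (1 - mix) * _unit(to_product)
        path.append(x + step * _unit(d))
    return np.array(path)
```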

  4. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    Science.gov (United States)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
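    The Whittaker smoother underlying the baseline correction is a penalized least-squares filter, minimizing ||y - z||^2 + lam * ||D2 z||^2 for a second-difference operator D2; a basic dense-matrix sketch (the paper uses a modified variant, and this choice of penalty weight is illustrative):

```python
import numpy as np

def whittaker_smooth(y, lam=1e4):
    """Basic Whittaker smoother: solve (I + lam * D2.T @ D2) z = y,
    where D2 is the second-difference operator. Larger lam gives a
    smoother result; straight lines pass through unchanged."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, np.asarray(y, float))
```

    In practice the normal equations are solved with sparse/banded solvers rather than the dense matrices used here for clarity.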

  5. Development of a fast optimization preview in radiation treatment planning

    International Nuclear Information System (INIS)

    Hoeffner, J.; Decker, P.; Schmidt, E.L.; Herbig, W.; Rittler, J.; Weiss, P.

    1996-01-01

    Usually, the speed of convergence of some iterative algorithms is limited by a bounded relaxation parameter. Exploiting the particular behavior of the weighting factors at each step, many iteration steps can be avoided by overrelaxing this relaxation parameter: the relaxation parameter is increased as long as the optimization result improves. This can be done without loss of accuracy. Our optimization technique is demonstrated on the case of a right lung carcinoma. The solution space for this case is 36 isocentric X-ray beams evenly spaced at 10°. Each beam is restricted to 23 MV X-ray fields with the planning target volume matched by irregular field shapes, similar to those produced by a multileaf collimator. Four organs at risk plus the planning target volume are considered in the optimization process. The convergence behavior of the optimization algorithm is shown by overrelaxing the relaxation parameter in comparison to conventional relaxation parameter control. The new approach offers the ability to get a fast preview of the expected final result. If the clinician agrees with the preview, the algorithm is continued and achieves the result obtained by the Cimmino optimization algorithm. Otherwise, if the clinician does not agree with the preview, he is able to change the optimization parameters (e.g., field entry points) and restart the algorithm. (orig./MG) [de]
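    The overrelaxation idea, growing the relaxation parameter while the objective keeps improving and backing off otherwise, can be sketched generically (the step rule, objective, and growth/shrink factors below are illustrative, not the Cimmino update used in the paper):

```python
def overrelaxed_iterate(step, objective, x0, lam=1.0, grow=1.5, iters=50):
    """Accept larger relaxation parameters as long as the objective improves;
    back off when an overrelaxed step makes things worse. `step(x)` returns
    the algorithm's update direction at x."""
    x, best = x0, objective(x0)
    for _ in range(iters):
        trial = [xi + lam * di for xi, di in zip(x, step(x))]
        f = objective(trial)
        if f < best:
            x, best = trial, f
            lam *= grow          # overrelax: speed up while improving
        else:
            lam /= 2.0           # too aggressive: shrink back
    return x
```

    Because worsening trials are simply rejected, the scheme converges to the same answer as conventional relaxation, only in fewer accepted iterations.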

  6. Optimal configuration of microstructure in ferroelectric materials by stochastic optimization

    Science.gov (United States)

    Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.

    2010-07-01

    An optimization procedure determining the ideal configuration at the microstructural level of ferroelectric (FE) materials is applied to maximize piezoelectricity. Piezoelectricity in ceramic FEs differs significantly from that of single crystals because of the presence of crystallites (grains) possessing imperfectly aligned crystallographic axes. The piezoelectric properties of a polycrystalline (ceramic) FE are inextricably related to the grain orientation distribution (texture). The set of combinations of variables, known as the solution space, which dictates the texture of a ceramic is unlimited, and hence the choice of the optimal solution which maximizes the piezoelectricity is complicated. Thus, a stochastic global optimization combined with homogenization is employed for the identification of the optimal granular configuration of the FE ceramic microstructure with optimum piezoelectric properties. The macroscopic equilibrium piezoelectric properties of the polycrystalline FE are calculated using mathematical homogenization at each iteration step. The configuration of grains characterized by its orientations at each iteration is generated using a randomly selected set of orientation distribution parameters. The optimization procedure applied to the single crystalline phase compares well with the experimental data. Apparent enhancement of the piezoelectric coefficient d33 is observed in an optimally oriented BaTiO3 single crystal. Based on the good agreement of results with the published data in single crystals, we proceed to apply the methodology in polycrystals. A configuration of crystallites, simultaneously constraining the orientation distribution of the c-axis (polar axis) while incorporating ab-plane randomness, which would multiply the overall piezoelectricity in ceramic BaTiO3 is also identified. The orientation distribution of the c-axes is found to be a narrow Gaussian distribution centered around 45°. The piezoelectric coefficient in such a ceramic is found to

  7. Advanced backend optimization

    CERN Document Server

    Touati, Sid

    2014-01-01

    This book is a summary of more than a decade of research in the area of backend optimization. It contains the latest fundamental research results in this field. While existing books are often more oriented toward Masters students, this book is aimed more towards professors and researchers as it contains more advanced subjects.It is unique in the sense that it contains information that has not previously been covered by other books in the field, with chapters on phase ordering in optimizing compilation; register saturation in instruction level parallelism; code size reduction for software pipe

  8. CT-Guided Percutaneous Step-by-Step Radiofrequency Ablation for the Treatment of Carcinoma in the Caudate Lobe

    Science.gov (United States)

    Dong, Jun; Li, Wang; Zeng, Qi; Li, Sheng; Gong, Xiao; Shen, Lujun; Mao, Siyue; Dong, Annan; Wu, Peihong

    2015-01-01

    Abstract The location of the caudate lobe and its complex anatomy make caudate lobectomy and radiofrequency ablation (RFA) under ultrasound guidance technically challenging. The objective of this exploratory study was to introduce a novel modality for the treatment of lesions in the caudate lobe and to discuss its details, based on our experience, so as to make this novel treatment modality repeatable and educational. The study enrolled 39 patients with liver caudate lobe tumors first diagnosed by computerized tomography (CT) or magnetic resonance imaging (MRI). After consultation of a multi-disciplinary team, 7 patients with hepatic caudate lobe lesions were enrolled and received CT-guided percutaneous step-by-step RFA treatment. A total of 8 caudate lobe lesions of the 7 patients were treated by RFA in 6 cases and RFA combined with percutaneous ethanol injection (PEI) in 1 case. Median tumor diameter was 29 mm (range, 18–69 mm). A right approach was selected for 6 patients and a dorsal approach for 1 patient. Median operative time was 64 min (range, 59–102 min). Median blood loss was 10 mL (range, 8–16 mL) and mainly due to puncture injury. Median hospitalization time was 4 days (range, 2–5 days). All lesions were completely ablated (8/8; 100%) and no recurrence at the site of previous RFA was observed during a median 8 months of follow-up (range 3–11 months). No major or life-threatening complications or deaths occurred. In conclusion, percutaneous step-by-step RFA under CT guidance is a novel and effective minimally invasive therapy for hepatic caudate lobe lesions with good repeatability. PMID:26426638

  9. Optimized low-order explicit Runge-Kutta schemes for high- order spectral difference method

    KAUST Repository

    Parsani, Matteo

    2012-01-01

    Optimal explicit Runge-Kutta (ERK) schemes with large stable step sizes are developed for method-of-lines discretizations based on the spectral difference (SD) spatial discretization on quadrilateral grids. These methods involve many stages and provide the optimal linearly stable time step for a prescribed SD spectrum and the minimum leading truncation error coefficient, while admitting a low-storage implementation. Using a large number of stages, the new ERK schemes lead to efficiency improvements larger than 60% over standard ERK schemes for 4th- and 5th-order spatial discretization.

  10. An Indirect Simulation-Optimization Model for Determining Optimal TMDL Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Feng Zhou

    2015-11-01

    Full Text Available An indirect simulation-optimization model framework with enhanced computational efficiency and risk-based decision-making capability was developed to determine optimal total maximum daily load (TMDL) allocation under uncertainty. To convert the traditional direct simulation-optimization model into our indirect equivalent model framework, we proposed a two-step strategy: (1) application of interval regression equations derived by a Bayesian recursive regression tree (BRRT v2) algorithm, which approximates the original hydrodynamic and water-quality simulation models and accurately quantifies the inherent nonlinear relationship between nutrient load reductions and the credible interval of algal biomass with a given confidence interval; and (2) incorporation of the calibrated interval regression equations into an uncertain optimization framework, which is further converted to our indirect equivalent framework by the enhanced-interval linear programming (EILP) method and provides approximate-optimal solutions at various risk levels. The proposed strategy was applied to the Swift Creek Reservoir’s nutrient TMDL allocation (Chesterfield County, VA) to identify the minimum nutrient load allocations required from eight sub-watersheds to ensure compliance with user-specified chlorophyll criteria. Our results indicated that the BRRT-EILP model could identify critical sub-watersheds faster than the traditional model and requires lower reductions of nutrient loadings compared to traditional stochastic simulation and trial-and-error (TAE) approaches. This suggests that our proposed framework performs better in optimal TMDL development compared to the traditional simulation-optimization models and provides extreme and non-extreme tradeoff analysis under uncertainty for risk-based decision making.

  11. Optimization in the decommissioning of uranium tailings

    International Nuclear Information System (INIS)

    1987-06-01

    This report examines in detail the problem of choosing the optimal decommissioning approach for uranium mill tailings sites. Various decision methods are discussed and evaluated, and their application in similar decision problems is summarized. This report includes, by means of a demonstration, a step-by-step guide to how a number of selected techniques can be applied to a decommissioning problem. The strengths and weaknesses of the various methods are highlighted. A decision system approach is recommended for its flexibility and its incorporation of many of the strengths found in other decision methods.

  12. Optimal Acceleration-Velocity-Bounded Trajectory Planning in Dynamic Crowd Simulation

    Directory of Open Access Journals (Sweden)

    Fu Yue-wen

    2014-01-01

    Full Text Available Creating complex and realistic crowd behaviors, such as pedestrian navigation behavior with dynamic obstacles, is a difficult and time-consuming task. In this paper, we study one special type of crowd which is composed of urgent individuals, normal individuals, and normal groups. We use three steps to construct the crowd simulation in a dynamic environment. The first one is that the urgent individuals move forward along a given path around dynamic obstacles and other crowd members. An optimal acceleration-velocity-bounded trajectory planning method is utilized to model their behaviors, which ensures that the durations of the generated trajectories are minimal and the urgent individuals are collision-free with dynamic obstacles (e.g., dynamic vehicles). In the second step, a pushing model is adopted to simulate the interactions between urgent members and normal ones, which ensures that the computational cost of the optimal trajectory planning is acceptable. The third step imitates the interactions among normal members using collision avoidance behavior and flocking behavior. Various simulation results demonstrate that these three steps yield realistic crowd phenomena just like the real world.
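    The acceleration-velocity-bounded, minimal-duration planning of the first step can be illustrated in one dimension, where the time-optimal motion is the classic triangular/trapezoidal velocity profile (a textbook result, not the paper's full collision-aware planner):

```python
import math

def min_time_profile(dist, v_max, a_max):
    """Minimal duration to cover `dist` from rest to rest with |a| <= a_max
    and |v| <= v_max: bang-bang (triangular) profile, or trapezoidal when
    the velocity bound is reached and a cruise phase is needed."""
    t_acc = v_max / a_max                 # time to reach the velocity bound
    d_acc = 0.5 * a_max * t_acc ** 2      # distance covered while accelerating
    if dist <= 2 * d_acc:                 # triangular: never hits v_max
        return 2 * math.sqrt(dist / a_max)
    t_cruise = (dist - 2 * d_acc) / v_max # trapezoidal: accel, cruise, decel
    return 2 * t_acc + t_cruise
```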

  13. OPTIMAL CONTROL FOR ELECTRIC VEHICLE STABILIZATION

    Directory of Open Access Journals (Sweden)

    MARIAN GAICEANU

    2016-01-01

    Full Text Available The main objective of this paper is to stabilize an electric vehicle in an optimal manner in response to a step lane change maneuver. To define the mathematical model of the vehicle, a rigid body moving on a plane is taken into account. An optimal lane-keeping controller delivers the appropriate steering angles in order to stabilize the vehicle’s trajectory in an optimal way. A two-degree-of-freedom linear bicycle model is adopted as the vehicle model, consisting of lateral and yaw motion equations. The proposed control maintains lateral stability by taking feedback information from the vehicle’s transducers. In this way, only the vehicle’s lateral dynamics need to be considered. Based on the obtained linear mathematical model, a quadratic optimal control is designed in order to maintain the lateral stability of the electric vehicle. The numerical simulation results demonstrate the feasibility of the proposed solution.
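    The quadratic optimal (LQR) design for such a two-degree-of-freedom bicycle model can be sketched as follows; every vehicle parameter and weight below is an assumed placeholder, not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative 2-DOF bicycle model, states x = [sideslip angle, yaw rate],
# input = front steering angle. All numbers are assumed, not the paper's.
m, Iz, vx = 1500.0, 2500.0, 20.0      # mass [kg], yaw inertia [kg m^2], speed [m/s]
lf, lr = 1.2, 1.4                     # CG-to-axle distances [m]
cf, cr = 8.0e4, 8.0e4                 # cornering stiffnesses [N/rad]

A = np.array([
    [-(cf + cr) / (m * vx), -1 + (cr * lr - cf * lf) / (m * vx ** 2)],
    [(cr * lr - cf * lf) / Iz, -(cf * lf ** 2 + cr * lr ** 2) / (Iz * vx)],
])
B = np.array([[cf / (m * vx)],
              [cf * lf / Iz]])

Q = np.diag([1.0, 1.0])               # state weights
R = np.array([[0.1]])                 # steering effort weight

# Quadratic optimal control: solve the continuous algebraic Riccati equation,
# then apply the state feedback delta = -K @ x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
```

    The resulting closed-loop matrix A - B @ K is guaranteed stable, which is the lateral-stability property the controller is designed to maintain.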

  14. Global Optimization of Damping Ring Designs Using a Multi-Objective Evolutionary Algorithm

    CERN Document Server

    Emery, Louis

    2005-01-01

    Several damping ring designs for the International Linear Collider have been proposed recently. Some of the specifications, such as circumference and bunch train, are not fixed yet. Designers must nevertheless make a choice: select a geometry type (dog-bone or circular), an arc cell type (TME or FODO), and optimize the linear and nonlinear parts of the optics. The design process includes straightforward steps (usually the linear optics) and some steps that are not so straightforward (when the nonlinear optics optimization is affected by the linear optics). A first attempt at automating this process for the linear optics is reported. We first recognize that the optics is defined by just a few primary parameters (e.g., phase advance per cell) that determine the rest (e.g., quadrupole strength). In addition to the exact specification of circumference, equilibrium emittance, and damping time there are some other quantities which could be optimized that may conflict with each other. A multiobjective genetic optimizer solves this problem b...

  15. Delivering stepped care: an analysis of implementation in routine practice

    Directory of Open Access Journals (Sweden)

    Richards David A

    2012-01-01

    Full Text Available Abstract Background In the United Kingdom, clinical guidelines recommend that services for depression and anxiety should be structured around a stepped care model, where patients receive treatment at different 'steps,' with the intensity of treatment (i.e., the amount and type) increasing at each step if they fail to benefit at previous steps. There are very limited data available on the implementation of this model, particularly on the intensity of psychological treatment at each step. Our objective was to describe patient pathways through stepped care services and the impact of this on patient flow and management. Methods We recorded service design features of four National Health Service sites implementing stepped care (e.g., the types of treatments available and their links with other treatments), together with the actual treatments received by individual patients and their transitions between different treatment steps. We computed the proportions of patients accessing, receiving, and transiting between the various steps and mapped these proportions visually to illustrate patient movement. Results We collected throughput data on 7,698 patients referred. Patient pathways were highly complex and very variable within and between sites. The ratio of low-intensity (e.g., self-help) to high-intensity (e.g., cognitive behaviour therapy) treatments delivered varied between sites from 22:1, through 2.1:1 and 1.4:1, to 0.5:1. The numbers of patients allocated directly to high-intensity treatment varied from 3% to 45%. Rates of stepping up from low-intensity treatment to high-intensity treatment were less than 10%. Conclusions When services attempted to implement the recommendation for stepped care in the National Institute for Health and Clinical Excellence guidelines, there were significant differences in implementation and consequent high levels of variation in patient pathways.
Evaluations driven by the principles of implementation science (such as targeted planning

  16. A single-step method for rapid extraction of total lipids from green microalgae.

    Directory of Open Access Journals (Sweden)

    Martin Axelsson

    Full Text Available Microalgae produce a wide range of lipid compounds of potential commercial interest. Total lipid extraction performed by conventional extraction methods relying on the chloroform-methanol solvent system is too laborious and time-consuming for screening large numbers of samples. In this study, three previous extraction methods devised by Folch et al. (1957), Bligh and Dyer (1959), and Selstam and Öquist (1985) were compared, and a faster single-step procedure was developed for extraction of total lipids from green microalgae. In the single-step procedure, 8 ml of a 2:1 chloroform-methanol (v/v) mixture was added to fresh or frozen microalgal paste or pulverized dry algal biomass contained in a glass centrifuge tube. The biomass was manually suspended by vigorously shaking the tube for a few seconds, and 2 ml of a 0.73% NaCl water solution was added. Phase separation was facilitated by 2 min of centrifugation at 350 g, and the lower phase was recovered for analysis. An uncharacterized microalgal polyculture and the green microalgae Scenedesmus dimorphus, Selenastrum minutum, and Chlorella protothecoides were subjected to the different extraction methods and various techniques of biomass homogenization. The less labour-intensive single-step procedure presented here allowed simultaneous recovery of total lipid extracts from multiple samples of green microalgae, with quantitative yields and fatty acid profiles comparable to those of the previous methods. While the single-step procedure is highly correlated in lipid extractability (r² = 0.985) with the previous method of Folch et al. (1957), it allowed at least five times higher sample throughput.

  17. Optimization of observation plan based on the stochastic characteristics of the geodetic network

    Directory of Open Access Journals (Sweden)

    Pachelski Wojciech

    2016-06-01

    Full Text Available Optimal design of a geodetic network is a basic subject of many engineering projects, and an observation plan is a concluding part of the process. Any particular observation within the network has, through the adjustment, a different contribution to and impact on the values and accuracy characteristics of the unknowns. The problem of optimal design can be solved by means of computer simulation. This paper presents a new method of simulation based on sequential estimation of individual observations in a step-by-step manner, by means of the so-called filtering equations. The algorithm aims at satisfying different accuracy criteria according to various interpretations of the covariance matrix. In addition, the amount of effort, defined as the minimum number of observations required, serves as an optimization criterion.
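
    The sequential "filtering" idea can be illustrated in its simplest form. The sketch below is a hypothetical illustration, not the paper's actual filtering equations: it folds scalar observations into an estimate one at a time, with the posterior variance playing the role of the accuracy measure an observation plan would be checked against.

```python
# Hypothetical illustration of sequential ("filtering") estimation: fold
# observations into an estimate one at a time, tracking how the variance
# (the accuracy measure) shrinks with each added observation. This is the
# textbook scalar recursion, not the paper's actual filtering equations.
def sequential_update(estimate, variance, observation, obs_variance):
    gain = variance / (variance + obs_variance)   # weight of the new observation
    new_estimate = estimate + gain * (observation - estimate)
    new_variance = (1.0 - gain) * variance        # variance can only decrease
    return new_estimate, new_variance

# Add candidate observations step by step; an observation plan would stop
# once the variance satisfies the chosen accuracy criterion.
est, var = 0.0, 100.0                             # diffuse prior
for obs in [10.2, 9.8, 10.1, 9.9]:
    est, var = sequential_update(est, var, obs, obs_variance=0.5)
print(round(est, 2), round(var, 4))
```

    Because each update only improves the variance, candidate observations can be ranked by how much each one would reduce it, which is the step-by-step simulation logic the abstract describes.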

  18. Topology optimization for optical microlithography with partially coherent illumination

    DEFF Research Database (Denmark)

    Zhou, Mingdong; Lazarov, Boyan Stefanov; Sigmund, Ole

    2017-01-01

    This article revisits a topology optimization design approach for micro-manufacturing and extends it to optical microlithography with partially coherent illumination. The solution is based on a combination of two technologies: topology optimization and proximity error correction. The key steps in microlithography/nanolithography include (i) modeling the physical inputs of the fabrication process, including the ultraviolet light illumination source and the mask, as the design variables in optimization, and (ii) applying physical filtering and Heaviside projection for topology optimization. Meanwhile, the performance of the device is optimized and robust with respect to process variations, such as dose/photo-resist variations and lens defocus. A compliant micro-gripper design example is considered to demonstrate the applicability of this approach.

  19. High-resolution wave-theory-based ultrasound reflection imaging using the split-step fourier and globally optimized fourier finite-difference methods

    Science.gov (United States)

    Huang, Lianjie

    2013-10-29

    Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wavenumber domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wavenumber domain for propagation in a reference medium. The second step consists of applying another phase-shift term to the data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data inputs indicate that the method provides significant improvements in both image quality and resolution.

  20. Increase of Gas-Turbine Plant Efficiency by Optimizing Operation of Compressors

    Science.gov (United States)

    Matveev, V.; Goriachkin, E.; Volkov, A.

    2018-01-01

    The article presents an optimization method for improving the working process of axial compressors of gas turbine engines. The developed method automatically searches for the best geometry of the compressor blades using the optimization software IOSO and the CFD software NUMECA Fine/Turbo. At each optimization step, the compressor parameters were calculated at the working and stall points of its performance map. The study was carried out for a seven-stage high-pressure compressor and three-stage low-pressure compressors. As a result of the optimization, an efficiency improvement was achieved for all investigated compressors.

  1. Standardization and optimization of arthropod inventories-the case of Iberian spiders

    DEFF Research Database (Denmark)

    Bondoso Cardoso, Pedro Miguel

    2009-01-01

    and optimization of sampling protocols, especially for mega-diverse arthropod taxa. This study had two objectives: (1) to propose guidelines and statistical methods to improve the standardization and optimization of arthropod inventories, and (2) to propose a standardized and optimized protocol for Iberian spiders, by finding common results between the optimal options for the different sites. The steps listed were successfully followed in the determination of a sampling protocol for Iberian spiders. A protocol with three sub-protocols of varying degrees of effort (24, 96 and 320 h of sampling) is proposed. I also...

  2. Melanin fluorescence spectra by step-wise three photon excitation

    Science.gov (United States)

    Lai, Zhenhua; Kerimo, Josef; DiMarzio, Charles A.

    2012-03-01

    Melanin is the characteristic chromophore of human skin, with various potential biological functions. Kerimo discovered enhanced melanin fluorescence by step-wise three-photon excitation in 2011. In this article, the step-wise three-photon excited fluorescence (STPEF) spectrum of melanin between 450 nm and 700 nm is reported. The melanin STPEF spectrum exhibited an exponential increase with wavelength. However, there was a probability of about 33% that another kind of step-wise multi-photon excited fluorescence (SMPEF), peaking at 525 nm and shown by previous research, could also be generated using the same process. Using an excitation source at 920 nm as opposed to 830 nm increased the potential for generating the SMPEF peak at 525 nm. The SMPEF spectrum peaking at 525 nm photo-bleached faster than the STPEF spectrum.

  3. Adiabatic tapered optical fiber fabrication in two step etching

    Science.gov (United States)

    Chenari, Z.; Latifi, H.; Ghamari, S.; Hashemi, R. S.; Doroodmand, F.

    2016-01-01

    A two-step etching method using HF acid and buffered HF is proposed to fabricate adiabatic biconical optical fiber tapers. Because the etching rate in the second step is almost 3 times slower than in the previous droplet etching method, terminating the fabrication process is controllable enough to achieve a desired fiber diameter. By monitoring the transmitted spectrum, the final diameter and adiabaticity of the tapers are deduced. Tapers with losses of about 0.3 dB in air and 4.2 dB in water were produced. The biconical fiber taper fabricated using this method was used to excite whispering gallery modes (WGMs) on a microsphere surface in an aquatic environment, making such tapers suitable for applications like WGM biosensors.

  4. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(nlogn)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
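
    The parametric-search machinery is intricate, but the decision subroutine it is built around is simple. The following sketch is an illustrative feasibility test, not the paper's algorithm: given a tolerance eps, it greedily checks whether k horizontal steps can keep every weighted vertical distance within eps.

```python
# Illustrative feasibility test related to (but much simpler than) the
# paper's algorithm: given tolerance eps, decide greedily whether k
# horizontal steps can keep every weighted distance w*|y - level| within
# eps. Parametric search wraps a test of this kind to find the optimal eps.
def fits_with_k_steps(points, k, eps):
    """points: iterable of (x, y, w) with w > 0; True if k steps suffice."""
    steps_used = 1
    lo, hi = float("-inf"), float("inf")   # feasible levels for current step
    for _, y, w in sorted(points):         # sweep points left to right
        new_lo, new_hi = max(lo, y - eps / w), min(hi, y + eps / w)
        if new_lo > new_hi:                # current step cannot cover this point
            steps_used += 1
            lo, hi = y - eps / w, y + eps / w
        else:
            lo, hi = new_lo, new_hi
    return steps_used <= k
```

    For example, with points [(0, 1.0, 1), (1, 1.1, 1), (2, 5.0, 1), (3, 5.2, 1)], two steps suffice at eps = 0.2 but one step does not.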

  5. Daily step count predicts acute exacerbations in a US cohort with COPD.

    Science.gov (United States)

    Moy, Marilyn L; Teylan, Merilee; Weston, Nicole A; Gagnon, David R; Garshick, Eric

    2013-01-01

    COPD is characterized by variability in exercise capacity and physical activity (PA), and acute exacerbations (AEs). Little is known about the relationship between daily step count, a direct measure of PA, and the risk of AEs, including hospitalizations. In an observational cohort study of 169 persons with COPD, we directly assessed PA with the StepWatch Activity Monitor, an ankle-worn accelerometer that measures daily step count. We also assessed exercise capacity with the 6-minute walk test (6MWT) and patient-reported PA with the St. George's Respiratory Questionnaire Activity Score (SGRQ-AS). AEs and COPD-related hospitalizations were assessed and validated prospectively over a median of 16 months. Mean daily step count was 5804±3141 steps. Over 209 person-years of observation, there were 263 AEs (incidence rate 1.3±1.6 per person-year) and 116 COPD-related hospitalizations (incidence rate 0.56±1.09 per person-year). Adjusting for FEV1 % predicted and prednisone use for AE in previous year, for each 1000 fewer steps per day walked at baseline, there was an increased rate of AEs (rate ratio 1.07; 95%CI = 1.003-1.15) and COPD-related hospitalizations (rate ratio 1.24; 95%CI = 1.08-1.42). There was a significant linear trend of decreasing daily step count by quartiles and increasing rate ratios for AEs (P = 0.008) and COPD-related hospitalizations (P = 0.003). Each 30-meter decrease in 6MWT distance was associated with an increased rate ratio of 1.07 (95%CI = 1.01-1.14) for AEs and 1.18 (95%CI = 1.07-1.30) for COPD-related hospitalizations. Worsening of SGRQ-AS by 4 points was associated with an increased rate ratio of 1.05 (95%CI = 1.01-1.09) for AEs and 1.10 (95%CI = 1.02-1.17) for COPD-related hospitalizations. Lower daily step count, lower 6MWT distance, and worse SGRQ-AS predict future AEs and COPD-related hospitalizations, independent of pulmonary function and previous AE history. These results support the importance of
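
    The reported rate ratio of 1.07 per 1,000 fewer steps compounds multiplicatively, assuming the usual log-linear regression model behind such estimates. A quick back-of-envelope illustration:

```python
# Back-of-envelope illustration (assuming the multiplicative model that
# underlies such rate-ratio estimates): a rate ratio of 1.07 per 1,000
# fewer daily steps compounds, so a 3,000-step deficit multiplies the
# expected exacerbation rate by 1.07**3.
baseline_rate = 1.3                 # AEs per person-year reported in the cohort
rate_ratio_per_1000 = 1.07
deficit_steps = 3000
multiplier = rate_ratio_per_1000 ** (deficit_steps / 1000)
print(round(multiplier, 3))                   # 1.225
print(round(baseline_rate * multiplier, 2))   # 1.59 expected AEs per person-year
```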

  6. Exploring the ‘ultimate’ step in the mediatization of political parties

    DEFF Research Database (Denmark)

    Ørsten, Mark; Willig, Ida; Pedersen, Leif Hemming

    Previous studies exploring the fourth dimension of mediatization, that is, the extent to which political parties adjust their perceptions and behaviour to news media logic, have focused on three steps of structural change that political parties may take as a response to the increasing..., parties may begin to especially focus on candidates who are skilled in communicating with the news media (Strömbäck & Van Alest, 2013). So far, research into the fourth dimension of mediatization in Denmark has focused mostly on the first two steps (Blach-Ørsten, 2016; Elmelund-Præstekær & Hopmann, 2016...

  7. Strong Stability Preserving Two-step Runge–Kutta Methods

    KAUST Repository

    Ketcheson, David I.; Gottlieb, Sigal; Macdonald, Colin B.

    2011-01-01

    We investigate the strong stability preserving (SSP) property of two-step Runge–Kutta (TSRK) methods. We prove that all SSP TSRK methods belong to a particularly simple subclass of TSRK methods, in which stages from the previous step are not used. We derive simple order conditions for this subclass. Whereas explicit SSP Runge–Kutta methods have order at most four, we prove that explicit SSP TSRK methods have order at most eight. We present explicit TSRK methods of up to eighth order that were found by numerical search. These methods have larger SSP coefficients than any known methods of the same order of accuracy and may be implemented in a form with relatively modest storage requirements. The usefulness of the TSRK methods is demonstrated through numerical examples, including integration of very high order weighted essentially non-oscillatory discretizations.
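
    For context, the classic three-stage, third-order SSP Runge-Kutta method of Shu and Osher is a representative of the one-step class the paper compares against; it is not one of the TSRK methods constructed there.

```python
# For context: the classic three-stage, third-order SSP Runge-Kutta method
# of Shu and Osher, in its convex-combination (SSP) form. It belongs to the
# one-step class the paper compares against; it is NOT one of the two-step
# (TSRK) methods constructed in the paper.
import math

def ssprk3_step(f, t, u, h):
    u1 = u + h * f(t, u)                              # forward Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + h * f(t + h, u1))    # convex combination
    return u / 3.0 + (2.0 / 3.0) * (u2 + h * f(t + 0.5 * h, u2))

# Integrate u' = -u, u(0) = 1 to t = 1; the exact solution is exp(-1).
u, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    u = ssprk3_step(lambda s, v: -v, t, u, h)
    t += h
print(abs(u - math.exp(-1)))   # third-order accurate: error far below 1e-6
```

    The convex-combination form is what makes the SSP property transparent: each stage is a weighted average of forward Euler steps, so any monotonicity bound satisfied by forward Euler carries over.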

  9. Codon optimizing for increased membrane protein production

    DEFF Research Database (Denmark)

    Mirzadeh, K.; Toddo, S.; Nørholm, Morten

    2016-01-01

    As demonstrated with two membrane-embedded transporters in Escherichia coli, the method was more effective than optimizing the entire coding sequence. The method we present is PCR based and requires three simple steps: (1) the design of two PCR primers, one of which is degenerate; (2) the amplification...

  10. Stepping reaction time and gait adaptability are significantly impaired in people with Parkinson's disease: Implications for fall risk.

    Science.gov (United States)

    Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C

    2018-02-01

    Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A step-by-step experiment of 3C-SiC hetero-epitaxial growth on 4H-SiC by CVD

    Energy Technology Data Exchange (ETDEWEB)

    Xin, Bin [School of Microelectronics, Xidian University, Key Laboratory of Wide Band-Gap Semiconductor Materials and Devices, Xi’an 710071 (China); Jia, Ren-Xu, E-mail: rxjia@mail.xidian.edu.cn [School of Microelectronics, Xidian University, Key Laboratory of Wide Band-Gap Semiconductor Materials and Devices, Xi’an 710071 (China); Hu, Ji-Chao [School of Microelectronics, Xidian University, Key Laboratory of Wide Band-Gap Semiconductor Materials and Devices, Xi’an 710071 (China); Tsai, Cheng-Ying [Graduate Institute of Electronics Engineering, National Taiwan University, 10617 Taipei, Taiwan (China); Lin, Hao-Hsiung, E-mail: hhlin@ntu.edu.tw [Graduate Institute of Electronics Engineering, National Taiwan University, 10617 Taipei, Taiwan (China); Graduate Institute of Photonics and Optoelectronics, National Taiwan University, 10617 Taipei, Taiwan (China); Zhang, Yu-Ming [School of Microelectronics, Xidian University, Key Laboratory of Wide Band-Gap Semiconductor Materials and Devices, Xi’an 710071 (China)

    2015-12-01

    Highlights: • A step-by-step experiment to investigate the growth mechanism of hetero-epitaxial SiC is proposed. • Our experiment showed a protrusive, regular “hill” morphology with a much lower density of double-positioning boundary (DPB) defects, which normally occur in high density with shallow grooves. Based on the defect morphology, anisotropic migration of adatoms is regarded as forming the morphology of DPB defects, and a new “DPB defects assist epitaxy” growth mode is proposed based on the Frank-van der Merwe growth mode. - Abstract: To investigate the growth mechanism of hetero-epitaxial SiC, a step-by-step experiment of 3C-SiC epitaxial layers grown on 4H-SiC on-axis substrates by the CVD method is reported in this paper. Four step experiments with four one-quarter 4H-SiC wafers were performed. Optical microscopy and atomic force microscopy (AFM) were used to characterize the morphology of the epitaxial layers. It was previously found that the main factor affecting the epilayer morphology was double-positioning boundary (DPB) defects, which normally appear in high density with shallow grooves. However, a protrusive regular “hill” morphology with a much lower density was observed in our experiment under high-temperature growth conditions. The anisotropic migration of adatoms is regarded as forming the morphology of DPB defects, and a new “DPB defects assist epitaxy” growth mode has been proposed based on the Frank-van der Merwe growth mode. Raman spectroscopy and X-ray diffraction were used to examine the polytypes and the quality of the epitaxial layers.

  12. Dynamic optimization of the complex adaptive controlling by the structure of enterprise’s product range

    Directory of Open Access Journals (Sweden)

    Andrey Fyodorovich Shorikov

    2013-06-01

    Full Text Available This paper reviews a methodical approach to solving the multi-step dynamic problem of optimal integrated adaptive management of the structure of an enterprise’s product portfolio. To organize optimal adaptive terminal control of the system, a recurrent algorithm is offered which reduces the initial multistage problem to the realization of a finite sequence of optimal program terminal control problems. In turn, the solution of each optimal program terminal control problem is reduced to the realization of a finite sequence of single-step operations in the form of linear and convex mathematical programming problems. Thus, the offered approach allows management solutions to be developed under current information support, taking feedback into account, that create the optimal structure of an enterprise’s product lines, contributing to the optimization of profit as well as the maintenance of the desired level of profit over a long period of time.

  13. 2-Step scalar deadzone quantization for bitplane image coding.

    Science.gov (United States)

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
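
    As a point of reference, the USDQ baseline that 2SDQ refines can be sketched in a few lines. The code below is a generic deadzone quantizer, not the paper's 2SDQ scheme: coefficients inside a double-width deadzone around zero map to index 0, and each nonzero index reconstructs at the midpoint of its interval.

```python
# Sketch of the USDQ baseline that 2SDQ refines: a uniform scalar deadzone
# quantizer with step size delta. Values in (-delta, delta) collapse to
# index 0 (the deadzone is twice as wide as the other bins); reconstruction
# places each nonzero index at the midpoint of its interval. 2SDQ itself
# (not reproduced here) additionally switches between two step sizes
# depending on coefficient density.
import math

def usdq_index(x, delta):
    return int(math.copysign(math.floor(abs(x) / delta), x))

def usdq_reconstruct(q, delta):
    return 0.0 if q == 0 else math.copysign((abs(q) + 0.5) * delta, q)

for x in [-2.3, -0.4, 0.0, 0.7, 3.9]:
    q = usdq_index(x, 1.0)
    print(x, q, usdq_reconstruct(q, 1.0))
```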

  14. A Four-Feet Walking-Type Rotary Piezoelectric Actuator with Minute Step Motion.

    Science.gov (United States)

    Liu, Yingxiang; Wang, Yun; Liu, Junkao; Xu, Dongmei; Li, Kai; Shan, Xiaobiao; Deng, Jie

    2018-05-08

    A four-feet walking-type rotary piezoelectric actuator with minute step motion was proposed. The proposed actuator uses the rectangular motions of four driving feet to push the rotor step by step; this operating principle differs from previous non-resonant actuators using direct-driving, inertial-driving, and inchworm-type mechanisms. The mechanism of the proposed actuator is discussed in detail. Transient analyses were accomplished with ANSYS software to simulate the motion trajectory of the driving foot and to find the response characteristics. A prototype was manufactured to verify the mechanism and to test the mechanical characteristics. A minimum resolution of 0.095 μrad and a maximum torque of 49 N·mm were achieved by the prototype, and the output speed was varied by changing the driving voltage and working frequency. This work provides a new mechanism for the design of a rotary piezoelectric actuator with minute step motion.

  15. Optimization method of rod-type burnable poisons for nuclear designs of HTGRs

    International Nuclear Information System (INIS)

    Yamashita, Kiyonobu

    1994-01-01

    In block-type HTGRs, control rod insertion depths into the cores have to be kept as small as possible during full-power operation to avoid a fuel temperature rise. Thus, the specifications (poison atom density (N BP ) and radius (r)) of rod-type burnable poisons (BPs) have to be optimized so that the effective multiplication factor (k eff ) remains constant at a minimum value throughout a planned burnup period. Until now, however, this optimization has been time-consuming work, since survey calculations had to be done for most possible combinations of N BP and r. To solve this problem, I have found an optimization method consisting of two steps. In the first step, approximation formulas describing a time-dependent relation among the effective absorption cross sections (Σ aBP ), N BP and r are used to select promising combinations of N BP and r beforehand. In the second step, the best combination of N BP and r is determined by a comparison between the Σ aBP of each promising combination and the expected one. The number of survey calculations was reduced to about 1/10 by the optimization method. The change in k eff over 600 burnup days was reduced to 2%Δk by the method. Hence, it was made possible to operate reactors practically without inserting the control rods into the cores. (author)

  16. The Value of Step-by-Step Risk Assessment for Unmanned Aircraft

    DEFF Research Database (Denmark)

    La Cour-Harbo, Anders

    2018-01-01

    The new European legislation expected in 2018 or 2019 will introduce a step-by-step process for conducting risk assessments for unmanned aircraft flight operations. This is a relatively simple approach to a very complex challenge. This work compares the step-by-step process to high-fidelity risk modeling, and shows that, at least for a series of example flight missions, there is reasonable agreement between the two very different methods.

  17. Correspondence optimization in 2D standardized carotid wall thickness map by description length minimization: A tool for increasing reproducibility of 3D ultrasound-based measurements.

    Science.gov (United States)

    Chen, Yimin; Chiu, Bernard

    2016-12-01

    The previously described 2D standardized vessel-wall-plus-plaque thickness (VWT) maps constructed from 3D ultrasound vessel wall measurements using an arc-length (AL) scaling approach adjusted for the geometric variability of carotid arteries and allowed for comparisons of VWT distributions in longitudinal and cross-sectional studies. However, this mapping technique did not optimize point correspondence of the carotid arteries investigated. The potential misalignment may lead to errors in point-wise VWT comparisons. In this paper, we developed and validated an algorithm based on steepest description length (DL) descent to optimize the point correspondence implied by the 2D VWT maps. The previously described AL approach was applied to obtain initial 2D maps for a group of carotid arteries. The 2D maps were reparameterized based on an iterative steepest DL descent approach, which consists of the following two steps. First, landmarks established by resampling the 2D maps were aligned using the Procrustes algorithm. Then, the gradient of the DL with respect to horizontal and vertical reparameterizations of each landmark on the 2D maps was computed, and the 2D maps were subsequently deformed in the direction of the steepest descent of DL. These two steps were repeated until convergence. The quality of the correspondence was evaluated in a phantom study and an in vivo study involving ten carotid arteries enrolled in a 3D ultrasound interscan variability study. The correspondence quality was evaluated in terms of the compactness and generalization ability of the statistical shape model built based on the established point correspondence in both studies. In the in vivo study, the effect of the proposed algorithm on interscan variability of VWT measurements was evaluated by comparing the percentage of landmarks with statistically significant VWT-change before and after point correspondence optimization. The statistical shape model constructed with optimized

  18. Optimal stability polynomials for numerical integration of initial value problems

    KAUST Repository

    Ketcheson, David I.

    2013-01-08

    We consider the problem of finding optimally stable polynomial approximations to the exponential for application to one-step integration of initial value ordinary and partial differential equations. The objective is to find the largest stable step size and corresponding method for a given problem when the spectrum of the initial value problem is known. The problem is expressed in terms of a general least deviation feasibility problem. Its solution is obtained by a new fast, accurate, and robust algorithm based on convex optimization techniques. Global convergence of the algorithm is proven in the case that the order of approximation is one and in the case that the spectrum encloses a starlike region. Examples demonstrate the effectiveness of the proposed algorithm even when these conditions are not satisfied.
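
    The optimization target can be made concrete with a toy version of the feasibility question. The sketch below is an illustration, not the paper's convex-optimization algorithm: the stability polynomial is fixed to the third-order Taylor polynomial of the exponential, and bisection recovers the largest stable step size for a real, negative spectrum.

```python
# Toy version of the question the paper optimizes (an illustration, not its
# convex-optimization algorithm): for a fixed stability polynomial R(z) and
# a known spectrum, the largest stable step size h is the largest h with
# |R(h*lambda)| <= 1 for every eigenvalue lambda. Here R is the third-order
# Taylor polynomial of exp, the stability polynomial of third-order RK.
def R(z):
    return 1.0 + z + z ** 2 / 2.0 + z ** 3 / 6.0

def stable(h, spectrum):
    return all(abs(R(h * lam)) <= 1.0 for lam in spectrum)

spectrum = [-1.0, -10.0, -100.0]   # e.g. eigenvalues of a stiff linear problem
lo, hi = 0.0, 1.0                  # bisect on the stability boundary
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if stable(mid, spectrum) else (lo, mid)
print(lo * 100)   # h_max scaled by |lambda|_max: the RK3 real-axis limit (~2.51)
```

    The paper turns this feasibility structure around: instead of fixing R and searching over h, it searches over polynomial coefficients to maximize the stable step size for the given spectrum.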

  19. Stepped frequency imaging for flaw monitoring: Final report

    International Nuclear Information System (INIS)

    Hildebrand, B.P.

    1988-09-01

    This report summarizes the results of research into the usefulness of stepped frequency imaging (SFI) for nuclear power plant inspection. SFI is a method for producing ultrasonic holographic images without the need to sweep a two-dimensional aperture with the transducer. Instead, the transducer may be translated along a line. At each position of the transducer the frequency is stepped over a finite preselected bandwidth. The frequency-stepped data are then processed to synthesize the second dimension. In this way it is possible to generate images in regions that are relatively inaccessible to two-dimensional scanners. This report reviews the theory and the experimental work verifying the technique, and then explores its possible applications in the nuclear power industry. It also outlines how this new capability can be incorporated into the SDL-1000 Imaging System previously developed for EPRI. The report concludes with five suggested uses for the SFI method: monitoring suspect or repaired regions of feedwater nozzles; monitoring pipe cracks repaired by weld overlay; monitoring crack depth during test block production; imaging flaws where access is difficult; and imaging flaws through cladding without distortion.

  20. Kinetic Energy Dissipation on Labyrinth Configuration Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Jaafar S. Maatooq

    2017-12-01

Full Text Available In the present work a labyrinth (zigzag) shape has been used to configure the steps of a stepped spillway in a physical model. This configuration has not been introduced previously by investigators or used in the construction of dams or cascades, and it would be expected to improve the flow over the chute. Magnifying the width of each step's flow path from W to LT induces interlocking between the mainstream and the flow that spreads laterally along the labyrinth path. This reduces jet velocities near the surfaces, minimizing the potential for cavitation, while the enlarged circulation regions maximize air entrainment. The results were encouraging; for example, the usual adverse effect of spillway slope was reversed. The average gain in kinetic energy dissipation with the labyrinth shape, compared with the traditional shape, ranged between 13% and 44%. Predictive formulas based on iterative analysis are proposed and can be recommended for evaluation and design.

  1. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    International Nuclear Information System (INIS)

    Rocha, Humberto; Dias, Joana M; Ferreira, Brígida C; Lopes, Maria C

    2013-01-01

Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. Beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of IMRT plans, both enhancing organ sparing and improving tumor coverage. However, in clinical practice, beam directions most of the time continue to be selected manually by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework, furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem. (paper)
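The poll-and-refine mechanics of a pattern search can be sketched on a toy objective. This is a minimal coordinate pattern search, not the paper's BAO-specific framework: the poll polls a positive spanning set of directions and the mesh is halved when no direction improves; the score-guided search step is omitted.

```python
# Minimal generalized pattern search: an opportunistic poll step over a
# mesh neighborhood, with the mesh halved when the poll fails. The BAO
# paper adds a search step guided by beam's-eye-view dose scores; the
# objective below is just a toy quadratic, not a dose model.

def pattern_search(f, x0, mesh=1.0, tol=1e-6, max_iter=10_000):
    x, fx = list(x0), f(x0)
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # positive spanning set
    for _ in range(max_iter):
        if mesh < tol:
            break
        improved = False
        for dx, dy in dirs:                      # poll step
            cand = [x[0] + mesh * dx, x[1] + mesh * dy]
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
                break                            # accept first improvement
        if not improved:
            mesh *= 0.5                          # refine the mesh
    return x, fx

f = lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2
x, fx = pattern_search(f, [0.0, 0.0])
print([round(v, 3) for v in x])  # converges to the minimizer [2, -1]
```

Because the poll needs no derivatives, the same loop works even when f is a black-box score, which is what makes the framework attractive for non-convex problems like BAO.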

  2. Optimization of power system operation

    CERN Document Server

    Zhu, Jizhong

    2015-01-01

This book applies the latest applications of new technologies to power system operation and analysis, including new and important areas that are not covered in the previous edition. Optimization of Power System Operation covers both traditional and modern technologies, including power flow analysis, steady-state security region analysis, security constrained economic dispatch, multi-area system economic dispatch, unit commitment, optimal power flow, smart grid operation, optimal load shedding, optimal reconfiguration of distribution networks, power system uncertainty analysis, power system sensitivity analysis, analytic hierarchical process, neural networks, fuzzy theory, genetic algorithms, evolutionary programming, and particle swarm optimization, among others. New topics such as the wheeling model, multi-area wheeling, and the total transfer capability computation in multiple areas are also addressed. The new edition of this book continues to provide engineers and academics with a complete picture of the optimization of techn...

  3. Transportation package design using numerical optimization

    International Nuclear Information System (INIS)

    Harding, D.C.; Witkowski, W.R.

    1991-01-01

    The purpose of this overview is twofold: first, to outline the theory and basic elements of numerical optimization; and second, to show how numerical optimization can be applied to the transportation packaging industry and used to increase efficiency and safety of radioactive and hazardous material transportation packages. A more extensive review of numerical optimization and its applications to radioactive material transportation package design was performed previously by the authors (Witkowski and Harding 1992). A proof-of-concept Type B package design is also presented as a simplified example of potential improvements achievable using numerical optimization in the design process

  4. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
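The core idea of the scheme, advancing each vertical mode with its own optimal time step and recombining, can be sketched with two decoupled linear modes. This is only an illustration of subcycling on scalar test equations, not the UCLA model's EMTSS; the eigenvalues and steps are illustrative assumptions.

```python
# Sketch of split, multiple-time-step integration: each linear mode is
# advanced with its own stable step size, then the mode solutions are
# recombined. Forward Euler on the scalar test equation y' = lam * y
# stands in for the model's per-mode integrators.

import math

def integrate_mode(lam, y0, T, dt):
    # forward Euler with a mode-specific step; stable if |1 + dt*lam| <= 1
    steps = int(round(T / dt))
    y = y0
    for _ in range(steps):
        y += dt * lam * y
    return y

T = 1.0
slow = integrate_mode(-1.0, 1.0, T, dt=0.1)     # slow mode, large step
fast = integrate_mode(-50.0, 1.0, T, dt=0.01)   # fast mode, small step

# recombination here is trivial (the modes are decoupled); compare each
# against the exact decay exp(lam * T)
print(abs(slow - math.exp(-1.0)), abs(fast - math.exp(-50.0)))
```

The payoff is the same as in the abstract: the slow mode is not forced onto the fast mode's small step, so most of the work is done at the large step dictated by the low-frequency dynamics.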

  5. Fluence-modulated radiotherapy with segmentation integrated into the optimization

    International Nuclear Information System (INIS)

    Baer, W.; Alber, M.; Nuesslin, F.

    2003-01-01

On the basis of two clinical cases, we present fluence-modulated radiotherapy with a sequencer integrated into the optimization of our treatment-planning software HYPERION. In each case, we obtained simple relations for the dependence of the total number of segments on the complexity of the sequencing, as well as for the dependence of the dose-distribution quality on the number of segments. For both clinical cases, it was possible to obtain treatment plans that complied with the clinical demands on dose distribution and number of segments. Also, compared to the widespread concept of equidistant steps, our method of sequencing with fluence steps of variable size led to a significant reduction of the number of segments while maintaining the quality of the dose distribution. Our findings substantiate the value of integrating the sequencer into the optimization for the clinical efficiency of IMRT [de]

  6. Arousal and exposure duration affect forward step initiation

    Directory of Open Access Journals (Sweden)

    Daniëlle eBouman

    2015-11-01

Full Text Available Emotion influences the parameters of goal-directed whole-body movements in several ways. For instance, previous research has shown that approaching (moving toward) pleasant stimuli is easier than approaching unpleasant stimuli. However, some studies found that when emotional pictures are viewed for a longer time, approaching unpleasant stimuli may in fact be facilitated. Viewing duration may therefore have modulated whole-body approach movements in previous research, but this has not been investigated before. In the current study, participants initiated a step forward after viewing neutral, high-arousal pleasant, and high-arousal unpleasant stimuli. The viewing duration of the stimuli was set to 7 different durations, varying from 100 to 4000 ms. Valence and arousal scores were collected for all stimuli. The results indicate that both viewing duration and the arousal of the stimuli influence kinematic parameters in forward gait initiation. Specifically, longer viewing duration, compared to shorter viewing duration, (a) diminished the step length and peak velocity for both neutral and emotional stimuli, (b) increased reaction time for neutral stimuli, and (c) decreased reaction time for pleasant and unpleasant stimuli. Strikingly, no differences were found between high-arousal pleasant and high-arousal unpleasant stimuli. In other words, the valence of the stimuli did not influence kinematic parameters of forward step initiation. In contrast, the arousal level (neutral: low; pleasant and unpleasant: high) explained the variance found in the results. The kinematics of forward gait initiation seemed to be reflected in the subjective arousal scores, but not the valence scores, so arousal appears to affect forward gait initiation parameters more strongly than valence. In addition, longer viewing duration seemed to cause diminished alertness, affecting GI parameters. These results shed new light on the prevailing theoretical interpretations regarding approach

  7. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    Science.gov (United States)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
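The inter-patch constraint the abstract describes can be sketched as a relaxation sweep over locally chosen time steps. The factor-of-two cap between neighbors is an illustrative assumption standing in for the paper's constraint, on a 1-D chain of patches rather than cubic patches in 3-D.

```python
# Sketch: enforce a neighbor constraint on locally chosen time steps.
# Each patch starts from its own CFL-limited step; repeated relaxation
# sweeps cap every step at a fixed ratio times its smallest neighbor's
# step, an assumed stand-in for the paper's inter-patch constraint.

def enforce_neighbor_constraint(dts, ratio=2.0):
    dts = list(dts)
    changed = True
    while changed:                              # sweep until a fixed point
        changed = False
        for i in range(len(dts)):
            for j in (i - 1, i + 1):            # 1-D chain of patches
                if 0 <= j < len(dts) and dts[i] > ratio * dts[j]:
                    dts[i] = ratio * dts[j]     # cap at ratio * neighbor
                    changed = True
    return dts

local = [8.0, 8.0, 1.0, 8.0, 8.0]   # a sharp feature forces a small step
print(enforce_neighbor_constraint(local))  # [4.0, 2.0, 1.0, 2.0, 4.0]
```

The result is the expected "staircase" of time steps around the constrained patch: information from the small-step region cannot outrun the larger steps of its neighbors, which is exactly the non-local character of the CFL condition.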

  8. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip

    Directory of Open Access Journals (Sweden)

    Davison JA

    2015-08-01

Full Text Available James A Davison, Wolfe Eye Clinic, Marshalltown, IA, USA. Purpose: To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating on soft cataracts. Patients and methods: A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and a customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Results: Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect, with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled, with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for aspiration flow, 20 mL/min, and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting-selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. Conclusion: A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear

  9. Variable Neighborhood Search for Parallel Machines Scheduling Problem with Step Deteriorating Jobs

    Directory of Open Access Journals (Sweden)

    Wenming Cheng

    2012-01-01

Full Text Available In many real scheduling environments, a job processed later needs longer time than the same job started earlier. This phenomenon, known as scheduling with deteriorating jobs, arises in many industrial applications. In this paper, we study a scheduling problem of minimizing the total completion time on identical parallel machines, where the processing time of a job is a step function of its starting time and a job-dependent deteriorating date. Firstly, a mixed integer programming model is presented for the problem. Then, a modified weight-combination search algorithm and a variable neighborhood search are employed to yield optimal or near-optimal schedules. To evaluate the performance of the proposed algorithms, computational experiments are performed on randomly generated test instances. Finally, computational results show that the proposed approaches obtain near-optimal solutions in a reasonable computational time even for large-sized problems.
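The step processing-time model is easy to make concrete: a job j takes its basic time a_j if it starts no later than its deteriorating date d_j, and a_j + b_j otherwise. The sketch below only *evaluates* a given parallel-machine schedule under this model; the instance data and the machine assignment are made-up placeholders, not the paper's variable neighborhood search.

```python
# Sketch: evaluating a parallel-machine schedule with step-deteriorating
# jobs. Job j has basic time a[j]; if it starts after its deteriorating
# date d[j], its processing time jumps to a[j] + b[j] (the step function
# in the paper's model). The schedule itself is a hand-picked example.

def total_completion_time(schedule, a, b, d):
    # schedule: list of job sequences, one list per machine
    total = 0.0
    for machine in schedule:
        t = 0.0                                  # machine-local clock
        for j in machine:
            p = a[j] if t <= d[j] else a[j] + b[j]   # step function
            t += p                               # completion time of j
            total += t
    return total

a = [3, 2, 4, 1]             # basic processing times
b = [5, 1, 2, 6]             # deterioration penalties
d = [2, 3, 1, 5]             # deteriorating dates
schedule = [[3, 0], [1, 2]]  # two machines, jobs in processing order
print(total_completion_time(schedule, a, b, d))  # → 15.0
```

A local search such as the paper's variable neighborhood search would repeatedly perturb `schedule` (swap, insert, move jobs across machines) and keep changes that lower this objective.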

  10. Improved protein kinase C affinity through final step diversification of a simplified salicylate-derived bryostatin analog scaffold.

    Science.gov (United States)

    Wender, Paul A; Staveness, Daryl

    2014-10-03

    Bryostatin 1, in clinical trials or preclinical development for cancer, Alzheimer's disease, and a first-of-its-kind strategy for HIV/AIDS eradication, is neither readily available nor optimally suited for clinical use. In preceding work, we disclosed a new class of simplified bryostatin analogs designed for ease of access and tunable activity. Here we describe a final step diversification strategy that provides, in only 25 synthetic steps, simplified and tunable analogs with bryostatin-like PKC modulatory activities.

  11. Learning optimal embedded cascades.

    Science.gov (United States)

    Saberian, Mohammad Javad; Vasconcelos, Nuno

    2012-10-01

    The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
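The speed advantage of an embedded cascade comes from early rejection: each stage adds to a running boosted score, and evaluation stops the moment the score drops below that stage's threshold. The stage functions and thresholds below are hand-picked placeholders, not weak learners trained by RCBoost or ECBoost.

```python
# Sketch: embedded-cascade evaluation with early rejection. Later stages
# reuse the accumulated score of earlier ones ("embedded"), and most
# negatives are discarded after only a stage or two, which is the source
# of the cascade's speed. Stages/thresholds here are illustrative only.

def cascade_detect(x, stages, thresholds):
    score = 0.0
    for stage, thr in zip(stages, thresholds):
        score += stage(x)            # embedded: stages share one score
        if score < thr:
            return False             # early rejection -> average speed
    return True                      # survived all stages -> detection

stages = [lambda x: x[0], lambda x: x[1], lambda x: x[0] * x[1]]
thresholds = [0.2, 0.5, 1.0]

print(cascade_detect((0.9, 0.8), stages, thresholds))  # True
print(cascade_detect((0.1, 0.9), stages, thresholds))  # False, rejected at stage 1
```

In the paper's terms, RCBoost would learn the stage scores subject to the detection-rate constraint, and ECBoost would choose how many weak learners each stage gets; this sketch fixes both by hand.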

  12. Reduced order modeling and parameter identification of a building energy system model through an optimization routine

    International Nuclear Information System (INIS)

    Harish, V.S.K.V.; Kumar, Arun

    2016-01-01

Highlights: • A BES model based on first principles is developed and solved numerically. • Parameters of the lumped capacitance model are fitted using the proposed optimization routine. • Validations are shown for different types of building construction elements. • Step-response excitations for outdoor air temperature and relative humidity are analyzed. - Abstract: Different control techniques, together with intelligent building technology (Building Automation Systems), are used to improve the energy efficiency of buildings. In almost all control projects, it is crucial to have building energy models with high computational efficiency in order to design and tune the controllers and simulate their performance. In this paper, a set of partial differential equations is formulated accounting for energy flow within the building space. These equations are then solved as conventional finite difference equations using the Crank–Nicolson scheme. Such a higher-order model is regarded as the benchmark model. An optimization algorithm has been developed, depicted through a flowchart, which minimizes the sum-squared error between the step responses of the numerical and the optimal model. The optimal model of the construction element is an RC-network model whose R and C values are estimated using a non-linear time-invariant constrained optimization routine. The model is validated by comparing its step responses with those of two other RC-network models whose parameter values are selected based on certain criteria. Validations are shown for different types of building construction elements, viz. low, medium, and heavy thermal capacity elements. Simulation results show that the optimal model follows the step responses of the numerical model more closely than the other two models.
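The fitting step can be illustrated on the simplest possible case: a single-R, single-C element whose step response is 1 − exp(−t/RC), fitted to a "benchmark" response by minimizing the sum-squared error. The grid search below is a crude stand-in for the paper's constrained nonlinear optimization routine, and the parameter values are invented for the example.

```python
# Sketch: fit a lumped 1R1C model to a benchmark step response by
# minimizing the sum-squared error (SSE). A coarse grid search stands in
# for the paper's constrained nonlinear optimizer; note that only the
# product tau = R*C is identifiable from a single step response.

import math

def step_response(R, C, times):
    tau = R * C
    return [1.0 - math.exp(-t / tau) for t in times]

times = [i * 0.5 for i in range(40)]
benchmark = step_response(2.0, 3.0, times)      # pretend numerical model

def sse(R, C):
    model = step_response(R, C, times)
    return sum((m - b) ** 2 for m, b in zip(model, benchmark))

# coarse grid search over R and C in [0.5, 5.0]
best = min(((sse(R / 10, C / 10), R / 10, C / 10)
            for R in range(5, 51) for C in range(5, 51)))
print(best[1] * best[2])  # recovered time constant tau = R*C, ~6.0
```

The non-uniqueness of (R, C) for a fixed tau is why real building-element fits use richer excitations (the paper analyzes both temperature and humidity step responses) and constraints on the parameters.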

  13. From 1 Sun to 10 Suns c-Si Cells by Optimizing Metal Grid, Metal Resistance, and Junction Depth

    International Nuclear Information System (INIS)

    Chaudhari, V.A.; Solanki, C.S.

    2009-01-01

Use of a solar cell in concentrator PV technology requires reduction of its series resistance in order to minimize the resistive power losses. The present paper discusses a methodology for reducing the series resistance of a commercial c-Si solar cell for concentrator applications in the range of 2 to 10 suns. Step-by-step optimization of the commercial cell in terms of grid geometry, junction depth, and electroplating of the front metal contacts is proposed. A model of the resistance network of the solar cell is developed and used for the optimization. The efficiency of the unoptimized commercial cell at 10 suns drops by 30% of its 1-sun value, corresponding to a resistive power loss of about 42%. Grid optimization, junction optimization, electroplating, and junction optimization combined with electroplated contacts give resistive power losses of 20%, 16%, 11%, and 8%, respectively. An efficiency gain of 3% at 10 suns is estimated for the fully optimized cell.
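Why series resistance dominates at concentration follows from simple scaling: photocurrent grows roughly in proportion to the number of suns, so the resistive dissipation I²R grows quadratically while useful output grows only linearly. The sketch below shows this scaling with illustrative numbers; the currents, voltage, and resistance are assumptions, not the paper's measured cell.

```python
# Sketch: fractional resistive loss versus concentration. Current scales
# roughly with the number of suns, so I^2 * R grows quadratically while
# idealized output power I * V_mp grows linearly. All parameter values
# below are illustrative assumptions.

def resistive_loss_fraction(I_1sun, V_mp, R_series, suns):
    I = I_1sun * suns               # photocurrent ~ proportional to suns
    p_out = I * V_mp                # idealized extracted power
    p_loss = I ** 2 * R_series      # series-resistance dissipation
    return p_loss / (p_out + p_loss)

for suns in (1, 2, 10):
    f = resistive_loss_fraction(I_1sun=3.0, V_mp=0.5, R_series=0.02, suns=suns)
    print(suns, round(f, 3))
```

With these toy numbers the loss fraction climbs from about 11% at 1 sun to over 50% at 10 suns, which is the qualitative behavior motivating the paper's resistance reductions.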

  14. Step-by-Step Visual Manuals: Design and Development

    Science.gov (United States)

    Urata, Toshiyuki

    2004-01-01

    The types of handouts and manuals that are used in technology training vary. Some describe procedures in a narrative way without graphics; some employ step-by-step instructions with screen captures. According to Thirlway (1994), a training manual should be like a tutor that permits a student to learn at his own pace and gives him confidence for…

  15. Optimizing the multicycle subrotational internal cooling of diatomic molecules

    Science.gov (United States)

    Aroch, A.; Kallush, S.; Kosloff, R.

    2018-05-01

Subrotational cooling of the AlH+ ion to the millikelvin regime, using optimally shaped pulses, is computed. The coherent electromagnetic fields induce purity-conserving transformations and do not change the sample temperature. A decrease in sample temperature, manifested by an increase of purity, is achieved by the complementary uncontrolled spontaneous emission, which changes the entropy of the system. We employ optimal control theory to find a pulse that steers the system into a population configuration that will result in cooling upon multicycle excitation-emission steps. The obtained optimal transformation is shown to be capable of cooling molecular ions to the subkelvin regime.

  16. Improving the Dynamic Characteristics of Body-in-White Structure Using Structural Optimization

    Directory of Open Access Journals (Sweden)

    Aizzat S. Yahaya Rashid

    2014-01-01

Full Text Available The dynamic behavior of a body-in-white (BIW) structure has a significant influence on the noise, vibration, and harshness (NVH) and crashworthiness of a car. Therefore, by improving the dynamic characteristics of the BIW, problems and failures associated with resonance and fatigue can be prevented. The design objective is to improve the existing torsion and bending modes by using structural optimization subjected to dynamic loads, without compromising other factors such as the mass and stiffness of the structure. The natural frequencies of the design were modified by identifying and reinforcing the structure at critical locations. These crucial points are first identified by topology optimization, using mass and natural frequencies as the design variables. The individual components obtained from the analysis then go through a size optimization step to find their target thicknesses. The thickness of the affected regions of the components is modified according to the analysis. The results of both optimization steps suggest several design modifications to achieve the target vibration specifications without compromising the stiffness of the structure. A method of combining both optimization approaches is proposed to improve the design modification process.

  17. Unconstrained steps of myosin VI appear longest among known molecular motors.

    Science.gov (United States)

    Ali, M Yusuf; Homma, Kazuaki; Iwane, Atsuko Hikikoshi; Adachi, Kengo; Itoh, Hiroyasu; Kinosita, Kazuhiko; Yanagida, Toshio; Ikebe, Mitsuo

    2004-06-01

Myosin VI is a two-headed molecular motor that moves along an actin filament in the direction opposite to most other myosins. Previously, a single myosin VI molecule has been shown to proceed with steps that are large compared to its neck size: either it walks by somehow extending its neck or one head slides along actin for a long distance before the other head lands. To inquire into these and other possible mechanisms of motility, we suspended an actin filament between two plastic beads, and let a single myosin VI molecule carrying a bead duplex move along the actin. This configuration, unlike previous studies, allows unconstrained rotation of myosin VI around the right-handed double helix of actin. Myosin VI moved almost straight or as a right-handed spiral with a pitch of several micrometers, indicating that the molecule walks with strides slightly longer than the actin helical repeat of 36 nm. The large steps without much rotation suggest kinesin-type walking with extended and flexible necks, but how to move forward with flexible necks, even under a backward load, is not clear. As an answer, we propose that a conformational change in the lifted head would facilitate landing on a forward, rather than backward, site. This mechanism may underlie the stepping of all two-headed molecular motors, including kinesin and myosin V.

  18. An optimal open/closed-loop control method with application to a pre-stressed thin duralumin plate

    Science.gov (United States)

    Nadimpalli, Sruthi Raju

The suppression of the excessive vibrations of a pre-stressed duralumin plate by a combination of open-loop and closed-loop controls, also known as open/closed-loop control, is studied in this thesis. The two primary steps involved in this process are: Step (I): assuming that the closed-loop control law is proportional, obtain the optimal open-loop control by direct minimization of a performance measure consisting of the energy at terminal time and a penalty on the open-loop control force, via calculus of variations; if the performance measure also involves a penalty on the closed-loop control effort, a Fourier-based method is utilized. Step (II): the energy at terminal time is minimized numerically to obtain optimal values of the feedback gains. The optimal closed-loop control gains obtained are used to describe the displacement and velocity of the open-loop, closed-loop, and open/closed-loop controlled duralumin plate.
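Step (II), numerically choosing a feedback gain to minimize the energy at terminal time, can be sketched on a single-mode stand-in for the plate: a unit mass-spring with velocity feedback. The dynamics, terminal time, and gain grid are all illustrative assumptions, not the thesis's plate model.

```python
# Sketch of Step (II): pick the velocity-feedback gain that minimizes
# the system energy at terminal time, here for a toy oscillator
# x'' + gain * x' + x = 0 integrated with forward Euler, standing in
# for the thesis's pre-stressed plate dynamics.

def terminal_energy(gain, T=5.0, dt=0.001):
    x, v = 1.0, 0.0                    # initial displacement, velocity
    for _ in range(int(T / dt)):
        a = -x - gain * v              # spring force + feedback control
        x, v = x + dt * v, v + dt * a
    return 0.5 * v * v + 0.5 * x * x   # kinetic + potential energy at T

gains = [i * 0.2 for i in range(26)]   # candidate gains 0.0 .. 5.0
best_gain = min(gains, key=terminal_energy)
print(best_gain, terminal_energy(best_gain))
```

The grid search finds a gain near critical damping: too little feedback leaves residual oscillation energy at T, while too much overdamps the system and slows the decay.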

  19. Slot Optimization Design of Induction Motor for Electric Vehicle

    Science.gov (United States)

    Shen, Yiming; Zhu, Changqing; Wang, Xiuhe

    2018-01-01

Slot design has a great influence on the performance of an induction motor. The RMxprt module, based on the magnetic circuit method, can be used to analyze the influence of the rotor slot type on motor characteristics and to optimize slot parameters. In this paper, the authors take an induction motor for an electric vehicle as a typical example. The first step of the design is to optimize the rotor slot with RMxprt, and then to compare the main performance of the motor before and after the optimization through Ansoft Maxwell 2D. After that, the combination of the optimum slot type and the optimum parameters is obtained. The results show that the power factor and the starting torque of the optimized motor are improved significantly. Furthermore, the electric vehicle works at a better operating condition after the optimization.

  20. Introduction to optimization analysis in hydrosystem engineering

    CERN Document Server

    Goodarzi, Ehsan; Hosseinipour, Edward Zia

    2014-01-01

    This book presents the basics of linear and nonlinear optimization analysis for both single and multi-objective problems in hydrosystem engineering.  The book includes several examples with various levels of complexity in different fields of water resources engineering. All examples are solved step by step to assist the reader and to make it easier to understand the concepts. In addition, the latest tools and methods are presented to help students, researchers, engineers and water managers to properly conceptualize and formulate resource allocation problems, and to deal with the complexity of constraints in water demand and available supplies in an appropriate way.

  1. Performance of an attention-demanding task during treadmill walking shifts the noise qualities of step-to-step variation in step width.

    Science.gov (United States)

    Grabiner, Mark D; Marone, Jane R; Wyatt, Marilynn; Sessoms, Pinata; Kaufman, Kenton R

    2018-06-01

The fractal scaling evident in the step-to-step fluctuations of stepping-related time series reflects, to some degree, neuromotor noise. The primary purpose of this study was to determine the extent to which the fractal scaling of step width, step width itself, and step width variability are affected by performance of an attention-demanding task. We hypothesized that the attention-demanding task would shift the structure of the step width time series toward white, uncorrelated noise. Subjects performed two 10-min treadmill walking trials: a control trial of undisturbed walking and a trial during which they performed a mental arithmetic/texting task. Motion capture data were converted to step width time series, the fractal scaling of which was determined from their power spectra. Fractal scaling decreased by 22% during the texting condition, while step width and step width variability increased by 19% and 5%, respectively. The change in the fractal scaling of step width is consistent with increased cognitive demand and suggests a transition in the characteristics of the signal noise. This may reflect an important advance toward understanding the manner in which neuromotor noise contributes to some types of falls. However, further investigation of the repeatability of the results, their sensitivity to progressive increases in the cognitive load imposed by attention-demanding tasks, and the extent to which the results can be generalized to the gait of older adults seems warranted. Copyright © 2018 Elsevier B.V. All rights reserved.
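Determining fractal scaling from a power spectrum, as the study does, amounts to fitting the slope of log power versus log frequency: a slope near zero indicates white, uncorrelated noise, while more negative slopes indicate persistent, correlated fluctuations. The sketch below applies this to a synthetic white-noise "step width" series; the plain O(n²) DFT and the series itself are illustration-only stand-ins for real gait data.

```python
# Sketch: estimate the spectral (fractal-scaling) exponent of a series
# as the least-squares slope of log power vs. log frequency. White noise
# gives a slope near zero; correlated noise gives a negative slope.
# Plain DFT for clarity, synthetic data instead of gait recordings.

import cmath, math, random

def spectral_slope(series):
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]
    pts = []
    for k in range(1, n // 2):                  # one-sided periodogram
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        pts.append((math.log(k / n), math.log(abs(X) ** 2 + 1e-300)))
    mf = sum(f for f, _ in pts) / len(pts)      # least-squares slope
    mp = sum(p for _, p in pts) / len(pts)
    num = sum((f - mf) * (p - mp) for f, p in pts)
    den = sum((f - mf) ** 2 for f, _ in pts)
    return num / den

random.seed(1)
white = [random.gauss(0, 1) for _ in range(256)]
print(round(spectral_slope(white), 2))  # near 0 for uncorrelated noise
```

The study's hypothesis corresponds to this estimate moving toward zero under the attention-demanding task, i.e., the step-width fluctuations becoming more like uncorrelated noise.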

  2. Optimal Design of Gradient Materials and Bi-Level Optimization of Topology Using Targets (BOTT)

    Science.gov (United States)

    Garland, Anthony

of gradient material designs. A macroscopic gradient can be achieved by varying the microstructure or the mesostructure of an object. The mesostructure interpretation allows for more design freedom, since mesostructures can be tuned to have non-isotropic material properties. A new algorithm called Bi-level Optimization of Topology using Targets (BOTT) seeks to find the best distribution of mesostructure designs throughout a single object in order to minimize an objective value. On the macro level, the BOTT algorithm optimizes the macro topology and the gradient material properties within the object. The algorithm optimizes the material gradient by finding the best constitutive matrix at each location within the object. To enhance the likelihood that a mesostructure can be generated with the same equivalent constitutive matrix, the variability of the constitutive matrix is constrained to that of an orthotropic material: the stiffness in the X and Y directions (of the base coordinate system) can change, and the orthotropic material can be rotated to align with the loading at each region. Second, the BOTT algorithm designs mesostructures with macroscopic properties equal to the target properties found in step one, while at the same time seeking to minimize material usage in each mesostructure. The mesostructure algorithm maximizes the strain energy of the mesostructure unit cell when a pseudo strain is applied to the cell. A set of experiments reveals the fundamental relationship between target cell density, the strain (or pseudo strain) applied to a unit cell, and the output effective properties of the mesostructure. At low density, only a few mesostructure unit cell designs are possible, while at higher density the mesostructure unit cell designs have many possibilities. Therefore, at low densities the effective properties of the mesostructure are a step function of the applied pseudo strain. At high densities, the effective properties of the

  3. Two-Step Oxidation of Refractory Gold Concentrates with Different Microbial Communities.

    Science.gov (United States)

    Wang, Guo-Hua; Xie, Jian-Ping; Li, Shou-Peng; Guo, Yu-Jie; Pan, Ying; Wu, Haiyan; Liu, Xin-Xing

    2016-11-28

Bio-oxidation is an effective technology for the treatment of refractory gold concentrates. However, the unsatisfactory oxidation rate and long residence time, which cause a lower cyanide leaching rate and gold recovery, are key factors that restrict the application of traditional bio-oxidation technology. In this study, the oxidation rate of refractory gold concentrates and the adaptation of the microorganisms were analyzed to evaluate a newly developed two-step pretreatment process, which includes a high-temperature chemical oxidation step and a subsequent bio-oxidation step. The oxidation rate and the recovery rate of gold were improved significantly after the two-step process. The results showed that the highest oxidation rate of sulfide sulfur reached 99.01% with an extremely thermophilic microbial community at a pulp density of 5%. Accordingly, the recovery rate of gold was elevated to 92.51%. Meanwhile, the results revealed that moderate thermophiles performed better than acidophilic mesophiles and extreme thermophiles, whose oxidation rates declined drastically when the pulp density was increased to 10% and 15%. The oxidation rates of sulfide sulfur with moderate thermophiles were 93.94% and 65.73% at pulp densities of 10% and 15%, respectively. All these results indicate that the two-step pretreatment increased the oxidation rate of refractory gold concentrates and is a potential technology for pretreating refractory samples. Meanwhile, owing to the sensitivity of the microbial community to different pulp density levels, optimization of the microbial community in bio-oxidation is necessary in industrial practice.

  4. Flow-based market coupling. Stepping stone towards nodal pricing?

    International Nuclear Information System (INIS)

    Van der Welle, A.J.

    2012-07-01

    To achieve a single internal energy market for electricity by 2014, market coupling is deployed to integrate national markets into regional markets and ultimately one European electricity market. The extent to which markets can be coupled depends on the available transmission capacities between countries. Since interconnections are congested from time to time, congestion management methods are deployed to divide the scarce available transmission capacities over market participants. For further optimization of the use of available transmission capacities while maintaining current security of supply levels, flow-based market coupling (FBMC) will be implemented in the CWE region by 2013. Although this is an important step forward, important hurdles for efficient congestion management remain. Hence, flow-based market coupling is compared to nodal pricing, which is often considered the optimal solution from a theoretical perspective. In the context of decarbonised power systems it is concluded that the advantages of nodal pricing are likely to exceed its disadvantages, warranting further development of FBMC in the direction of nodal pricing.

  5. System Engineering Infrastructure Evolution Galileo IOV and the Steps Beyond

    Science.gov (United States)

    Eickhoff, J.; Herpel, H.-J.; Steinle, T.; Birn, R.; Steiner, W.-D.; Eisenmann, H.; Ludwig, T.

    2009-05-01

    The trend towards ever more constrained financial budgets in satellite engineering requires permanent optimization of S/C system engineering processes and infrastructure. In recent years, Astrium has built up a system simulation infrastructure - the "Model-based Development & Verification Environment" (MDVE) - which is meanwhile well known all over Europe and is established as Astrium's standard approach for ESA and DLR projects, and now even for the EU/ESA project Galileo IOV. The key feature of the MDVE/FVE approach is to provide an entire S/C simulation (with full-featured OBC simulation) already in early phases, to start OBSW code tests on a simulated S/C, and to later add hardware in the loop step by step, up to an entire "Engineering Functional Model (EFM)" or "FlatSat". The subsequent enhancements to this simulator infrastructure with respect to spacecraft design data handling are reported in the following sections.

  6. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    Energy Technology Data Exchange (ETDEWEB)

    Orimoto, Yuuichi, E-mail: orimoto.yuuichi.888@m.kyushu-u.ac.jp [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Aoki, Yuriko [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012 (Japan)

    2016-07-14

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.

  7. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    International Nuclear Information System (INIS)

    Orimoto, Yuuichi; Aoki, Yuriko

    2016-01-01

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.
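    Both records above rely on the finite-field (FF) treatment, in which the (hyper-)polarizabilities are obtained as numerical field-derivatives of the total energy; the numerical limitation mentioned for γ is the precision loss inherent in these high-order differences. A hedged sketch of the standard central-difference FF formulas (illustrative only, not the ELG-FF implementation):

```python
# Energies are computed at applied fields 0, ±F, ±2F; properties follow from
# the expansion E(F) = E0 - mu*F - (1/2)*alpha*F^2 - (1/6)*beta*F^3 - (1/24)*gamma*F^4.

def ff_properties(E, F=0.01):
    """Return (alpha, beta, gamma) by central differences of the energy E(F)."""
    e0, ep, em, e2p, e2m = E(0.0), E(F), E(-F), E(2 * F), E(-2 * F)
    alpha = -(ep - 2 * e0 + em) / F**2
    beta = -(e2p - 2 * ep + 2 * em - e2m) / (2 * F**3)
    gamma = -(e2p - 4 * ep + 6 * e0 - 4 * em + e2m) / F**4
    return alpha, beta, gamma
```

    Testing against a model energy that is an exact quartic in the field recovers the chosen properties; the small F**4 denominator in the γ formula is exactly what limits numerical precision in practice.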

  8. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on an error compensation method that exploits error similarity, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a KUKA KR-210 robot. The experimental results show that the proposed sampling-point planning method effectively optimizes the sampling grid. After error compensation, the position accuracy of the robot meets the requirements.
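    The method starts from calibration sampling points laid out on a uniform grid whose step is then optimized. A minimal illustration of generating such a grid (a hypothetical helper of ours, not the authors' code):

```python
from itertools import product

def sampling_grid(bounds, step):
    """Generate uniform-grid sampling points inside an axis-aligned workspace.
    bounds = [(lo, hi), ...] per axis; step = grid spacing (same for all axes)."""
    axes = []
    for lo, hi in bounds:
        n = int((hi - lo) // step) + 1        # points that fit along this axis
        axes.append([lo + i * step for i in range(n)])
    return list(product(*axes))               # Cartesian product of axis points
```

    A 2-D workspace of 1 m x 1 m with a 0.5 m step yields a 3 x 3 grid of nine candidate sampling points; the statistical step-size optimization would then trade grid density against measurement effort.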

  9. A step-by-step translation of evidence into a psychosocial intervention for everyday activities in dementia: a focus group study.

    Science.gov (United States)

    Giebel, Clarissa M; Challis, David; Hooper, Nigel M; Ferris, Sally

    2018-03-01

    In order to increase the efficacy of psychosocial interventions in dementia, a step-by-step process of translating evidence, combined with public engagement, should be adhered to. This paper describes such a process, involving two-stage focus groups with people with dementia (PwD), informal carers, and staff. Based on previous evidence, general aspects of effective interventions were drawn out. These were tested in the first stage of focus groups: one with informal carers and PwD, and one with staff. Findings from this stage helped shape the intervention further, specifying its content. In the second stage, participants were consulted about the detailed components. The extant evidence base and the focus groups helped to identify six practical and situation-specific elements worthy of consideration in planning such an intervention, including underlying theory and personal motivations for participation. Carers, PwD, and staff highlighted the importance of rapport between practitioners and PwD prior to commencing the intervention. It was also considered important that the intervention be personalised to each individual. This paper shows how valuable public involvement can be to intervention development, and outlines a process of public involvement for future intervention development. The next step would be to formally test the intervention.

  10. Comparison of microbial community shifts in two parallel multi-step drinking water treatment processes.

    Science.gov (United States)

    Xu, Jiajiong; Tang, Wei; Ma, Jun; Wang, Hong

    2017-07-01

    Drinking water treatment processes remove undesirable chemicals and microorganisms from source water, which is vital to public health protection. The purpose of this study was to investigate the effects of treatment processes and configuration on the microbiome by comparing microbial community shifts in two series of different treatment processes operated in parallel within a full-scale drinking water treatment plant (DWTP) in Southeast China. Illumina sequencing of 16S rRNA genes of water samples demonstrated little effect of the coagulation/sedimentation and pre-oxidation steps on bacterial communities, in contrast to dramatic and concurrent microbial community shifts during ozonation, granular activated carbon treatment, sand filtration, and disinfection for both series. A large number of unique operational taxonomic units (OTUs) at these four treatment steps further illustrated their strong shaping power over the drinking water microbial communities. Interestingly, multidimensional scaling analysis revealed tight clustering of biofilm samples collected from different treatment steps, with Nitrospira, the nitrite-oxidizing bacteria, noted at higher relative abundances in biofilm compared to water samples. Overall, this study provides a snapshot of step-to-step microbial community evolution in multi-step drinking water treatment systems, and the results provide insight into controlling and manipulating the drinking water microbiome via optimization of DWTP design and operation.

  11. Effect of One-Step and Multi-Steps Polishing System on Enamel Roughness

    Directory of Open Access Journals (Sweden)

    Cynthia Sumali

    2013-07-01

    The final procedures of orthodontic treatment are bracket debonding and cleaning of the remaining adhesive. The multi-step polishing system is the most common method used; its disadvantage is a long working time, because of the number of stages involved. Therefore, dental material manufacturers have refined the system, reducing several stages to a single one, known as the one-step polishing system. Objective: To compare the effect of one-step and multi-step polishing systems on enamel roughness after orthodontic bracket debonding. Methods: A randomized controlled trial was conducted on twenty-eight maxillary premolars randomized into two polishing systems: one-step OptraPol (Ivoclar Vivadent) and multi-step AstroPol (Ivoclar Vivadent). After bracket debonding, the remaining adhesive in each group was cleaned with the respective polishing system for ninety seconds using a low-speed handpiece. Enamel roughness was measured with a profilometer, registering two roughness parameters (Ra, Rz). An independent t-test was used to analyze the mean enamel roughness in each group. Results: There was no significant difference in enamel roughness between the one-step and multi-step polishing systems (p>0.005). Conclusion: The one-step polishing system can produce enamel roughness similar to the multi-step polishing system after bracket debonding and adhesive cleaning. DOI: 10.14693/jdi.v19i3.136

  12. Subthreshold SPICE Model Optimization

    Science.gov (United States)

    Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy

    2011-04-01

    The first step in integrated circuit design is the simulation of said design in software to verify proper functionality and design requirements. Properties of the process are provided by fabrication foundries in the form of SPICE models, which contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately model the saturation region. This is fine for most users, but for devices operating in the subthreshold region the models are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths were fabricated and electrical test data were collected. From the collected data, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models, the circuit designer is able to simulate circuit designs for the subthreshold region accurately.
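    As an illustration of parameter extraction (as opposed to curve fitting), the classic subthreshold drain-current model I_D = I0 * exp(V_GS / (n * V_T)) becomes linear in log-space, so the leakage prefactor I0 and the ideality factor n can be extracted with an ordinary least-squares fit. This sketch is ours, not the authors' extraction flow:

```python
import math

VT = 0.0259  # thermal voltage kT/q at ~300 K, in volts

def extract_subthreshold(vgs, ids):
    """Fit ln(I_D) = ln(I0) + V_GS/(n*VT) by least squares; return (I0, n)."""
    x = vgs
    y = [math.log(i) for i in ids]
    m = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)   # = 1/(n*VT)
    intercept = (sy - slope * sx) / m                   # = ln(I0)
    return math.exp(intercept), 1.0 / (slope * VT)
```

    On synthetic I-V data generated from the model itself, the fit recovers I0 and n essentially exactly; on measured data, the quality of the fit indicates how well the device follows the exponential subthreshold law.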

  13. Superstructure optimization of biodiesel production from microalgal biomass

    DEFF Research Database (Denmark)

    Rizwan, Muhammad; Lee, Jay H.; Gani, Rafiqul

    2013-01-01

    In this study, we propose a mixed integer nonlinear programming (MINLP) model for superstructure-based optimization of biodiesel production from microalgal biomass. The proposed superstructure includes a number of major processing steps for the production of biodiesel from microalgal biomass … for the production of biodiesel from microalgae. The proposed methodology is tested by implementing it on a specific case study. The MINLP model is implemented and solved in GAMS using a database built in Excel. The results from the optimization are analyzed and their significance is discussed.

  14. Optimization and evaluation of probabilistic-logic sequence models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    Analysis of biological sequence data demands more and more sophisticated and fine-grained models, but these in turn introduce hard computational problems. A class of probabilistic-logic models is considered, which increases the expressibility from HMMs' regular and SCFGs' context-free languages to, in principle, Turing-complete languages. In general, such models are computationally far too complex for direct use, so optimization by pruning and approximation is needed. The first steps are made towards a methodology for optimizing such models by approximations using auxiliary models …

  15. PV-PCM integration in glazed building. Co-simulation and genetic optimization study

    DEFF Research Database (Denmark)

    Elarga, Hagar; Dal Monte, Andrea; Andersen, Rune Korsholm

    2017-01-01

    The study describes a multi-objective optimization algorithm for an innovative integration of forced-ventilated PV-PCM modules in glazed façade buildings: the aim is to identify and optimize the parameters that most affect thermal and energy performance. A 1-D model (finite difference method, FDM) … An exploratory step has also been considered prior to the optimization algorithm: it evaluates the energy profiles before and after the application of PCM to the PV module integrated in a glazed building. The optimization analysis investigates parameters such as ventilation flow rates and time schedules to obtain …

  16. Microsoft® Visual Basic® 2010 Step by Step

    CERN Document Server

    Halvorson, Michael

    2010-01-01

    Your hands-on, step-by-step guide to learning Visual Basic® 2010. Teach yourself the essential tools and techniques for Visual Basic® 2010-one step at a time. No matter what your skill level, you'll find the practical guidance and examples you need to start building professional applications for Windows® and the Web. Discover how to: Work in the Microsoft® Visual Studio® 2010 Integrated Development Environment (IDE)Master essential techniques-from managing data and variables to using inheritance and dialog boxesCreate professional-looking UIs; add visual effects and print supportBuild com

  17. Topology optimization for transient heat transfer problems

    DEFF Research Database (Denmark)

    Zeidan, Said; Sigmund, Ole; Lazarov, Boyan Stefanov

    The focus of this work is on passive control of transient heat transfer problems using the topology optimization (TopOpt) method [1]. The goal is to find distributions of a limited amount of phase change material (PCM), within a given design domain, which optimize the heat energy storage [2]. Our … TopOpt has later been extended to transient problems in mechanics and photonics (e.g. [5], [6] and [7]). In the presented approach, the optimization is gradient-based: in each iteration the non-steady heat conduction equation is solved using the finite element method and an appropriate time-stepping scheme. A PCM can efficiently absorb heat while keeping its temperature nearly unchanged [8]. The use of PCM in e.g. electronics [9] and mechanics [10] yields improved performance and lower costs depending on, among other things, the spatial distribution of PCM. The considered problem consists in optimizing …
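    The work solves the non-steady heat conduction equation with the finite element method; as a much simpler stand-in, a single explicit (FTCS) finite-difference time step of the 1-D heat equation illustrates what one iteration of a time-stepping scheme does (an illustrative sketch of ours, not the TopOpt solver):

```python
def heat_step(u, r):
    """One explicit (FTCS) time step of the 1-D heat equation u_t = alpha*u_xx,
    with r = alpha*dt/dx**2 (stable for r <= 0.5) and fixed end values."""
    return ([u[0]]
            + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
               for i in range(1, len(u) - 1)]
            + [u[-1]])
```

    With r <= 0.5 the explicit scheme is stable; implicit schemes lift that restriction at the cost of a linear solve per step, which is one reason an "appropriate" time-stepping scheme must be chosen per problem.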

  18. Optimal Placing of Wind Turbines: Modelling the Uncertainty

    NARCIS (Netherlands)

    Leenman, T.S.; Phillipson, F.

    2015-01-01

    When looking at the optimal place to locate a wind turbine, trade-offs have to be made between local placement and spreading: transmission loss favours local placements and the correlation between the stochastic productions of wind turbines favours spreading. In this paper steps are described to

  19. Optimal Placing of Wind Turbines: Modelling the Uncertainty

    NARCIS (Netherlands)

    Leenman, T.S.; Phillipson, F.

    2014-01-01

    When looking at the optimal place to locate a wind turbine, trade-offs have to be made between local placement and spreading: transmission loss favours local placements and the correlation between the stochastic productions of wind turbines favours spreading. In this paper steps are described to

  20. Age-related changes in gait adaptability in response to unpredictable obstacles and stepping targets.

    Science.gov (United States)

    Caetano, Maria Joana D; Lord, Stephen R; Schoene, Daniel; Pelicioni, Paulo H S; Sturnieks, Daina L; Menant, Jasmine C

    2016-05-01

    A large proportion of falls in older people occur when walking. Limitations in gait adaptability might contribute to tripping, a frequently reported cause of falls in this group. The aim was to evaluate age-related changes in gait adaptability in response to obstacles or stepping targets presented at short notice, i.e. approximately two steps ahead. Fifty older adults (aged 74±7 years; 34 females) and 21 young adults (aged 26±4 years; 12 females) completed 3 usual-gait-speed (baseline) trials. They then completed the following randomly presented gait adaptability trials: obstacle avoidance, short stepping target, long stepping target and no target/obstacle (3 trials of each). Compared with the young adults, the older adults slowed significantly in the no target/obstacle trials relative to the baseline trials. They took more steps and spent more time in double support while approaching the obstacle and stepping targets, demonstrated poorer stepping accuracy and made more stepping errors (failed to hit the stepping targets/avoid the obstacle). The older adults also reduced the velocity of the two preceding steps and shortened the previous step in the long stepping target condition and in the obstacle avoidance condition. Compared with their younger counterparts, the older adults exhibited a more conservative adaptation strategy characterised by slow, short and multiple steps with longer time in double support. Even so, they demonstrated poorer stepping accuracy and made more stepping errors. This reduced gait adaptability may place older adults at increased risk of falling when negotiating unexpected hazards. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Feasibility of Focused Stepping Practice During Inpatient Rehabilitation Poststroke and Potential Contributions to Mobility Outcomes.

    Science.gov (United States)

    Hornby, T George; Holleran, Carey L; Leddy, Abigail L; Hennessy, Patrick; Leech, Kristan A; Connolly, Mark; Moore, Jennifer L; Straube, Donald; Lovell, Linda; Roth, Elliot

    2015-01-01

    Optimal physical therapy strategies to maximize locomotor function in patients early poststroke are not well established. Emerging data indicate that substantial amounts of task-specific stepping practice may improve locomotor function, although the stepping practice typically provided during inpatient rehabilitation is limited. This study examined patients early poststroke during inpatient rehabilitation following implementation of a focused training program to maximize stepping practice during clinical physical therapy sessions. Primary outcomes included distance and physical assistance required during a 6-minute walk test (6MWT) and balance using the Berg Balance Scale (BBS). Retrospective data analysis included multiple regression techniques to evaluate the contributions of demographics, training activities, and baseline motor function to primary outcomes at discharge. Median stepping activity recorded from patients was 1516 steps/day, which is 5 to 6 times greater than that typically observed. The number of steps per day was positively correlated with both discharge 6MWT and BBS and improvements from baseline (changes; r = 0.40-0.87), independently contributing 10% to 31% of the total variance. Stepping activity also predicted level of assistance at discharge and discharge location (home vs other facility). Providing focused, repeated stepping training was feasible early poststroke during inpatient rehabilitation and was related to mobility outcomes. Further research is required to evaluate the effectiveness of these training strategies on short- or long-term mobility outcomes as compared with conventional interventions. © The Author(s) 2015.

  2. How to use an optimization-based method capable of balancing safety, reliability, and weight in an aircraft design process

    International Nuclear Information System (INIS)

    Johansson, Cristina; Derelov, Micael; Olvander, Johan

    2017-01-01

    In order to help decision-makers in the early design phase to improve and make more cost-efficient system safety and reliability baselines of aircraft design concepts, a method (Multi-objective Optimization for Safety and Reliability Trade-off) that is able to handle trade-offs such as system safety, system reliability, and other characteristics, for instance weight and cost, is used. Multi-objective Optimization for Safety and Reliability Trade-off has been developed and implemented at SAAB Aeronautics. The aim of this paper is to demonstrate how the implemented method might work to aid the selection of optimal design alternatives. The method is a three-step method: step 1 involves the modelling of each considered target, step 2 is optimization, and step 3 is the visualization and selection of results (results processing). The analysis is performed within Architecture Design and Preliminary Design steps, according to the company's Product Development Process. The lessons learned regarding the use of the implemented trade-off method in the three cases are presented. The results are a handful of solutions, a basis to aid in the selection of a design alternative. While the implementation of the trade-off method is performed for companies, there is nothing to prevent adapting this method, with minimal modifications, for use in other industrial applications

  3. How to use an optimization-based method capable of balancing safety, reliability, and weight in an aircraft design process

    Energy Technology Data Exchange (ETDEWEB)

    Johansson, Cristina [Mendeley, Broderna Ugglasgatan, Linkoping (Sweden); Derelov, Micael; Olvander, Johan [Linkoping University, IEI, Dept. of Machine Design, Linkoping (Sweden)

    2017-03-15

    In order to help decision-makers in the early design phase to improve and make more cost-efficient system safety and reliability baselines of aircraft design concepts, a method (Multi-objective Optimization for Safety and Reliability Trade-off) that is able to handle trade-offs such as system safety, system reliability, and other characteristics, for instance weight and cost, is used. Multi-objective Optimization for Safety and Reliability Trade-off has been developed and implemented at SAAB Aeronautics. The aim of this paper is to demonstrate how the implemented method might work to aid the selection of optimal design alternatives. The method is a three-step method: step 1 involves the modelling of each considered target, step 2 is optimization, and step 3 is the visualization and selection of results (results processing). The analysis is performed within Architecture Design and Preliminary Design steps, according to the company's Product Development Process. The lessons learned regarding the use of the implemented trade-off method in the three cases are presented. The results are a handful of solutions, a basis to aid in the selection of a design alternative. While the implementation of the trade-off method is performed for companies, there is nothing to prevent adapting this method, with minimal modifications, for use in other industrial applications.
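    Step 3 of the method above (visualization and selection of results) presents the decision-maker with a handful of trade-off solutions; a standard building block for producing such a set is filtering out dominated designs. A minimal sketch (ours, not the SAAB implementation), assuming every objective is to be minimized:

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors (all objectives
    minimized): q dominates p if q is <= p in every objective and < in one."""
    def dominates(q, p):
        return (all(a <= b for a, b in zip(q, p))
                and any(a < b for a, b in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    For example, among candidate designs scored on (weight, unreliability), only the non-dominated ones need be shown to the decision-maker; dominated designs are strictly worse trade-offs.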

  4. Can occupational exposure be optimized for medical workers?

    International Nuclear Information System (INIS)

    Aubert, B.; Lefaure, C.

    1998-01-01

    Implementation of the principle of optimization (ALARA), an essential element of radiation protection regulations, remains very limited in the medical field, even though 80% of workers whose exposure exceeds 50 mSv are to be found in this domain. The doses measured by legal dosimetry sometimes underestimate the real exposure of workers. It is therefore necessary to optimize the protection of occupationally exposed workers in the medical field. This paper reviews the steps of the optimization procedure, with emphasis on the specificity of its application in this domain. Operational dosimetry, as well as information on the residual risk due to low exposures and a better estimation of the risk/benefit factor for the patient, are needed for satisfactory implementation. (author)

  5. PBCO/YBCO bilayer growth and optimization for the fabrication of buffered step-edge Josephson junctions

    CSIR Research Space (South Africa)

    Van Staden, WF

    2009-04-01

    Bilayers of PBCO and YBCO are grown epitaxially on MgO substrates using PLD. In this paper, researchers discuss the entire optimization process in detail, giving quantitative parameter values. Film characterization included XRD, AFM and susceptance...

  6. Optimization of a stellarator design including modulation of the helical winding geometry

    International Nuclear Information System (INIS)

    Sharp, L.E.; Petersen, L.F.; Blamey, J.W.

    1979-06-01

    The optimization of the helical winding geometry of the next generation of high performance stellarators is of critical importance as the current in the helical conductors must be kept to a minimum to reduce the very large electromechanical forces on the conductors. Using a modified version of the Culham computer code MAGBAT, steps towards optimization are described

  7. Sequential Optimization of Global Sequence Alignments Relative to Different Cost Functions

    KAUST Repository

    Odat, Enas M.

    2011-01-01

    The algorithm has been simulated using the C#.NET programming language, and a number of experiments have been done to verify the proved statements. The results of these experiments show that the number of optimal alignments is reduced after each step of optimization. Furthermore, it has been verified that as the sequence length increases linearly, the number of optimal alignments increases exponentially, at a rate that also depends on the cost function used. Finally, the number of executed operations increases polynomially as the sequence length increases linearly.
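    The number of optimal alignments discussed above can be counted alongside the usual global-alignment dynamic program by carrying a count matrix with the scores. The work itself used C#.NET; the sketch below uses Python with an arbitrary linear gap cost, purely for illustration:

```python
def count_optimal_alignments(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment DP (Needleman-Wunsch style): return the best score
    and the number of distinct optimal alignments under a linear gap cost."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]   # best score to cell (i, j)
    C = [[1] * (m + 1) for _ in range(n + 1)]   # count of optimal paths
    for i in range(1, n + 1):
        S[i][0] = i * gap                       # leading gaps: one path each
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up, left = S[i - 1][j] + gap, S[i][j - 1] + gap
            best = max(diag, up, left)
            S[i][j] = best
            # sum counts over every predecessor that achieves the best score
            C[i][j] = ((diag == best) * C[i - 1][j - 1]
                       + (up == best) * C[i - 1][j]
                       + (left == best) * C[i][j - 1])
    return S[n][m], C[n][m]
```

    Ties in the recurrence are exactly where co-optimal alignments multiply, which is why the count can grow exponentially with sequence length while the score table stays polynomial in size.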

  8. Abused nurses take no legal steps: a domestic violence study carried out in eastern Turkey.

    Science.gov (United States)

    Selek, Salih; Vural, Mehmet; Cakmak, Ilknur

    2012-12-01

    Our aim was to evaluate domestic violence among nurses in eastern Turkey. Ninety-six (96) female nurses with an intimate partner were enrolled. A modified form of the Abuse Assessment Screen questionnaire was used. Twenty-two (22.7%) of the participants reported domestic violence; none of them took legal steps. The most frequent type of domestic violence was economic abuse (46%). Nurses whose mothers were exposed to domestic violence had significantly higher abuse rates. The abused group also had significantly higher smoking and miscarriage rates. Nurses need to be well informed about taking legal steps in case of domestic violence. Family history, smoking status and abortion rates may be a focus of further research on risk factors for domestic violence. Legal interventions should be optimized in order to encourage victims to take legal steps.

  9. Wind farm design optimization

    Energy Technology Data Exchange (ETDEWEB)

    Carreau, Michel; Morgenroth, Michael; Belashov, Oleg; Mdimagh, Asma; Hertz, Alain; Marcotte, Odile

    2010-09-15

    Innovative numerical computer tools have been developed to streamline the estimation and design process and to optimize wind farm design with respect to the overall return on investment. The optimization engine can find the collector system layout automatically, which provides a powerful tool to quickly study various alternatives, taking into account more precisely constraints or factors that previously would have been too costly to analyze in detail. Our wind farm tools have evolved through numerous projects and created value for our clients, yielding wind farm projects with higher projected returns.

  10. Evaluation of postural stability during quiet standing, step-up and step-up with lateral perturbation in subjects with and without low back pain.

    Directory of Open Access Journals (Sweden)

    M. Ram Prasad

    2011-02-01

    Full Text Available The evaluation of postural stability during quiet stance, step-up and the step-up task with perturbation using posturography could be useful in treatment and outcome monitoring in chronic low back pain (CLBP) rehabilitation. The aims of this study were twofold: to investigate (1) differences in postural stability measures between CLBP patients and healthy participants during the above-mentioned tasks, and (2) postural stability characteristics between the control-impairment and movement-impairment groups of CLBP patients on the above tasks. Fourteen CLBP and fifteen normal individuals participated, and posturography outcome variables were obtained during the above tasks. The low back pain subjects showed significantly different anterior-posterior (p=0.01) as well as medio-lateral (p=0.05) postural stability characteristics during the step-up task with external perturbation, whereas quiet standing and the simple step-up task did not show any differences. In addition, in the CLBP population, the maximum COP excursion (p=0.01), standard stability (p=0.02) and the stability scores (p=0.02) were also found to be significant in the step-up with perturbation task compared to healthy participants. As task difficulty increased, CLBP patients exhibited significantly different postural stability characteristics compared to healthy participants. Conversely, sub-group analysis in CLBP patients revealed significant differences only in medio-lateral COP excursions during normal standing (p=0.005). No significant differences were observed in tasks of higher difficulty, such as step-up and the step-up task with lateral perturbation, between the movement-impairment and control-impairment groups of CLBP. These findings have implications for assessing and optimizing postural control interventions in functional back pain rehabilitation.

  11. Comparison of direct machine parameter optimization versus fluence optimization with sequential sequencing in IMRT of hypopharyngeal carcinoma

    International Nuclear Information System (INIS)

    Dobler, Barbara; Pohl, Fabian; Bogner, Ludwig; Koelbl, Oliver

    2007-01-01

    To evaluate the effects of direct machine parameter optimization in the treatment planning of intensity-modulated radiation therapy (IMRT) for hypopharyngeal cancer as compared to subsequent leaf sequencing in Oncentra Masterplan v1.5. For 10 hypopharyngeal cancer patients IMRT plans were generated in Oncentra Masterplan v1.5 (Nucletron BV, Veenendal, the Netherlands) for a Siemens Primus linear accelerator. For optimization the dose volume objectives (DVO) for the planning target volume (PTV) were set to 53 Gy minimum dose and 59 Gy maximum dose, in order to reach a dose of 56 Gy to the average of the PTV. For the parotids a median dose of 22 Gy was allowed and for the spinal cord a maximum dose of 35 Gy. The maximum DVO to the external contour of the patient was set to 59 Gy. The treatment plans were optimized with the direct machine parameter optimization ('Direct Step & Shoot', DSS, Raysearch Laboratories, Sweden) newly implemented in Masterplan v1.5 and the fluence modulation technique ('Intensity Modulation', IM) which was available in previous versions of Masterplan already. The two techniques were compared with regard to compliance to the DVO, plan quality, and number of monitor units (MU) required per fraction dose. The plans optimized with the DSS technique met the DVO for the PTV significantly better than the plans optimized with IM (p = 0.007 for the min DVO and p < 0.0005 for the max DVO). No significant difference could be observed for compliance to the DVO for the organs at risk (OAR) (p > 0.05). Plan quality, target coverage and dose homogeneity inside the PTV were superior for the plans optimized with DSS for similar dose to the spinal cord and lower dose to the normal tissue. The mean dose to the parotids was lower for the plans optimized with IM. Treatment plan efficiency was higher for the DSS plans with (901 ± 160) MU compared to (1151 ± 157) MU for IM (p-value < 0.05). Renormalization of the IM plans to the mean of the

  12. Production of biovanillin by one-step biotransformation using fungus Pycnoporous cinnabarinus.

    Science.gov (United States)

    Tilay, Ashwini; Bule, Mahesh; Annapure, Uday

    2010-04-14

    The current study proposes a one-step biotransformation process for vanillin production from ferulic acid using the wild fungal strain Pycnoporous cinnabarinus, belonging to the family Basidiomycete. Improvement of the biotransformation conditions was performed in two steps: initially, a one-factor-at-a-time method was used to investigate the effects of medium composition variables (i.e., carbon and nitrogen sources) and environmental factors such as pH on vanillin production. Subsequently, the concentrations of the medium components were optimized using an orthogonal matrix method. After primary screening, glucose as carbon source, and corn steep liquor and ammonium chloride as organic and inorganic nitrogen sources, respectively, supported maximum biotransformation of ferulic acid to vanillin. Under statistically optimum conditions, vanillin production from ferulic acid by P. cinnabarinus was 126 mg/L, with a molar yield of 54%. The overall molar yield of vanillin production increased fourfold.

  13. Core-shell polymer nanorods by a two-step template wetting process

    International Nuclear Information System (INIS)

    Dougherty, S; Liang, J

    2009-01-01

    One-dimensional core-shell polymer nanowires offer many advantages and great potential for many different applications. In this paper we introduce a highly versatile two-step template wetting process to fabricate two-component core-shell polymer nanowires with controllable shell thickness. PLLA and PMMA were chosen as model polymers to demonstrate the feasibility of this process. Solution wetting with different concentrations of polymer solutions was used to fabricate the shell layer and melt wetting was used to fill the shell with the core polymer. The shell thickness was analyzed as a function of the polymer solution concentration and viscosity, and the core-shell morphology was observed with TEM. This paper demonstrates the feasibility of fabricating polymer core-shell nanostructures using our two-step template wetting process and opens the arena for optimization and future experiments with polymers that are desirable for specific applications.

  14. Daily step count predicts acute exacerbations in a US cohort with COPD.

    Directory of Open Access Journals (Sweden)

    Marilyn L Moy

    Full Text Available BACKGROUND: COPD is characterized by variability in exercise capacity and physical activity (PA), and acute exacerbations (AEs). Little is known about the relationship between daily step count, a direct measure of PA, and the risk of AEs, including hospitalizations. METHODS: In an observational cohort study of 169 persons with COPD, we directly assessed PA with the StepWatch Activity Monitor, an ankle-worn accelerometer that measures daily step count. We also assessed exercise capacity with the 6-minute walk test (6MWT) and patient-reported PA with the St. George's Respiratory Questionnaire Activity Score (SGRQ-AS). AEs and COPD-related hospitalizations were assessed and validated prospectively over a median of 16 months. RESULTS: Mean daily step count was 5804±3141 steps. Over 209 person-years of observation, there were 263 AEs (incidence rate 1.3±1.6 per person-year) and 116 COPD-related hospitalizations (incidence rate 0.56±1.09 per person-year). Adjusting for FEV1 % predicted and prednisone use for AE in the previous year, for each 1000 fewer steps per day walked at baseline, there was an increased rate of AEs (rate ratio 1.07; 95%CI = 1.003-1.15) and COPD-related hospitalizations (rate ratio 1.24; 95%CI = 1.08-1.42). There was a significant linear trend of decreasing daily step count by quartiles and increasing rate ratios for AEs (P = 0.008) and COPD-related hospitalizations (P = 0.003). Each 30-meter decrease in 6MWT distance was associated with an increased rate ratio of 1.07 (95%CI = 1.01-1.14) for AEs and 1.18 (95%CI = 1.07-1.30) for COPD-related hospitalizations. Worsening of SGRQ-AS by 4 points was associated with an increased rate ratio of 1.05 (95%CI = 1.01-1.09) for AEs and 1.10 (95%CI = 1.02-1.17) for COPD-related hospitalizations. CONCLUSIONS: Lower daily step count, lower 6MWT distance, and worse SGRQ-AS predict future AEs and COPD-related hospitalizations, independent of pulmonary function and previous AE

  15. Low power very high frequency resonant converter with high step down ratio

    DEFF Research Database (Denmark)

    Madsen, Mickey Pierre; Knott, Arnold; Andersen, Michael A. E.

    2013-01-01

    This paper presents the design of a resonant converter with a switching frequency in the very high frequency range (30-300MHz), a large step down ratio and low output power. This gives the designed converters specifications which are far from previous results. The class E inverter and rectifier...

  16. 4th International Conference on Frontiers in Global Optimization

    CERN Document Server

    Pardalos, Panos

    2004-01-01

    Global Optimization has emerged as one of the most exciting new areas of mathematical programming. Global optimization has received wide attention from many fields in the past few years, due to the success of new algorithms for addressing previously intractable problems from diverse areas such as computational chemistry and biology, biomedicine, structural optimization, computer sciences, operations research, economics, and engineering design and control. This book contains refereed invited papers submitted at the 4th international conference on Frontiers in Global Optimization held at Santorini, Greece during June 8-12, 2003. Santorini is one of the few sites of Greece with wild beauty created by the explosion of a volcano which is in the middle of the gulf of the island. The mystic landscape, with its numerous multi-extrema, was an inspiring location, particularly for researchers working on global optimization. The three previous conferences on "Recent Advances in Global Optimization", "State-of-the-...

  17. The NIST Step Class Library (Step Into the Future)

    Science.gov (United States)

    1990-09-01

    Figure 6. Excerpt from a STEP exchange file based on the Geometry model. The NIST STEP Class Library, Page 13. An issue of concern in this... Scheifler, R., Gettys, J., and Newman, P., X Window System: C Library and Protocol Reference. Digital Press, Bedford, Mass., 1988. [Schenck90] Schenck, D

  18. Valve cam design using numerical step-by-step method

    OpenAIRE

    Vasilyev, Aleksandr; Bakhracheva, Yuliya; Kabore, Ousman; Zelenskiy, Yuriy

    2014-01-01

    This article studies a numerical step-by-step method of cam profile design. The results of the study are used for designing the internal combustion engine valve gear. This method allows cams to be profiled for peak efficiency in view of the many restrictions connected with valve gear serviceability and reliability.

  19. Algorithm comparison for schedule optimization in MR fingerprinting.

    Science.gov (United States)

    Cohen, Ouri; Rosen, Matthew S

    2017-09-01

    In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Fast machine-learning online optimization of ultra-cold-atom experiments.

    Science.gov (United States)

    Wigley, P B; Everitt, P J; van den Hengel, A; Bastian, J W; Sooriyabandara, M A; McDonald, G D; Hardman, K S; Quinlivan, C D; Manju, P; Kuhn, C C N; Petersen, I R; Luiten, A N; Hope, J J; Robins, N P; Hush, M R

    2016-05-16

    We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
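The Gaussian-process loop described above (model the parameter-to-quality mapping from past runs, then choose the next setting to try) can be sketched in a few lines. This is a hedged illustration, not the authors' code: the one-dimensional `quality` function, the RBF kernel length scale, and the upper-confidence-bound acquisition rule are all assumptions standing in for the real multi-parameter experiment.

```python
import numpy as np

# Toy stand-in for "BEC quality" as a function of a single ramp parameter.
# The real experiment tunes many parameters; this 1-D objective, the kernel
# length scale and the UCB rule below are illustrative assumptions.
def quality(x):
    return np.exp(-(x - 0.7) ** 2 / 0.05)

def rbf_kernel(a, b, length=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Gaussian-process posterior mean and std on the candidate grid Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y
    var = np.diag(rbf_kernel(Xs, Xs) - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 3)          # three initial random "experiments"
y = quality(X)
grid = np.linspace(0.0, 1.0, 200)

for _ in range(10):                   # online loop: model, acquire, measure
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * sd)]   # upper-confidence-bound pick
    X = np.append(X, x_next)
    y = np.append(y, quality(x_next))

best = X[np.argmax(y)]                # best ramp parameter found so far
```

The statistical model is also what provides the parameter-importance insight mentioned in the abstract: parameters whose kernel length scales grow very large during fitting barely affect the predicted quality.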

  1. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function T_p: I → I. Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
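The single-step operator T_p described above can be made concrete for a small propositional program. The three-clause program below is illustrative, not from the paper, and the neural-network encoding is omitted; the sketch only shows T_p and its iteration to a fixed point.

```python
# Illustrative three-clause normal logic program (not from the paper):
#   p :- q.      q.      r :- p, not s.
# Each clause is (head, positive body atoms, negated body atoms).
program = [
    ("p", ["q"], []),
    ("q", [], []),
    ("r", ["p"], ["s"]),
]

def tp(interpretation):
    """One application of the single-step operator T_p: I -> I."""
    return {
        head
        for head, pos, neg in program
        if all(a in interpretation for a in pos)
        and all(a not in interpretation for a in neg)
    }

# Iterate T_p from the empty interpretation up to its fixed point.
I = set()
while tp(I) != I:
    I = tp(I)
# I == {"q", "p", "r"}: the least fixed point of T_p for this program
```

Training a network to reproduce `tp` on sampled interpretations is, in essence, how the data sets described in the abstract would be generated.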

  2. Diffusion tensor imaging fiber tracking with reliable tracking orientation and flexible step size

    Science.gov (United States)

    Yao, Xufeng; Wang, Manning; Chen, Xinrong; Nie, Shengdong; Li, Zhexu; Xu, Xiaoping; Zhang, Xuelong; Song, Zhijian

    2013-01-01

    We propose a method of fiber tracking with reliable tracking orientation and a flexible step size. A new directional strategy was defined to select one optimal tracking orientation from each directional set, based on the single-tensor model and the two-tensor model. The directional set of planar voxels contained three tracking directions: two from the two-tensor model and one from the single-tensor model. The directional set of linear voxels contained only one principal vector. In addition, a flexible step size, rather than a fixed step size, was implemented to improve the accuracy of fiber tracking. We used two sets of human data to assess the performance of our method; one was from a healthy volunteer and the other from a patient with low-grade glioma. Results verified that our method was superior to the single-tensor Fiber Assignment by Continuous Tracking and the two-tensor eXtended Streamline Tractography for showing detailed images of fiber bundles. PMID:25206444

  3. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function T_p: I → I. Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  4. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function T_p: I → I. Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  5. Single-step linking transition from superdeformed to spherical states in ¹⁴³Eu

    Energy Technology Data Exchange (ETDEWEB)

    Atac, A.; Axelsson, A.; Persson, J. [Uppsala Univ. (Sweden)] [and others]

    1996-12-31

    A discrete γ-ray transition which connects the second lowest SD state with a normally deformed one in ¹⁴³Eu has been discovered. It has an energy of 3360.6 keV and carries 3.2% of the full intensity of the SD band. It feeds into a nearly spherical state which is above the I = 35/2⁺, E = 4947 keV level. The exact placement of the single-step link could, however, not be established due to the especially complicated level scheme in the region of interest. The angular correlation study favours a stretched dipole character for the 3360.6 keV transition. The single-step link agrees well with the previously determined two-step links, both with respect to energy and spin.

  6. Optimal space-energy splitting in MCNP with the DSA

    International Nuclear Information System (INIS)

    Dubi, A.; Gurvitz, N.

    1990-01-01

    The Direct Statistical Approach (DSA) in particle transport theory is based on the possibility of obtaining exact explicit expressions for the dependence of the second moment and calculation time on the splitting parameters. This allows the automatic optimization of the splitting parameters by 'learning' the bulk parameters from which the problem-dependent coefficients of the quality function (second moment × time) are constructed. The above procedure was exploited to implement an automatic optimization of the splitting parameters in the Monte Carlo Neutron Photon (MCNP) code. This was done in a number of steps. In the first instance, only spatial surface splitting was considered. In this step, the major obstacle has been the truncation of an infinite series of 'products' of 'surface paths' leading from the source to the detector. Encouraging results from the first phase led to the inclusion of full space/energy phase-space splitting. (author)

  7. Optimization of the triple-pressure combined cycle power plant

    Directory of Open Access Journals (Sweden)

    Alus Muammer

    2012-01-01

    Full Text Available The aim of this work was to develop a new system for optimization of parameters for combined cycle power plants (CCGTs) with triple-pressure heat recovery steam generator (HRSG). Thermodynamic and thermoeconomic optimizations were carried out. The objective of the thermodynamic optimization is to enhance the efficiency of the CCGTs and to maximize the power production in the steam cycle (steam turbine gross power). Improvement of the efficiency of the CCGT plants is achieved through optimization of the operating parameters: temperature difference between the gas and steam (pinch point, P.P.) and the steam pressure in the HRSG. The objective of the thermoeconomic optimization is to minimize the production costs per unit of the generated electricity. Defining the optimal P.P. was the first step in the optimization procedure. Then, through the developed optimization process, other optimal operating parameters (steam pressure and condenser pressure) were identified. The developed system was demonstrated for the case of a 282 MW CCGT power plant with a typical design for commercial combined cycle power plants. The optimized combined cycle was compared with the regular CCGT plant.
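The two-stage procedure described above (first locate the optimal pinch point, then sweep the remaining parameters) can be sketched with a toy cost model. All coefficients below are invented for illustration; they are not plant data.

```python
import numpy as np

# Illustrative production-cost model (invented coefficients, not plant data):
# a smaller pinch point raises efficiency but needs a larger, costlier HRSG,
# and cost is assumed convex in the steam pressure around a nominal optimum.
def production_cost(pinch_point_K, steam_pressure_bar):
    hrsg_cost = 40.0 / pinch_point_K            # heat-exchange area grows as P.P. shrinks
    efficiency_penalty = 0.08 * pinch_point_K   # larger P.P. -> less steam raised
    pressure_penalty = 0.002 * (steam_pressure_bar - 120.0) ** 2
    return 30.0 + hrsg_cost + efficiency_penalty + pressure_penalty

# Step 1: fix a nominal pressure and locate the optimal pinch point.
pps = np.linspace(5.0, 40.0, 200)
pp_opt = pps[np.argmin(production_cost(pps, 120.0))]

# Step 2: with the optimal pinch point fixed, sweep the steam pressure.
ps = np.linspace(60.0, 180.0, 200)
p_opt = ps[np.argmin(production_cost(pp_opt, ps))]
```

The sequential sweep mirrors the paper's procedure; a real study would replace `production_cost` with the thermodynamic and thermoeconomic models of the plant.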

  8. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    Science.gov (United States)

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge of the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1 seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2 human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants results point toward their optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
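The MLE cue-combination rule referred to above has a simple closed form: each cue is weighted by its inverse variance, and the fused estimate always has lower variance than either cue alone. The step-time numbers below are illustrative, not data from the experiments.

```python
# MLE fusion of two noisy cues: weight each by its inverse variance.
# The means/variances below are illustrative, not experimental data.
def mle_combine(mu_a, var_a, mu_v, var_v):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = (var_a * var_v) / (var_a + var_v)   # always <= min(var_a, var_v)
    return mu, var

# Auditory cue: 0.50 s with variance 0.010; visual cue: 0.56 s, variance 0.030.
mu, var = mle_combine(0.50, 0.010, 0.56, 0.030)
# mu = 0.515, var = 0.0075
```

The variance reduction is what makes the audiovisual condition testable against the unimodal ones: optimal integration predicts lower synchronization variability with both cues than with either alone.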

  9. Step-by-Step Construction of Gene Co-expression Networks from High-Throughput Arabidopsis RNA Sequencing Data.

    Science.gov (United States)

    Contreras-López, Orlando; Moyano, Tomás C; Soto, Daniela C; Gutiérrez, Rodrigo A

    2018-01-01

    The rapid increase in the availability of transcriptomics data generated by RNA sequencing represents both a challenge and an opportunity for biologists without bioinformatics training. The challenge is handling, integrating, and interpreting these data sets. The opportunity is to use this information to generate testable hypotheses to understand molecular mechanisms controlling gene expression and biological processes (Fig. 1). A successful strategy to generate tractable hypotheses from transcriptomics data has been to build undirected network graphs based on patterns of gene co-expression. Many examples of new hypotheses derived from network analyses can be found in the literature, spanning different organisms including plants and specific fields such as root developmental biology. In order to make the process of constructing a gene co-expression network more accessible to biologists, here we provide step-by-step instructions using published RNA-seq experimental data obtained from a public database. Similar strategies have been used in previous studies to advance root developmental biology. This guide includes basic instructions for the operation of widely used open source platforms such as Bio-Linux, R, and Cytoscape. Even though the data we used in this example was obtained from Arabidopsis thaliana, the workflow developed in this guide can be easily adapted to work with RNA-seq data from any organism.
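The core step of such a protocol (correlate expression profiles, keep edges above a cutoff) can be sketched without the full Bio-Linux/R/Cytoscape pipeline. The toy expression matrix, the gene names, and the 0.8 cutoff below are assumptions for illustration.

```python
import numpy as np

# Toy expression matrix: rows = genes, columns = samples. Synthetic data
# standing in for normalized RNA-seq counts; gene names are invented.
rng = np.random.default_rng(1)
base = rng.normal(size=8)                    # shared expression pattern
expr = np.vstack([
    base + rng.normal(scale=0.1, size=8),    # geneA
    base + rng.normal(scale=0.1, size=8),    # geneB, co-expressed with geneA
    rng.normal(size=8),                      # geneC, unrelated
])
genes = ["geneA", "geneB", "geneC"]

# Pearson correlation between every pair of gene expression profiles.
corr = np.corrcoef(expr)

# Keep an undirected edge wherever |r| exceeds a chosen cutoff (0.8 here;
# real studies derive the cutoff from the data, e.g. via a scale-free fit).
cutoff = 0.8
edges = [
    (genes[i], genes[j], round(float(corr[i, j]), 2))
    for i in range(len(genes))
    for j in range(i + 1, len(genes))
    if abs(corr[i, j]) >= cutoff
]
```

The resulting edge list is exactly the kind of table that can be loaded into Cytoscape for visualization and module detection.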

  10. Effect of Calcaneus Fracture Gap Without Step-Off on Stress Distribution Across the Subtalar Joint.

    Science.gov (United States)

    Barrick, Brett; Joyce, Donald A; Werner, Frederick W; Iannolo, Maria

    2017-03-01

    Subtalar arthritis is a common consequence following calcaneal fracture, and its development is related to the severity of the fracture. Previous calcaneal fracture models have demonstrated altered contact characteristics when a step-off is created in the posterior facet articular surface. Changes in posterior facet contact characteristics have not been previously characterized for a calcaneal fracture gap without step-off. The contact characteristics (peak pressure, area of contact, and centroid of pressure) of the posterior facet of the subtalar joint were determined in 6 cadaveric specimens. After creating a calcaneal fracture to simulate a Sanders type II fracture, the contact characteristics were determined with the posterior facet anatomically reduced, followed by incremental increases in fracture gap displacement of 2, 3, and 5 mm without a step-off of the articular surface. Peak pressure on the medial fragment was significantly less with a 5-mm gap compared to a 2- or 3-mm gap or the reduced state. On the lateral fragment, the peak pressure was significantly increased with a 5-mm gap compared to a 2- or 3-mm gap. Contact area significantly changed with increased gap. In this study, there were no significant differences in contact characteristics between a <3-mm gap and an anatomically reduced fracture, within the limitations of the study, which included restricting axial loading to 50% of donor body weight. A small amount of articular incongruity without a step-off can be tolerated by the subtalar joint, in contrast to articular incongruity with a step-off present.

  11. Steepest descent method implementation on unconstrained optimization problem using C++ program

    Science.gov (United States)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Recently, much research has been done on obtaining an appropriate step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed, together with the advantages and disadvantages of each step-size procedure. The development of the steepest descent method due to its step-size procedure is discussed. In order to test the performance of each step size, we ran a steepest descent procedure in a C++ program. We implemented it on an unconstrained optimization test problem with two variables, and then compared the numerical results of each step-size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each case of the problem.
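The procedure reviewed above can be sketched briefly. The paper's implementation is in C++; this is an illustrative Python version with one particular step-size rule, a backtracking (Armijo) line search, on a made-up two-variable quadratic rather than the paper's test problem.

```python
import numpy as np

# Illustrative two-variable test problem (not the one from the paper):
# f(x, y) = (x - 1)^2 + 4 (y + 2)^2, minimized at (1, -2).
def f(x):
    return (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0)])

def steepest_descent(x0, tol=1e-8, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:        # stop when the gradient vanishes
            break
        step = 1.0                         # backtracking (Armijo) line search
        while f(x - step * g) > f(x) - 1e-4 * step * (g @ g):
            step *= 0.5
        x = x - step * g
    return x

x_min = steepest_descent([0.0, 0.0])       # approaches the minimizer (1, -2)
```

Swapping the `while` loop for a fixed step or an exact line search is how the different step-size procedures compared in the paper would be exercised on the same problem.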

  12. Emergence of an optimal search strategy from a simple random walk.

    Science.gov (United States)

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
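The baseline the authors start from, a fixed-step-length walk with a uniformly random direction at each step, is easy to simulate; normal diffusion shows up as mean squared displacement growing linearly with the number of steps (E[R²] = n for unit steps), whereas a Lévy-like or adaptive walker grows faster. The adaptive rule from the paper is omitted in this sketch.

```python
import math
import random

def random_walk(n_steps, rng, step_length=1.0):
    """2D walk: a uniformly random direction and a fixed length at each step."""
    x = y = 0.0
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step_length * math.cos(theta)
        y += step_length * math.sin(theta)
    return x, y

# Mean squared displacement over many walks. For normal diffusion with unit
# steps E[R^2] = n; a super-diffusive (Levy-like) walker would grow faster.
rng = random.Random(0)
n_steps, n_walks = 100, 2000
total = 0.0
for _ in range(n_walks):
    x, y = random_walk(n_steps, rng)
    total += x * x + y * y
msd = total / n_walks        # close to n_steps = 100
```

Estimating `msd` at several values of `n_steps` and fitting the growth exponent is the standard way to distinguish normal diffusion from the super-diffusion reported in the abstract.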

  13. Influence maximization in complex networks through optimal percolation

    Science.gov (United States)

    Morone, Flaviano; Makse, Hernan; CUNY Collaboration

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524, 65-68 (2015)
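The minimization described above is carried out in the referenced paper with the Collective Influence (CI) algorithm. The sketch below implements only its simplest radius-1 variant, CI(i) = (k_i − 1) Σ_{j∈N(i)} (k_j − 1), applied adaptively, on an invented toy graph; the paper evaluates CI at larger radii on massive networks.

```python
# Adjacency sets for an invented toy graph: node 0 is a hub bridging two
# smaller stars, so it should be identified as the top influencer.
graph = {
    0: {1, 2, 3, 4},
    1: {0, 5, 6},
    2: {0, 7, 8},
    3: {0}, 4: {0},
    5: {1}, 6: {1}, 7: {2}, 8: {2},
}

def collective_influence(adj, node):
    """CI at radius 1: (k_i - 1) times the sum of (k_j - 1) over neighbors j."""
    k = len(adj[node])
    return (k - 1) * sum(len(adj[j]) - 1 for j in adj[node])

def top_influencers(adj, n):
    """Adaptively pick n nodes, removing the current top-CI node each round."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    picked = []
    for _ in range(n):
        best = max(adj, key=lambda u: collective_influence(adj, u))
        picked.append(best)
        for j in adj.pop(best):                   # delete node and its edges
            adj[j].discard(best)
    return picked
```

The adaptive recomputation after each removal is what distinguishes this from a one-shot degree ranking, and is why weakly connected "bridge" nodes can outrank hubs on larger graphs.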

  14. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis.

    Science.gov (United States)

    Neyeloff, Jeruza L; Fuchs, Sandra C; Moreira, Leila B

    2012-01-20

    Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but there was no previous guide available. We constructed a step-by-step guide to perform a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. It is possible to conduct a meta-analysis using only Microsoft Excel. More important, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.
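The pooling computations such a spreadsheet implements are short. The sketch below shows standard inverse-variance (fixed-effect) pooling and a DerSimonian-Laird random-effects estimate of between-study variance; the study effects and variances are invented for illustration.

```python
# Inverse-variance pooling as used in spreadsheet meta-analysis: fixed-effect
# weights are w_i = 1/v_i; the random-effects model adds tau^2 (estimated by
# DerSimonian-Laird) to each variance. Numbers below are illustrative.
effects = [0.10, 0.30, 0.35, 0.65]     # e.g. log risk ratios per study
variances = [0.04, 0.02, 0.05, 0.03]

# Fixed-effect model
w = [1.0 / v for v in variances]
pooled_fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2
q = sum(wi * (ei - pooled_fixed) ** 2 for wi, ei in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects model
w_re = [1.0 / (v + tau2) for v in variances]
pooled_random = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
```

Each formula maps directly onto a spreadsheet column (weights, weighted effects, squared deviations), which is exactly how the guide organizes the Excel worksheet.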

  15. Meta-analyses and forest plots using a Microsoft Excel spreadsheet: step-by-step guide focusing on descriptive data analysis

    Directory of Open Access Journals (Sweden)

    Neyeloff Jeruza L

    2012-01-01

    Full Text Available Abstract. Background: Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General-purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but no previous guide was available. Findings: We constructed a step-by-step guide to performing a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We also developed a second spreadsheet capable of producing customized forest plots. Conclusions: It is possible to conduct a meta-analysis using only Microsoft Excel. More importantly, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.

  16. Simplified model-based optimal control of VAV air-conditioning system

    Energy Technology Data Exchange (ETDEWEB)

    Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Construction Engineering

    2005-07-01

    The improvement of variable air volume (VAV) system performance is one of several attempts being made to minimize the high energy use associated with the operation of heating, ventilation and air conditioning (HVAC) systems. A Simplified Optimization Process (SOP), comprising controller set point strategies and a simplified VAV model, was presented in this paper. The aim of the SOP was to determine supply set points. The advantage of the SOP over previous methods was that it did not require a detailed VAV model or optimization program. In addition, the monitored data for representative local-loop control can be checked on-line, after which controller set points can be updated to ensure proper operation, selecting among realistic operating conditions those with minimum energy use. The SOP was validated using existing monitoring data and a model of an existing VAV system, and its simulated energy use was compared with that of the existing system. At each simulation step, three controller set point values were proposed and evaluated using the VAV model in order to select, for each set point, the value corresponding to the best performance of the VAV system. Simplified VAV component models were presented. Strategies for controller set points were described, including zone air temperature, duct static pressure, chilled water supply and supply air temperature set points. Simplified optimization process calculations were presented. Results indicated that the SOP provided significant energy savings when applied to specific AHU systems. In a comparison with a Detailed Optimization Process (DOP), the SOP was capable of determining set points close to those obtained by the DOP. However, it was noted that the controller set points determined by the SOP need a certain amount of time to reach optimal values when outdoor conditions or thermal loads change significantly. It was suggested that this disadvantage could be overcome by the use of a dynamic incremental value.
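    The three-candidate search at each simulation step can be sketched as follows; the quadratic energy model is a made-up stand-in for the simplified VAV model, and the set point bounds and step size are likewise illustrative.

```python
def optimize_setpoint(current, energy_model, delta=0.5, lo=12.0, hi=18.0):
    """Try current-delta, current, current+delta; keep the in-range value
    the (simplified) model predicts to use the least energy."""
    candidates = [max(lo, current - delta), current, min(hi, current + delta)]
    return min(candidates, key=energy_model)

# Hypothetical toy energy model: total energy is minimized at a supply air
# temperature of 14 C (fan energy below it, reheat energy above it).
toy_energy = lambda t: (t - 14.0) ** 2 + 3.0

setpoint = 17.0
for _ in range(10):          # one optimization call per simulation step
    setpoint = optimize_setpoint(setpoint, toy_energy)
```

    The slow convergence the authors note when conditions change is visible here: the set point can only move by one increment per simulation step.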

  17. Leading Change Step-by-Step: Tactics, Tools, and Tales

    Science.gov (United States)

    Spiro, Jody

    2010-01-01

    "Leading Change Step-by-Step" offers a comprehensive and tactical guide for change leaders. Spiro's approach has been field-tested for more than a decade and proven effective in a wide variety of public sector organizations including K-12 schools, universities, international agencies and non-profits. The book is filled with proven tactics for…

  18. Proton pump inhibitor step-down therapy for GERD: A multi-center study in Japan

    Science.gov (United States)

    Tsuzuki, Takao; Okada, Hiroyuki; Kawahara, Yoshiro; Takenaka, Ryuta; Nasu, Junichiro; Ishioka, Hidehiko; Fujiwara, Akiko; Yoshinaga, Fumiya; Yamamoto, Kazuhide

    2011-01-01

    AIM: To investigate the predictors of success in step-down of proton pump inhibitor and to assess the quality of life (QOL). METHODS: Patients who had heartburn twice a week or more were treated with 20 mg omeprazole (OPZ) once daily for 8 wk as an initial therapy (study 1). Patients whose heartburn decreased to once a week or less at the end of the initial therapy were enrolled in study 2 and treated with 10 mg OPZ as maintenance therapy for an additional 6 mo (study 2). QOL was investigated using the gastrointestinal symptom rating scale (GSRS) before initial therapy, after both 4 and 8 wk of initial therapy, and at 1, 2, 3, and 6 mo after starting maintenance therapy. RESULTS: In study 1, 108 patients were analyzed. Their characteristics were as follows; median age: 63 (range: 20-88) years, sex: 46 women and 62 men. The success rate of the initial therapy was 76%. In the patients with successful initial therapy, abdominal pain, indigestion and reflux GSRS scores were improved. In study 2, 83 patients were analyzed. Seventy of 83 patients completed the study 2 protocol. In the per-protocol analysis, 80% of 70 patients were successful for step-down. On multivariate analysis of baseline demographic data and clinical information, no previous treatment for gastroesophageal reflux disease (GERD) [odds ratio (OR) 0.255, 95% CI: 0.06-0.98] and a lower indigestion score in GSRS at the beginning of step-down therapy (OR 0.214, 95% CI: 0.06-0.73) were found to be the predictors of successful step-down therapy. The improved GSRS scores by initial therapy were maintained through the step-down therapy. CONCLUSION: OPZ was effective for most GERD patients. However, those who have had previous treatment for GERD and experience dyspepsia before step-down require particular monitoring for relapse. PMID:21472108

  19. The PDB_REDO server for macromolecular structure model optimization

    Directory of Open Access Journals (Sweden)

    Robbie P. Joosten

    2014-07-01

    Full Text Available The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB. The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011, Structure, 19, 1395–1412]. The PDB_REDO procedure aims for `constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.

  20. Fungal bioleaching of WPCBs using Aspergillus niger: Observation, optimization and kinetics.

    Science.gov (United States)

    Faraji, Fariborz; Golmohammadzadeh, Rabeeh; Rashchi, Fereshteh; Alimardani, Navid

    2018-07-01

    In this study, Aspergillus niger (A. niger) was employed as an environmentally friendly agent for fungal bioleaching of waste printed circuit boards (WPCBs). D-optimal response surface methodology (RSM) was utilized to optimize the bioleaching parameters, including the bioleaching method (one-step, two-step and spent medium) and the pulp density (0.5 g L-1 to 20 g L-1), in order to maximize the recovery of Zn, Ni and Cu from WPCBs. According to high performance liquid chromatography analysis, citric, oxalic, malic and gluconic acids were the most abundant organic acids produced by A. niger in the 21-day experiments. Maximum recoveries of 98.57% of Zn, 43.95% of Ni and 64.03% of Cu were achieved through the acidolysis and complexolysis dissolution mechanisms of the organic acids. Based on the kinetic studies, the rate-controlling mechanism for Zn dissolution in the one-step approach was found to be diffusion through the liquid film, while it was mixed control for both the two-step and spent-medium approaches. Furthermore, the rate of Cu dissolution, which is diffusion-controlled in the one-step and two-step approaches, was found to be controlled by chemical reaction in the spent medium. For Ni, the rate is controlled by chemical reaction in all the methods studied. Overall, A. niger proved capable of leaching 100% of Zn, 80.39% of Ni and 85.88% of Cu in 30 days.
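    Rate-controlling mechanisms of this kind are usually identified by checking which standard shrinking-core expression turns the conversion-time data into a straight line through the origin. The model forms below are the textbook ones, not necessarily the exact expressions used in the paper:

```python
def r_squared_through_origin(ts, ys):
    """R^2 of the best-fit line y = k*t forced through the origin."""
    k = sum(t * y for t, y in zip(ts, ys)) / sum(t * t for t in ts)
    ss_res = sum((y - k * t) ** 2 for t, y in zip(ts, ys))
    ss_tot = sum(y * y for y in ys)
    return 1.0 - ss_res / ss_tot

# Standard shrinking-core linearizations of conversion x versus time
MODELS = {
    "film diffusion": lambda x: x,
    "chemical reaction": lambda x: 1 - (1 - x) ** (1 / 3),
    "product-layer diffusion": lambda x: 1 - 3 * (1 - x) ** (2 / 3) + 2 * (1 - x),
}

def rate_controlling_step(ts, conversions):
    """Transform x(t) with each candidate model; the most linear one wins."""
    return max(MODELS, key=lambda m: r_squared_through_origin(
        ts, [MODELS[m](x) for x in conversions]))
```

    Feeding it synthetic reaction-controlled data recovers the generating mechanism.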

  1. Optimization of a bundle divertor for FED

    International Nuclear Information System (INIS)

    Hively, L.M.; Rothe, K.E.; Minkoff, M.

    1982-01-01

    Optimal double-T bundle divertor configurations have been obtained for the Fusion Engineering Device (FED). On-axis ripple is minimized, while satisfying a series of engineering constraints. The ensuing non-linear optimization problem is solved via a sequence of quadratic programming subproblems, using the VMCON algorithm. The resulting divertor designs are substantially improved over previous configurations.

  2. The optimal extraction of feature algorithm based on KAZE

    Science.gov (United States)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    KAZE is a novel 2D feature extraction algorithm that operates in a nonlinear scale space. However, computing the nonlinear scale space and constructing the KAZE feature vectors is significantly more expensive than SIFT or SURF. In this paper, the given image is used to build the nonlinear scale space up to a maximum evolution time through efficient Additive Operator Splitting (AOS) techniques and variable conductance diffusion. Adjusting the conductivity parameter improves the construction of the nonlinear scale space and simplifies the image conductivities for each dimension, reducing the computation. Points of interest are then detected as maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, the computation of feature vectors is optimized by a wavelet transform method, which avoids the second Gaussian smoothing used in KAZE and distinctly cuts down the complexity of the algorithm in the vector building and description steps. The dominant orientation is obtained, as in SURF, by summing the responses within a sliding circle segment covering an angle of π/3 in a circular area of radius 6σ, with a sampling step of size σ. Finally, descriptor extraction in a multidimensional patch at the given scale, centered over the point of interest and rotated to align its dominant orientation to a canonical direction, simplifies the feature description by reducing its dimensionality, as in the PCA-SIFT method. Although the features are somewhat more expensive to compute than SIFT due to the construction of the nonlinear scale space, compared to SURF the results reveal a step forward in detection, description and application performance, as shown by the contrast experiments.
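    The heart of the nonlinear scale space is diffusion whose conductance drops at edges. A toy explicit step with a Perona-Malik-style conductance is sketched below; KAZE itself uses the semi-implicit AOS scheme precisely so that large time steps remain stable, so this explicit version is only an illustration of the conductance idea.

```python
def diffusion_step(img, lam=0.1, dt=0.2):
    """One explicit step of edge-preserving nonlinear diffusion on a 2D
    list-of-lists image. The conductance g falls where gradients are large,
    so edges diffuse slowly while flat regions are smoothed."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            total = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    grad = img[ni][nj] - img[i][j]
                    g = 1.0 / (1.0 + (grad / lam) ** 2)  # Perona-Malik conductance
                    total += g * grad
            out[i][j] = img[i][j] + dt * total
    return out
```

    Because each inter-pixel flux appears with opposite signs in the two pixels it connects, the step conserves total intensity while reducing contrast.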

  3. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    Science.gov (United States)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle number for each beamlet is adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in a lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% difference in fluence map and 1% difference in dose from the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with, on average, 1.2 × 10^5 particles per beamlet.
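    The intensity-driven allocation idea can be sketched in a few lines. The floor and budget figures below are made up; the real scheme also ties the per-beamlet sample size to a statistical-uncertainty target.

```python
def allocate_particles(intensities, budget, floor=1000):
    """Give each beamlet a minimum number of MC histories, then distribute
    the remaining budget in proportion to its current optimized intensity."""
    n = len(intensities)
    total = sum(intensities)
    remaining = budget - floor * n
    counts = [floor + int(remaining * w / total) for w in intensities]
    # Hand any rounding leftover to the strongest beamlet so the budget is exact
    counts[max(range(n), key=lambda i: intensities[i])] += budget - sum(counts)
    return counts
```

    Beamlets driven to near-zero intensity by the optimizer keep only the floor, which is where the roughly eight-fold saving reported above comes from.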

  4. Control strategy for power management, efficiency-optimization and operating-safety of a 5-kW solid oxide fuel cell system

    International Nuclear Information System (INIS)

    Zhang, Lin; Jiang, Jianhua; Cheng, Huan; Deng, Zhonghua; Li, Xi

    2015-01-01

    Highlights: • Efficiency optimization associated with simultaneous power and thermal management. • Fast load tracking, fuel starvation, high efficiency and operating safety are considered. • An open-loop pre-conditioning current strategy is proposed for load step-up transients. • A feedback control scheme is proposed for load step-up transients. - Abstract: Slow power tracking, operating safety (especially fuel exhaustion) and high efficiency are the key issues for integrated solid oxide fuel cell (SOFC) systems during power step-up transients; they result in relatively poor dynamic capability and make transient load following very challenging, so they must be addressed. To this end, this paper first focuses on efficiency optimization associated with simultaneous power and thermal management of a 5-kW SOFC system. In particular, a traverse optimization process including a cubic convolution interpolation algorithm is proposed to obtain the optimal operating points (OOPs) with the maximum efficiency. The paper then investigates the impact of current on system step-up transient performance, and a two-stage pre-conditioning current strategy and a feedback power reference control scheme are proposed for load step-up transients to balance fast load following against fuel starvation, after which safe thermal transients are validated. Simulation results show the efficacy of the control design by demonstrating fast load following while maintaining safe operation; thus a safe, efficient and fast load transition can be achieved.
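    The traverse search for optimal operating points can be sketched as below. The power and efficiency models are invented toys; the real work evaluates a full SOFC model and refines the grid with cubic convolution interpolation, which is omitted here.

```python
def find_oop(power_demand, currents, fuel_flows, power_model, eff_model, tol=0.05):
    """Traverse the (current, fuel flow) grid; among points whose predicted
    power matches the demand within tol, keep the most efficient one."""
    best, best_eff = None, -1.0
    for i in currents:
        for f in fuel_flows:
            if abs(power_model(i, f) - power_demand) <= tol * power_demand:
                e = eff_model(i, f)
                if e > best_eff:
                    best, best_eff = (i, f), e
    return best, best_eff

# Invented toy models: stack power is limited by either current or fuel
# supply; efficiency is electrical power over fuel energy input.
power = lambda i, f: min(0.1 * i, 8.0 * f)
eff = lambda i, f: power(i, f) / (10.0 * f)

oop, oop_eff = find_oop(5.0, [45, 50, 55], [0.6, 0.7, 0.8, 0.9], power, eff)
```

    In this toy the search correctly prefers the leanest fuel flow that still meets the 5 kW demand, since excess fuel only lowers efficiency.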

  5. Microsoft Office Word 2007 step by step

    CERN Document Server

    Cox, Joyce

    2007-01-01

    Experience learning made easy, and quickly teach yourself how to create impressive documents with Word 2007. With Step By Step, you set the pace, building and practicing the skills you need, just when you need them! Apply styles and themes to your document for a polished look; add graphics and text effects, and see a live preview; organize information with new SmartArt diagrams and charts; insert references, footnotes, indexes, a table of contents; send documents for review and manage revisions; turn your ideas into blogs, Web pages, and more. Your all-in-one learning experience includes files for building skills.

  6. Step by Step Microsoft Office Visio 2003

    CERN Document Server

    Lemke, Judy

    2004-01-01

    Experience learning made easy, and quickly teach yourself how to use Visio 2003, the Microsoft Office business and technical diagramming program. With STEP BY STEP, you can take just the lessons you need, or work from cover to cover. Either way, you drive the instruction, building and practicing the skills you need, just when you need them! Produce computer network diagrams, organization charts, floor plans, and more; use templates to create new diagrams and drawings quickly; add text, color, and 1-D and 2-D shapes; insert graphics and pictures, such as company logos; connect shapes to create a basic flowchart.

  7. A methodology for optimal sizing of autonomous hybrid PV/wind system

    International Nuclear Information System (INIS)

    Diaf, S.; Diaf, D.; Belhamel, M.; Haddadi, M.; Louche, A.

    2007-01-01

    The present paper presents a methodology to perform the optimal sizing of an autonomous hybrid PV/wind system. The methodology aims at finding the configuration, among a set of system components, which meets the desired system reliability requirements with the lowest value of levelized cost of energy. Modelling the hybrid PV/wind system is the first step in the optimal sizing procedure. In this paper, more accurate mathematical models for characterizing the PV module, wind generator and battery are proposed. The second step consists of optimizing the sizing of the system according to the loss of power supply probability (LPSP) and levelized cost of energy (LCE) concepts. Considering various types and capacities of system devices, the configurations that can meet the desired system reliability are obtained by changing the type and size of the devices. The configuration with the lowest LCE gives the optimal choice. Applying this method to an assumed PV/wind hybrid system to be installed on Corsica Island, the simulation results show that the optimal configuration meeting the desired system reliability requirement (LPSP = 0) with the lowest LCE comprises a 125 W photovoltaic module, one wind generator (600 W) and storage batteries (253 Ah). On the other hand, the choice of device type plays an important role in cost reduction as well as in energy production.
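    The two-step sizing logic can be sketched as follows: simulate each candidate configuration against load and resource time series to get its LPSP, keep the configurations meeting the reliability target, and choose the cheapest. The energy-balance and cost models below are deliberately simplified stand-ins.

```python
def simulate_lpsp(config, pv_gen, wind_gen, load):
    """Loss of Power Supply Probability for one configuration: the fraction
    of demand left unserved after a simple battery energy balance (battery
    starts full; charge/discharge losses are ignored for brevity)."""
    n_pv, n_wt, batt_kwh = config
    soc, unserved = batt_kwh, 0.0
    for pv, wd, ld in zip(pv_gen, wind_gen, load):
        gen = n_pv * pv + n_wt * wd
        if gen >= ld:
            soc = min(batt_kwh, soc + gen - ld)   # store the surplus
        else:
            supplied = min(soc, ld - gen)          # discharge the battery
            soc -= supplied
            unserved += ld - gen - supplied
    return unserved / sum(load)

def optimal_sizing(candidates, cost, pv_gen, wind_gen, load, target_lpsp=0.0):
    """Step 2: keep configurations meeting the reliability target, then pick
    the one with the lowest cost (standing in for LCE)."""
    feasible = [c for c in candidates
                if simulate_lpsp(c, pv_gen, wind_gen, load) <= target_lpsp]
    return min(feasible, key=cost) if feasible else None
```

    With a day/night PV profile, an undersized PV-plus-battery option fails the LPSP = 0 target and the search falls through to the cheapest reliable configuration.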

  8. A practical multiscale approach for optimization of structural damping

    DEFF Research Database (Denmark)

    Andreassen, Erik; Jensen, Jakob Søndergaard

    2016-01-01

    A simple and practical multiscale approach suitable for topology optimization of structural damping in a component ready for additive manufacturing is presented.The approach consists of two steps: First, the homogenized loss factor of a two-phase material is maximized. This is done in order...

  9. Stepped fans and facies-equivalent phyllosilicates in Coprates Catena, Mars

    Science.gov (United States)

    Grindrod, P. M.; Warner, N. H.; Hobley, D. E. J.; Schwartz, C.; Gupta, S.

    2018-06-01

    Stepped fan deposits and phyllosilicate mineralogies are relatively common features on Mars but have not previously been found in association with each other. Both of these features are widely accepted to be the result of aqueous processes, but the assumed role and nature of any water vary. In this study we have investigated two stepped fan deposits in Coprates Catena, Mars, which have a genetic link to light-toned material that is rich in Fe-Mg phyllosilicate phases. Although of different sizes and in separate, but adjacent, trough-like depressions, we identify similar features at these stepped fans and phyllosilicates that are indicative of similar formation conditions and processes. Our observations of the overall geomorphology, mineralogy and chronology of these features are consistent with a two-stage formation process, whereby deposition in the troughs first occurs into shallow standing water or playas, forming fluvial or alluvial fans that terminate in delta deposits and interfinger with interpreted lacustrine facies, with a later period of deposition under sub-aerial conditions, forming alluvial fan deposits. We suggest that the distinctive stepped appearance of these fans is the result of aeolian erosion, and is not a primary depositional feature. This combined formation framework for stepped fans and phyllosilicates can also explain other similar features on Mars, and adds to the growing evidence of fluvial activity in the equatorial region of Mars during the Hesperian and Amazonian.

  10. Two-step design method for highly compact three-dimensional freeform optical system for LED surface light source.

    Science.gov (United States)

    Mao, Xianglong; Li, Hongtao; Han, Yanjun; Luo, Yi

    2014-10-20

    Designing an illumination system for a surface light source with a strict compactness requirement is quite challenging, especially for the general three-dimensional (3D) case. In accordance with the two key features of an expected illumination distribution, i.e., a well-controlled boundary and a precise illumination pattern, a two-step design method is proposed in this paper for highly compact 3D freeform illumination systems. In the first step, a target shape scaling strategy is combined with an iterative feedback modification algorithm to generate an optimized freeform optical system with a well-controlled boundary of the target distribution. In the second step, a set of selected radii of the system obtained in the first step are optimized to further improve the illuminating quality within the target region. The method is quite flexible and effective for designing highly compact optical systems, with almost no restriction on the shape of the desired target field. As examples, three highly compact freeform lenses, with the ratio of the center height h of the lens to the maximum dimension D of the source kept ≤ 2.5:1, are designed for LED surface light sources to form a uniform illumination distribution on a rectangular, a cross-shaped and a complex cross-pierced target plane, respectively. High light control efficiency of η > 0.7 as well as low relative standard illumination deviation of RSD < 0.07 is obtained simultaneously for all three design examples.

  11. Stepping out: dare to step forward, step back, or just stand still and breathe.

    Science.gov (United States)

    Waisman, Mary Sue

    2012-01-01

    It is important to step out and make a difference. We have one of the most unique and diverse professions that allows for diversity in thought and practice, permitting each of us to grow in our unique niches and make significant contributions. I was frightened to 'step out' to go to culinary school at the age of 46, but it changed forever the way I look at my profession and I have since experienced the most enjoyable and innovative career. There are also times when it is important to 'step back' to relish the roots of our profession; to help bring food back into nutrition; to translate all of our wonderful science into a language of food that Canadians understand. We all need to take time to 'just stand still and breathe': to celebrate our accomplishments, reflect on our actions, ensure we are heading toward our vision, keep the profession vibrant and relevant, and cherish one another.

  12. Tractable Pareto Optimization of Temporal Preferences

    Science.gov (United States)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
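    Over a finite pool of candidate solutions, re-applying WLO in this way amounts to a leximax comparison: first compare worst preference values, then next-worst, and so on. A compact sketch (the paper's fix-and-reoptimize iteration reduces to this ordering when candidates can be enumerated):

```python
def wlo_best(candidates):
    """Plain Weakest Link Optimization: maximize the minimum preference."""
    return max(candidates, key=min)

def iterated_wlo_best(candidates):
    """Iterated WLO: among solutions tied on the worst value, compare the
    next-worst value, and so on. Sorting each preference vector ascending
    and comparing lexicographically implements exactly this (leximax)."""
    return max(candidates, key=sorted)
```

    On preference vectors tied at their minimum, plain WLO cannot choose, while the iterated criterion rewards improving the remaining values, which is also why its solutions are Pareto optimal.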

  13. Novel Verification Method for Timing Optimization Based on DPSO

    Directory of Open Access Journals (Sweden)

    Chuandong Chen

    2018-01-01

    Full Text Available Timing optimization for logic circuits is one of the key steps in logic synthesis. Existing timing optimization results are mainly based on various intelligence algorithms, and are therefore neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms over the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of mixed polarity Reed-Muller (MPRM) logic circuits. Second, the Design Compiler (DC) tool was used to optimize the timing of the same MPRM logic circuits through special settings and constraints. Finally, the timing optimization results of the two approaches were compared on MCNC benchmark circuits: DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths relative to DC for a number of MCNC benchmark circuits. The proposed verification method directly ascertains whether an intelligence algorithm has a better timing optimization ability than DC.
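    A minimal binary DPSO of the Kennedy-Eberhart kind is sketched below. The MPRM polarity encoding and the delay evaluator are replaced by a made-up bit-matching fitness, so this only illustrates the search mechanics, not the paper's circuit model.

```python
import math, random

def dpso(n_bits, fitness, n_particles=20, iters=60, seed=1):
    """Binary PSO: each velocity component is squashed by a sigmoid into the
    probability that the corresponding bit is set to 1."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g_idx = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g_idx][:], pbest_f[g_idx]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.4 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-4.0, min(4.0, vel[i][d]))  # clamp keeps bits flippable
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if rng.random() < prob else 0
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

    The inertia, acceleration and clamp constants are conventional textbook values, not the ones used in the paper.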

  14. A methodology for modeling photocatalytic reactors for indoor pollution control using previously estimated kinetic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina); Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)

    2012-04-15

    Highlights: • Indoor pollution control via photocatalytic reactors. • Scaling-up methodology based on previously determined mechanistic kinetics. • Radiation interchange model between catalytic walls using configuration factors. • Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is presented. The methodology involves, first, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple-geometry continuous reactor operating under a kinetic control regime at steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration: a corrugated wall type reactor using nanosize TiO2 as catalyst, irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12), with the radiation model introduced externally. The model results were compared with experiments in a corrugated wall, bench-scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error of less than 4%.
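    The first step, estimating kinetic parameters from bench-reactor data, can be sketched as below. The Langmuir-Hinshelwood rate form (common for photocatalytic degradation) and all numbers are assumptions for illustration, and a grid search stands in for the paper's nonlinear optimization algorithm.

```python
def rate(c, k, K):
    """Assumed Langmuir-Hinshelwood-type rate law r = k*K*C / (1 + K*C)."""
    return k * K * c / (1.0 + K * c)

def fit_parameters(concs, rates, k_grid, K_grid):
    """Least-squares grid search over candidate (k, K) pairs; the original
    work used a nonlinear optimization algorithm instead."""
    def sse(k, K):
        return sum((r - rate(c, k, K)) ** 2 for c, r in zip(concs, rates))
    return min(((k, K) for k in k_grid for K in K_grid), key=lambda p: sse(*p))
```

    On synthetic data generated with known parameters, the search recovers them exactly when they lie on the grid.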

  15. Application of enhanced electronegative multimodal chromatography as the primary capture step for immunoglobulin G purification.

    Science.gov (United States)

    Wang, Yanli; Chen, Quan; Xian, Mo; Nian, Rui; Xu, Fei

    2018-06-01

    In recent studies, electronegative multimodal chromatography with Eshmuno HCX was demonstrated to be a highly promising recovery step for direct immunoglobulin G (IgG) capture from undiluted cell culture fluid. In this study, the binding properties of HCX toward IgG at different pH/salt combinations were systematically studied, and its purification performance was significantly enhanced by lowering the washing pH and conductivity after high-capacity binding of IgG under its optimal conditions. A single polishing step gave an end-product with non-histone host cell protein (nh-HCP) below 1 ppm, DNA below 1 ppb and aggregates below 0.5%, with an overall IgG recovery of 86.2%. The whole non-affinity-chromatography-based two-column process supports direct feed loading without buffer adjustment, thus considerably boosting overall productivity and cost savings.

  16. Optimal execution in high-frequency trading with Bayesian learning

    Science.gov (United States)

    Du, Bian; Zhu, Hongliang; Zhao, Jingdong

    2016-11-01

    We consider optimal trading strategies in which traders submit bid and ask quotes to maximize the expected quadratic utility of total terminal wealth in a limit order book. The trader's bid and ask quotes will be changed by the Poisson arrival of market orders. Meanwhile, the trader may update his estimate of other traders' target sizes and directions by Bayesian learning. The solution of optimal execution in the limit order book is a two-step procedure. First, we model inactive trading with no limit orders in the market; the dealer simply holds dollars and shares of stock until the terminal time. Second, he calibrates his bid and ask quotes to the limit order book. The optimal solutions are given by dynamic programming and are in fact globally optimal. We also give numerical simulations of the value function and optimal quotes in the last part of the article.

  17. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
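For a separable Hamiltonian, the symplectic Euler scheme referenced above updates the momentum explicitly and then the position using the new momentum. A minimal sketch for the harmonic oscillator H(q, p) = (p^2 + q^2)/2 (an illustrative test problem, not the paper's optimal control setting):

```python
# Illustrative sketch (not from the paper): one symplectic Euler step for
# the separable Hamiltonian H(q, p) = (p**2 + q**2) / 2.

def symplectic_euler_step(q, p, dt):
    p_new = p - dt * q        # p_{n+1} = p_n - dt * dH/dq(q_n)
    q_new = q + dt * p_new    # q_{n+1} = q_n + dt * dH/dp(p_{n+1})
    return q_new, p_new

def energy(q, p):
    return 0.5 * (p * p + q * q)

q, p = 1.0, 0.0
e0 = energy(q, p)
for _ in range(1000):
    q, p = symplectic_euler_step(q, p, 0.01)

# For a symplectic method the energy error stays bounded (O(dt)) instead of
# drifting, which is what makes the error density stable to estimate.
print(abs(energy(q, p) - e0))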

  18. Implementing of the multi-objective particle swarm optimizer and fuzzy decision-maker in exergetic, exergoeconomic and environmental optimization of a benchmark cogeneration system

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn; Babaie, Meisam; Farmani, Mohammad Reza

    2011-01-01

Multi-objective optimization of the design of a benchmark cogeneration system, namely the CGAM cogeneration system, is performed. In this approach, exergetic, exergoeconomic and environmental objectives are considered simultaneously. In this regard, the set of Pareto optimal solutions known as the Pareto frontier is obtained using the MOPSO (multi-objective particle swarm optimizer). The exergetic efficiency, as the exergetic objective, is maximized, while the unit cost of the system product and the cost of the environmental impact, as the exergoeconomic and environmental objectives respectively, are minimized. The economic model utilized in the exergoeconomic analysis is built on both a simple model (used in the original studies of the CGAM system) and comprehensive modeling, namely the TRR (total revenue requirement) method (used in sophisticated exergoeconomic analysis). Finally, a final optimal solution is selected from the Pareto frontier using a fuzzy decision-making process based on the Bellman-Zadeh approach, and the results are compared with corresponding results obtained from a traditional decision-making process. Further, results are compared with the corresponding performance of the base-case CGAM system and with the optimal designs of previous works, and discussed. -- Highlights: → A multi-objective optimization approach is implemented for a benchmark cogeneration system. → Objective functions based on environmental impact evaluation, thermodynamic analysis and economic analysis are formulated and optimized. → The particle swarm optimizer is implemented and its robustness compared with NSGA-II. → A final optimal configuration is found using various decision-making approaches. → Results are compared with previous works in the field.
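The Pareto frontier mentioned above can be sketched with a simple dominance filter (an illustrative, hypothetical example, not the authors' MOPSO implementation): each candidate design is scored on the three objectives, with the exergetic efficiency negated so that all objectives are minimized.

```python
# Hypothetical sketch: Pareto-frontier extraction for three minimized
# objectives (-exergetic_efficiency, product_cost, environmental_cost).
# The candidate values below are made up for illustration.

def dominates(a, b):
    """True if design a is at least as good as b in every objective and
    strictly better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontier(designs):
    """Keep every design that no other design dominates."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

candidates = [
    (-0.52, 9.8, 3.1),   # efficient but costly
    (-0.50, 9.0, 2.9),   # cheaper, slightly less efficient
    (-0.52, 10.5, 3.2),  # dominated by the first design
    (-0.48, 9.5, 3.5),   # dominated by the second design
]
print(pareto_frontier(candidates))  # the two non-dominated trade-offs
```

A decision-making step such as the Bellman-Zadeh fuzzy approach then picks a single compromise solution from this non-dominated set.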

  19. Modulation of EMG-EMG Coherence in a Choice Stepping Task

    Directory of Open Access Journals (Sweden)

    Ippei Nojima

    2018-02-01

The voluntary step execution task is a popular measure for identifying fall risk among elderly individuals in community settings, because most falls have been reported to occur during movement. However, the neurophysiological functions underlying this movement are not fully understood. Here, we used electromyography (EMG) to explore the relationship between EMG-EMG coherence, which reflects common oscillatory drive to motoneurons, and motor performance in two stepping tasks: a simple reaction time (SRT) task and a choice reaction time (CRT) task. Ten healthy elderly adults participated in the study. Participants took a single step forward in response to a visual imperative stimulus. EMG-EMG coherence was analyzed over the 1000 ms before presentation of the stimulus (stationary standing position) from the proximal and distal tibialis anterior (TA) and soleus (SOL) muscles. The main result was that all paired EMG-EMG coherences in the alpha and beta frequency bands were greater in the SRT task than in the CRT task. This finding suggests that the common oscillatory drive to the motoneurons during the SRT task occurred prior to taking a step, whereas the lower corticospinal activity during the CRT task prior to taking a step may indicate involvement of inhibitory activity, consistent with observations from our previous study (Watanabe et al., 2016). Furthermore, beta-band coherence within the TA tended to correlate positively with the number of performance errors, which are associated with fall risk, in the CRT task, suggesting that a reduction in the inhibitory activity may degrade stepping performance. These findings could advance understanding of the neurophysiological features of postural adjustments in elderly individuals.
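EMG-EMG coherence of the kind analyzed above is the magnitude-squared coherence between two signals, estimated by averaging cross- and auto-spectra over data segments. A stdlib-only sketch on synthetic signals (hypothetical data and parameter values, not the study's recordings or pipeline):

```python
import cmath
import math
import random

def dft_bin(segment, k):
    """DFT coefficient of `segment` at frequency bin k."""
    n = len(segment)
    return sum(x * cmath.exp(-2j * math.pi * k * i / n)
               for i, x in enumerate(segment))

def coherence_at_bin(x, y, seg_len, k):
    """Welch-style magnitude-squared coherence at one frequency bin,
    averaging cross- and auto-spectra over non-overlapping segments."""
    sxy = 0 + 0j
    sxx = syy = 0.0
    for start in range(0, len(x) - seg_len + 1, seg_len):
        fx = dft_bin(x[start:start + seg_len], k)
        fy = dft_bin(y[start:start + seg_len], k)
        sxy += fx * fy.conjugate()
        sxx += abs(fx) ** 2
        syy += abs(fy) ** 2
    return abs(sxy) ** 2 / (sxx * syy)

# Two noisy signals sharing a common 20 Hz (beta-band) drive, sampled at
# 200 Hz; with seg_len = 200, 20 Hz falls in frequency bin k = 20.
random.seed(0)
fs, seg_len, k = 200, 200, 20
common = [math.sin(2 * math.pi * 20 * t / fs) for t in range(2000)]
x = [c + 0.5 * random.gauss(0, 1) for c in common]
y = [c + 0.5 * random.gauss(0, 1) for c in common]
print(coherence_at_bin(x, y, seg_len, k))  # high at the shared frequency
```

Coherence near 1 at a given band indicates a strong common oscillatory drive to the two muscles; independent noise pushes it toward 0.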

  20. [Exploration of one-step preparation of Ganoderma lucidum multicomponent microemulsion].

    Science.gov (United States)

    He, Jun-Jie; Chen, Yan; Du, Meng; Cao, Wei; Yuan, Ling; Zheng, Li-Yan

    2013-03-01

To explore a one-step method for preparing a Ganoderma lucidum multicomponent microemulsion, the formulation of the microemulsion was optimized according to the dissolution characteristics of the triterpenes and polysaccharides in Ganoderma lucidum. The optimal blank microemulsion was used as a solvent in which Ganoderma lucidum powder was sonicated to prepare the multicomponent microemulsion, and its physicochemical properties were compared with those of a microemulsion made by the conventional method. The results showed that the multicomponent microemulsion had a particle size of (43.32 +/- 6.82) nm, a polydispersity index (PDI) of 0.173 +/- 0.025, and a zeta potential of -(3.98 +/- 0.82) mV. The contents of Ganoderma lucidum triterpenes and polysaccharides were (5.95 +/- 0.32) and (7.58 +/- 0.44) mg x mL(-1), respectively. Sonicating Ganoderma lucidum powder in the blank microemulsion thus yields the multicomponent microemulsion. Compared with the conventional method, this method is simple and low-cost, and is suitable for industrial production.