WorldWideScience

Sample records for optimize analysis effort

  1. Theoretical Aspects Regarding the Optimal Taxation of Effort With More Conditions

    Directory of Open Access Journals (Sweden)

    Prof. Dumitru MARIN PhD

    2017-09-01

    Full Text Available. In this article, the authors conduct an analysis of mathematical models for the optimal taxation of effort. Effort taxation must strike a balance between allocative efficiency and distribution; the task is to determine the optimal tax level under three conditions. The premise is that a shift to a lump-sum, single-rate levy is not feasible for wage compensation, a condition that must be kept in view. The mathematical model the authors propose for the optimal taxation of effort under multiple conditions is relevant and suggestive for the analysis. It is well known that, in order to increase tax revenue, taxes must inevitably be based on an unobservable variable, namely potential earnings or, most importantly, individual labor productivity. A problem of this sensitivity, the optimal taxation of effort under multiple conditions, needs to be analyzed through a mathematical-econometric model that captures both a minimum expectation threshold and a situation that depends on the economic outcome. The authors examine the existing variants and construct model-based alternatives that best address the problem of adverse selection, i.e., the balance between allocative efficiency and distribution. They analyze the optimization problem via the Lagrangian and its associated multiplier, highlighting the mathematical functions that can be used. Further, the authors consider the case of adverse selection under asymmetric information. To the objective function and budget constraint they add incentive restrictions, especially when income is taxed. Analyzing this hypothesis in depth, the authors propose and demonstrate a mathematical function suited to these adjustment needs that resolves the adverse selection. The way in which the authors suggest …
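
    As a concrete illustration of the Lagrangian machinery the abstract invokes, the sketch below solves a toy effort-taxation problem with sympy. The quasi-linear utility, the budget constraint, and all symbols are illustrative assumptions, not the authors' model.

```python
import sympy as sp

# Toy stand-in for the setting above: maximize utility u(c, e) over
# consumption c and effort e, with wage w and a tax rate t on effort income.
# All functional forms are illustrative, not the paper's model.
c, e, lam = sp.symbols('c e lam')
t, w = sp.symbols('t w', positive=True)

utility = c - e**2 / 2          # quasi-linear utility, quadratic effort cost
budget = (1 - t) * w * e - c    # net-of-tax wage income funds consumption

L = utility + lam * budget      # Lagrangian with multiplier lam
sol = sp.solve([sp.diff(L, c), sp.diff(L, e), budget], [c, e, lam], dict=True)[0]

print(sol[e])   # e* = (1 - t)*w : optimal effort falls as the tax rate rises
print(sol[c])   # c* = ((1 - t)*w)**2
```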

  2. Optimal Effort in Consumer Choice : Theory and Experimental Evidence for Binary Choice

    NARCIS (Netherlands)

    Conlon, B.J.; Dellaert, B.G.C.; van Soest, A.H.O.

    2001-01-01

    This paper develops a theoretical model of optimal effort in consumer choice. The model extends previous consumer choice models in that the consumer not only chooses a product, but also decides how much effort to apply to a given choice problem. The model yields a unique optimal level of effort, which …

  3. Estimation of total Effort and Effort Elapsed in Each Step of Software Development Using Optimal Bayesian Belief Network

    Directory of Open Access Journals (Sweden)

    Fatemeh Zare Baghiabad

    2017-09-01

    Full Text Available. Accuracy in estimating the effort needed for software development has made software effort estimation a challenging issue. Besides estimating total effort, determining the effort elapsed in each software development step is very important, because mistakes in enterprise resource planning can lead to project failure. In this paper, a Bayesian belief network is proposed based on effective components and the software development process. In this model, feedback loops between development steps are considered, with return rates that differ for each project. The different return rates make it possible to determine the percentage of effort elapsed in each software development step distinctly. Moreover, the error resulting from the optimized effort estimation is measured, and the optimal coefficients to modify the model are sought. The results of a comparison between the proposed model and other models showed that the model estimates total effort with high accuracy (a marginal error of about 0.114) and also estimates the effort elapsed in each software development step.
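
    As a rough illustration of the modeling idea (though not of the paper's feedback loops, since a standard belief network must stay acyclic), the sketch below builds a three-node effort network with the pgmpy library; the structure, node names and probabilities are invented, and the model class name may vary across pgmpy versions.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical three-node network: project size influences design effort,
# which in turn influences coding effort (all discretized as low/high).
model = BayesianNetwork([('size', 'design'), ('design', 'coding')])

cpd_size = TabularCPD('size', 2, [[0.6], [0.4]])     # P(low), P(high)
cpd_design = TabularCPD('design', 2,
                        [[0.8, 0.3],                  # P(design=low | size)
                         [0.2, 0.7]],
                        evidence=['size'], evidence_card=[2])
cpd_coding = TabularCPD('coding', 2,
                        [[0.7, 0.2],
                         [0.3, 0.8]],
                        evidence=['design'], evidence_card=[2])

model.add_cpds(cpd_size, cpd_design, cpd_coding)
assert model.check_model()

# Query the coding-effort distribution for a large project.
infer = VariableElimination(model)
print(infer.query(['coding'], evidence={'size': 1}))
```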

  4. Useful Method To Optimize The Rehabilitation Effort At A SCI Rehabilitation Centre

    DEFF Research Database (Denmark)

    Steensgaard, Randi; Dahl Hoffmann, Dorte

    “Useful Method To Optimize The Rehabilitation Effort At A SCI Rehabilitation Centre.” The Nordic Spinal Cord Society (NoSCoS) Meeting, Trondheim.

  5. Optimization of TTEthernet Networks to Support Best-Effort Traffic

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Pop, Paul

    2014-01-01

    This paper focuses on the optimization of the TTEthernet communication protocol, which offers three traffic classes: time-triggered (TT), sent according to static schedules; rate-constrained (RC), which has bounded end-to-end latency; and best-effort (BE), the classic Ethernet traffic with no timing guarantees. In our earlier work we proposed an optimization approach named DOTTS that performs the routing, scheduling and packing/fragmenting of TT and RC messages such that the TT and RC traffic is schedulable. Although backwards compatibility with classic Ethernet networks is one of TTEthernet …

  6. Optimal pricing and promotional effort control policies for a new product growth in segmented market

    Directory of Open Access Journals (Sweden)

    Jha P.C.

    2015-01-01

    Full Text Available. Market segmentation enables marketers to understand and serve customers more effectively, thereby improving a company's competitive position. In this paper, we study the impact of price and promotion effort on the evolution of sales intensity in a segmented market, in order to obtain the optimal price and promotion-effort policies. The evolution of the sales rate for each segment is developed under the assumption that the marketer may use both differentiated and mass-market promotion effort to influence the uncaptured market potential. An optimal control model is formulated, and a solution method using the Maximum Principle is discussed. The model is extended to incorporate a budget constraint, and its applicability is illustrated by a numerical example. Since only discrete-time data are available, the formulated model is discretized, and a differential evolution algorithm is used to solve the discrete model.
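
    To illustrate the final step, the sketch below applies SciPy's differential evolution to a small, made-up discretized pricing/promotion problem; the sales-response form, constants and bounds are assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative discretized control problem: choose price p_t and promotion
# effort u_t over T periods to maximize profit from a saturating sales
# response drawing on a finite market potential N. Constants are invented.
T, N = 8, 1000.0

def neg_profit(z):
    p, u = z[:T], z[T:]
    remaining, profit = N, 0.0
    for k in range(T):
        sales = remaining * (0.05 + 0.3 * u[k] / (1 + u[k])) * np.exp(-0.1 * p[k])
        sales = min(sales, remaining)
        profit += p[k] * sales - 50.0 * u[k]   # revenue minus promotion cost
        remaining -= sales
    return -profit                             # DE minimizes

bounds = [(1.0, 20.0)] * T + [(0.0, 10.0)] * T  # price bounds, then promotion bounds
res = differential_evolution(neg_profit, bounds, seed=0, maxiter=200)
print(-res.fun, res.x[:T].round(2))
```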

  7. Optimizing and joining future safeguards efforts by 'remote inspections'

    International Nuclear Information System (INIS)

    Zendel, M.; Khlebnikov, N.

    2009-01-01

    Full-text: Remote inspections have a large potential to save inspection effort in future routine safeguards implementation. Such inspections involve remote activities based on the analysis of data acquired in the field without the physical presence of an inspector, shifting inspectors' priorities further toward unannounced inspections, complementary access activities and data evaluation. Large, automated and complex facilities require facility-resident, facility-specific safeguards equipment systems with features for unattended and remotely controlled operation, integrated into the nuclear process. In many instances, the joint use of such equipment with the SSAC/RSAC and the operator is foreseen to achieve affordable effectiveness with a minimum level of intrusiveness into facility operation. Where independent conclusions can be achieved by this approach, the IAEA would make full use of the SSAC/RSAC, involving State inspectors and/or facility operators to operate inspection systems under remotely controlled IAEA mechanisms. These mechanisms would include documented procedures for routine joint use, defining arrangements for data sharing, physical security and authentication mechanisms, recalibration and use of standards and software, maintenance, repair, storage and transportation. A State's cooperation and willingness to implement such measures, when requested and properly justified by the IAEA, will demonstrate its commitment to full transparency in its nuclear activities. Examples of existing remote inspection activities, including joint-use activities, will be discussed. The future potential of remote inspections will be assessed, considering technical developments and increased needs for process monitoring. Enhanced cooperation with the SSAC/RSAC within the framework of remote inspections could further optimize the IAEA's inspection efforts while maintaining effective safeguards implementation. (author)

  8. Rotor design optimization using a free wake analysis

    Science.gov (United States)

    Quackenbush, Todd R.; Boschitsch, Alexander H.; Wachspress, Daniel A.; Chua, Kiat

    1993-01-01

    The aim of this effort was to develop a comprehensive performance optimization capability for tiltrotor and helicopter blades. The analysis incorporates the validated EHPIC (Evaluation of Hover Performance using Influence Coefficients) model of helicopter rotor aerodynamics within a general linear/quadratic programming algorithm that allows optimization using a variety of objective functions involving the performance. The resulting computer code, EHPIC/HERO (HElicopter Rotor Optimization), improves upon several features of the previous EHPIC performance model and allows optimization utilizing a wide spectrum of design variables, including twist, chord, anhedral, and sweep. The new analysis supports optimization of a variety of objective functions, including weighted measures of rotor thrust, power, and propulsive efficiency. The fundamental strength of the approach is that an efficient search for improved versions of the baseline design can be carried out while retaining the demonstrated accuracy inherent in the EHPIC free wake/vortex lattice performance analysis. Sample problems are described that demonstrate the success of this approach for several representative rotor configurations in hover and axial flight. Features that were introduced to convert earlier demonstration versions of this analysis into a generally applicable tool for researchers and designers are also discussed.

  9. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    Science.gov (United States)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use
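
    The file-wrapping pattern described here can be sketched with OpenMDAO's ExternalCodeComp. The component, the 'sizing_tool' command, the file formats and the variable names below are hypothetical placeholders, not the actual RCOTOOLS/NDARC interface.

```python
import openmdao.api as om

# Hedged sketch of a file-wrapped external analysis code.
class VehicleSizingComp(om.ExternalCodeComp):
    def setup(self):
        self.add_input('rotor_radius', val=8.0)
        self.add_output('gross_weight', val=0.0)
        self.options['command'] = ['sizing_tool', 'case.inp']  # hypothetical CLI

    def setup_partials(self):
        self.declare_partials('*', '*', method='fd')  # finite-difference derivatives

    def compute(self, inputs, outputs):
        # 1) write the design-variable value into the tool's input file
        with open('case.inp', 'w') as f:
            f.write(f"ROTOR_RADIUS = {inputs['rotor_radius'][0]}\n")
        # 2) run the external code
        super().compute(inputs, outputs)
        # 3) extract the response variable from the tool's output file
        with open('case.out') as f:
            for line in f:
                if line.startswith('GROSS_WEIGHT'):
                    outputs['gross_weight'] = float(line.split('=')[1])
```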

  10. Analysis of dismantling possibility and unloading efforts of fuel assemblies from core of WWER

    International Nuclear Information System (INIS)

    Danilov, V.; Dobrov, V.; Semishkin, V.; Vasilchenko, I.

    2006-01-01

    The computational methods for determining the optimal dismantling sequence of fuel assemblies (FA) from a WWER core after different operating periods and accident conditions are considered. The fuel dismantling sequence algorithms are constructed both from an analysis of mutual spacer-grid overlaps of adjacent fuel assemblies and from a numerical structural analysis of the efforts required to remove an FA as it is heaved from the core. Computational results for the core dismantling sequence after a 3-year operating period and a LB LOCA are presented in the paper.

  11. Inpo/industry job and task analysis efforts

    International Nuclear Information System (INIS)

    Wigley, W.W.

    1985-01-01

    One of the goals of INPO is to develop and coordinate industrywide programs to improve the education, training and qualification of nuclear utility personnel. To accomplish this goal, INPO's Training and Education Division: conducts periodic evaluations of industry training programs; provides assistance to the industry in developing training programs; manages the accreditation of utility training programs. These efforts are aimed at satisfying the need for training programs for nuclear utility personnel to be performance-based. Performance-based means that training programs provide an incumbent with the skills and knowledge required to safely perform the job. One of the ways that INPO has provided assistance to the industry is through the industrywide job and task analysis effort. I will discuss the job analysis and task analysis processes, the current status of JTA efforts, JTA products and JTA lessons learned

  12. Development of an optimized procedure bridging design and structural analysis codes for the automatized design of the SMART

    International Nuclear Information System (INIS)

    Kim, Tae Wan; Park, Keun Bae; Choi, Suhn; Kim, Kang Soo; Jeong, Kyeong Hoon; Lee, Gyu Mahn

    1998-09-01

    In this report, an optimized design and analysis procedure is established for application to the development of SMART (System-integrated Modular Advanced ReacTor). The purpose of the optimized procedure is to minimize time consumption and engineering effort by streamlining the design and feedback interactions. To achieve this goal, the data and information generated through design development should be transferred directly to the analysis programs with minimal manual operation. Verification of the design concept requires considerable effort, since communication between design and analysis involves a time-consuming stage of converting input information. In this report, an optimized procedure bridging the design and analysis stages is established utilizing IDEAS, ABAQUS and ANSYS. (author). 3 refs., 2 tabs., 5 figs

  13. When is a species declining? Optimizing survey effort to detect population changes in reptiles.

    Directory of Open Access Journals (Sweden)

    David Sewell

    Full Text Available. Biodiversity monitoring programs need to be designed so that population changes can be detected reliably. This can be problematic for species that are cryptic and have imperfect detection. We used occupancy modeling and power analysis to optimize the survey design for reptile monitoring programs in the UK. Surveys were carried out six times a year in 2009-2010 at multiple sites. Four of the six species--grass snake, adder, common lizard, slow-worm--were encountered during every survey from March-September. The exceptions were the two rarest species--sand lizard and smooth snake--which were not encountered in July 2009 and March 2010, respectively. The most frequently encountered and most easily detected species was the slow-worm. For the four widespread reptile species in the UK, three to four survey visits that used a combination of directed transect walks and artificial cover objects resulted in 95% certainty that a species would be detected if present. Using artificial cover objects was an effective detection method for most species, considerably increased the detection rate of some, and reduced misidentifications. To achieve 85% power to detect a decline in any of the four widespread species when the true decline is 15%, three surveys at a total of 886 sampling sites, or four surveys at a total of 688 sites, would be required. The sampling effort needed falls to 212 sites surveyed three times, or 167 sites surveyed four times, if the target is to detect a true decline of 30% with the same power. The results obtained can be used to refine reptile survey protocols in the UK and elsewhere. On a wider scale, the occupancy study design approach can be used to optimize survey effort and help set targets for conservation outcomes for regional or national biodiversity assessments.
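
    The "three to four visits for 95% certainty" result follows from simple detection arithmetic: with per-visit detection probability p, the chance of at least one detection in k visits is 1 - (1 - p)^k. A minimal sketch, with p = 0.6 as an assumed value rather than one of the paper's estimates:

```python
import math

# Visits needed so that 1 - (1 - p)**k >= target certainty.
p, target = 0.6, 0.95
k = math.ceil(math.log(1 - target) / math.log(1 - p))
print(k, 1 - (1 - p) ** k)   # -> 4 visits give >= 95% detection probability
```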

  14. Initiating statistical maintenance optimization

    International Nuclear Information System (INIS)

    Doyle, E. Kevin; Tuomi, Vesa; Rowley, Ian

    2007-01-01

    Since the 1980s, maintenance optimization has been centered on various formulations of Reliability Centered Maintenance (RCM). Several such optimization techniques have been implemented at the Bruce Nuclear Station. Further cost refinement of the Station's preventive maintenance strategy includes evaluation of statistical optimization techniques. A review of successful pilot efforts in this direction is provided, as well as initial work with graphical analysis. The present situation regarding data sourcing, the principal impediment to the use of stochastic methods in previous years, is discussed. The use of Crow/AMSAA (Army Materiel Systems Analysis Activity) plots is demonstrated from the point of view of justifying expenditures on optimization efforts. (author)
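
    A Crow/AMSAA analysis assumes cumulative failures grow as N(t) = λt^β, so the plot is linear on log-log axes and the fitted slope β indicates reliability growth (β < 1) or degradation (β > 1). A minimal fitting sketch with made-up failure data:

```python
import numpy as np

# Made-up cumulative failure history for a fleet of components.
t = np.array([100, 300, 700, 1500, 3000.0])   # cumulative operating hours
n = np.array([4, 9, 16, 26, 40.0])            # cumulative failures

# log N = log(lambda) + beta * log(t): fit a line in log-log space.
beta, log_lam = np.polyfit(np.log(t), np.log(n), 1)
print(f"beta = {beta:.2f}, lambda = {np.exp(log_lam):.3f}")
# beta < 1 -> failure intensity is decreasing; maintenance changes are paying off.
```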

  15. APPLYING TEACHING-LEARNING TO ARTIFICIAL BEE COLONY FOR PARAMETER OPTIMIZATION OF SOFTWARE EFFORT ESTIMATION MODEL

    Directory of Open Access Journals (Sweden)

    THANH TUNG KHUAT

    2017-05-01

    Full Text Available. Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it is still deficient in speed of convergence and quality of solutions. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning-Based Optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. Among the many methods for effort estimation, COCOMO II is one of the most widely used models. However, this model has some limitations because its parameters have not been optimized. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. Experiments were conducted on a NASA software project dataset, and the obtained results indicate that the improved parameters provide better estimation capabilities than the original COCOMO II model.
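
    For context, the COCOMO II post-architecture effort equation whose constants and multipliers are being tuned is PM = A · KSLOC^E · ΠEM_i, with E = B + 0.01 · ΣSF_j. A minimal sketch using the nominal COCOMO II.2000 constants (A = 2.94, B = 0.91) and invented scale-factor and effort-multiplier ratings:

```python
import math

A, B = 2.94, 0.91                                # nominal COCOMO II.2000 constants
ksloc = 50.0                                     # project size in thousands of SLOC
scale_factors = [3.72, 3.04, 4.24, 3.29, 4.68]   # illustrative SF ratings
effort_multipliers = [1.10, 0.88, 1.00, 1.15]    # illustrative EM ratings

E = B + 0.01 * sum(scale_factors)
pm = A * ksloc**E * math.prod(effort_multipliers)
print(f"E = {E:.3f}, estimated effort = {pm:.1f} person-months")
```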

  16. Optimizing Federal Fleet Vehicle Acquisitions: An Eleven-Agency FY 2012 Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Singer, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Daley, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-02-01

    This report focuses on the National Renewable Energy Laboratory's (NREL) fiscal year (FY) 2012 effort that used the NREL Optimal Vehicle Acquisition (NOVA) analysis to identify optimal vehicle acquisition recommendations for eleven diverse federal agencies. Results of the study show that by following a vehicle acquisition plan that maximizes the reduction in greenhouse gas (GHG) emissions, significant progress is also made toward the mandated complementary goals of acquiring alternative fuel vehicles, petroleum use reduction, and alternative fuel use increase.

  17. Analysis Efforts Supporting NSTX Upgrades

    International Nuclear Information System (INIS)

    Zhang, H.; Titus, P.; Rogoff, P.; Zolfaghari, A.; Mangra, D.; Smith, M.

    2010-01-01

    The National Spherical Torus Experiment (NSTX) is a low-aspect-ratio, spherical torus (ST) configuration device located at Princeton Plasma Physics Laboratory (PPPL). The device is presently being upgraded to enhance its physics capability by doubling the TF field to 1 Tesla and increasing the plasma current to 2 mega-amperes. The upgrades include a replacement of the centerstack and the addition of a second neutral beam. The upgrade analyses have two missions: the first is to support the design of new components, principally the centerstack; the second is to qualify existing NSTX components for the higher loads, which will increase by a factor of four. Cost efficiency was a design goal both for qualifying new equipment and for reanalyzing existing components. Showing that older components can sustain the increased loads has been a challenging effort, in which designs had to be developed that would limit loading on weaker components and minimize the extent of modifications needed. Two areas representing this effort have been chosen for more detailed description: analysis of the current distribution in the new TF inner legs, and analysis of the out-of-plane support of the existing TF outer legs.

  18. Listening Effort With Cochlear Implant Simulations

    NARCIS (Netherlands)

    Pals, Carina; Sarampalis, Anastasios; Başkent, Deniz

    2013-01-01

    Purpose: Fitting a cochlear implant (CI) for optimal speech perception does not necessarily optimize listening effort. This study aimed to show that listening effort may change between CI processing conditions for which speech intelligibility remains constant. Method: Nineteen normal-hearing

  19. Optimal Testing Effort Control for Modular Software System Incorporating The Concept of Independent and Dependent Faults: A Control Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Kuldeep CHAUDHARY

    2012-07-01

    Full Text Available. In this paper, we discuss a modular software system for Software Reliability Growth Models using testing effort, and study the optimal testing-effort intensity for each module. The main goal is to minimize the cost of software development when a budget constraint on testing expenditure is given. We discuss the evolution of fault-removal dynamics, incorporating the idea of leading/independent and dependent faults in a modular software system, under the assumption that testing of each of the modules is done independently. The problem is formulated as an optimal control problem, and the solution to the proposed problem is obtained by using the Pontryagin Maximum Principle.
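
    A static, simplified stand-in for the allocation problem (the paper solves a richer optimal-control formulation via the Maximum Principle): with an exponential reliability-growth model per module, allocate a fixed testing-effort budget to minimize remaining faults. All parameters below are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Remaining faults in module i after spending effort w_i: a_i * exp(-b_i * w_i).
a = np.array([80.0, 50.0, 30.0])     # initial fault content per module
b = np.array([0.010, 0.020, 0.015])  # per-module fault-detection efficiency
W = 300.0                            # total testing-effort budget

obj = lambda w: np.sum(a * np.exp(-b * w))
cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - W},)
res = minimize(obj, x0=np.full(3, W / 3), bounds=[(0, W)] * 3, constraints=cons)
print(res.x.round(1), obj(res.x).round(1))   # optimal split and residual faults
```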

  20. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    Science.gov (United States)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and a constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
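
    A minimal OpenMDAO problem illustrates the framework mechanics the paper builds on (component, driver, design variables, objective); the paraboloid function is the standard toy example, not the aircraft/engine model discussed above.

```python
import openmdao.api as om

prob = om.Problem()
prob.model.add_subsystem('parab',
                         om.ExecComp('f = (x - 3)**2 + x*y + (y + 4)**2 - 3'),
                         promotes=['*'])
prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')
prob.model.add_design_var('x', lower=-50, upper=50)
prob.model.add_design_var('y', lower=-50, upper=50)
prob.model.add_objective('f')

prob.setup()
prob.run_driver()
print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))  # ~6.67, -7.33, -27.33
```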

  1. Summary of process research analysis efforts

    Science.gov (United States)

    Burger, D. R.

    1985-01-01

    A summary of solar-cell process research analysis efforts is presented. Process design and cell design are interactive efforts in which technology from integrated-circuit processing and other processes is blended. The primary factors that control cell efficiency are: (1) the bulk parameters of the available sheet material, (2) the retention and enhancement of these bulk parameters, and (3) the cell design and the cost to produce versus the finished cell's performance. The process sequences need to be tailored to be compatible with the sheet form, the cell shape, and the processing equipment. New process options that require further evaluation and utilization are lasers, robotics, thermal pulse techniques, and new materials. There are numerous process control techniques that can be adapted and used to improve product uniformity and reduce costs. Two factors that can lead to longer-life modules are the use of solar-cell diffusion barriers and improved encapsulation.

  2. Maintenance optimization after RCM

    International Nuclear Information System (INIS)

    Doyle, E.K.; Lee, C.-G.; Cho, D.

    2005-01-01

    Variant forms of RCM (Reliability Centered Maintenance) have been the maintenance optimization tools of choice in industry for the last 20 years. Several such optimization techniques have been implemented at the Bruce Nuclear Station. Further cost refinement of the Station's preventive maintenance strategy, whereby decisions are based on statistical analysis of historical failure data, is now being evaluated. The evaluation includes a requirement to demonstrate that earlier optimization projects have had long-term positive impacts. This proved to be a significant challenge. Eventually a methodology was developed using Crow/AMSAA (Army Materiel Systems Analysis Activity) plots to justify expenditures on further optimization efforts. (authors)

  3. Approximate Reanalysis in Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole

    2009-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...

  4. EABOT - Energetic analysis as a basis for robust optimization of trigeneration systems by linear programming

    International Nuclear Information System (INIS)

    Piacentino, A.; Cardona, F.

    2008-01-01

    The optimization of synthesis, design and operation in trigeneration systems for building applications is a quite complex task, due to the high number of decision variables, the presence of irregular heat, cooling and electric load profiles, and the variable electricity price. Consequently, computer-aided techniques are usually adopted to achieve the optimal solution, based either on iterative techniques, linear or non-linear programming, or evolutionary search. Large efforts have been made to improve algorithm efficiency, resulting in increasingly rapid convergence to the optimal solution and in reduced calculation time; robust algorithms have also been formulated, assuming stochastic behaviour for energy loads and prices. This paper is based on the assumption that margins for improvement in the optimization of trigeneration systems still exist, which requires an in-depth understanding of the plant's energetic behaviour. Robustness in the optimization of trigeneration systems has more to do with 'correct and comprehensive' modelling than with 'efficient' modelling, demanding more of energy specialists than of experts in efficient algorithms. With reference to a mixed-integer linear programming model implemented in MatLab for a trigeneration system including a pressurized (medium-temperature) heat storage, the relevant contributions of thermoeconomics and energo-environmental analysis in the phases of mathematical modelling and code testing are shown.
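
    A toy single-period dispatch LP conveys the flavor of the paper's MILP (without the integer on/off variables): meet electric and heat demands from a CHP unit, an auxiliary boiler, and grid import at minimum cost. All coefficients below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [chp_fuel, boiler_fuel, grid_import]  (kWh each)
c = np.array([0.30, 0.25, 0.18])            # unit costs

# Assumed efficiencies: CHP yields 0.35 kWh_el and 0.45 kWh_th per kWh fuel;
# the boiler yields 0.9 kWh_th per kWh fuel; grid import is pure electricity.
A_eq = np.array([[0.35, 0.0, 1.0],          # electricity balance
                 [0.45, 0.9, 0.0]])         # heat balance
b_eq = np.array([100.0, 120.0])             # electric and heat demands

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x.round(1), round(res.fun, 2))    # dispatch and minimum cost
```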

  5. A traditional and a less-invasive robust design: choices in optimizing effort allocation for seabird population studies

    Science.gov (United States)

    Converse, S.J.; Kendall, W.L.; Doherty, P.F.; Naughton, M.B.; Hines, J.E.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    For many animal populations, one or more life stages are not accessible to sampling, and therefore an unobservable state is created. For colonially-breeding populations, this unobservable state could represent the subset of adult breeders that have foregone breeding in a given year. This situation applies to many seabird populations, notably albatrosses, where skipped breeders are either absent from the colony, or are present but difficult to capture or correctly assign to breeding state. Kendall et al. have proposed design strategies for investigations of seabird demography where such temporary emigration occurs, suggesting the use of the robust design to permit the estimation of time-dependent parameters and to increase the precision of estimates from multi-state models. A traditional robust design, where animals are subject to capture multiple times in a sampling season, is feasible in many cases. However, due to concerns that multiple captures per season could cause undue disturbance to animals, Kendall et al. developed a less-invasive robust design (LIRD), where initial captures are followed by an assessment of the ratio of marked-to-unmarked birds in the population or sampled plot. This approach has recently been applied in the Northwestern Hawaiian Islands to populations of Laysan (Phoebastria immutabilis) and black-footed (P. nigripes) albatrosses. In this paper, we outline the LIRD and its application to seabird population studies. We then describe an approach to determining optimal allocation of sampling effort in which we consider a non-robust design option (nRD), and variations of both the traditional robust design (RD), and the LIRD. Variations we considered included the number of secondary sampling occasions for the RD and the amount of total effort allocated to the marked-to-unmarked ratio assessment for the LIRD. We used simulations, informed by early data from the Hawaiian study, to address optimal study design for our example cases. We found that

  6. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design--a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  7. New Mexico district work-effort analysis computer program

    Science.gov (United States)

    Hiss, W.L.; Trantolo, A.P.; Sparks, J.L.

    1972-01-01

    The computer program (CAN 2) described in this report is one of several related programs used in the New Mexico District cost-analysis system. The work-effort information used in these programs is accumulated and entered to the nearest hour on forms completed by each employee. Tabulating cards are punched directly from these forms after visual examinations for errors are made. Reports containing detailed work-effort data itemized by employee within each project and account and by account and project for each employee are prepared for both current-month and year-to-date periods by the CAN 2 computer program. An option allowing preparation of reports for a specified 3-month period is provided. The total number of hours worked on each account and project and a grand total of hours worked in the New Mexico District is computed and presented in a summary report for each period. Work effort not chargeable directly to individual projects or accounts is considered as overhead and can be apportioned to the individual accounts and projects on the basis of the ratio of the total hours of work effort for the individual accounts or projects to the total New Mexico District work effort at the option of the user. The hours of work performed by a particular section, such as General Investigations or Surface Water, are prorated and charged to the projects or accounts within the particular section. A number of surveillance or buffer accounts are employed to account for the hours worked on special events or on those parts of large projects or accounts that require a more detailed analysis. Any part of the New Mexico District operation can be separated and analyzed in detail by establishing an appropriate buffer account. With the exception of statements associated with word size, the computer program is written in FORTRAN IV in a relatively low and standard language level to facilitate its use on different digital computers. The program has been run only on a Control Data Corporation

  8. TRU Waste Management Program. Cost/schedule optimization analysis

    International Nuclear Information System (INIS)

    Detamore, J.A.; Raudenbush, M.H.; Wolaver, R.W.; Hastings, G.A.

    1985-10-01

    This Current Year Work Plan presents in detail a description of the activities to be performed by the Joint Integration Office Rockwell International (JIO/RI) during FY86. It breaks down the activities into two major work areas: Program Management and Program Analysis. Program Management is performed by the JIO/RI by providing technical planning and guidance for the development of advanced TRU waste management capabilities. This includes equipment/facility design, engineering, construction, and operations. These functions are integrated to allow transition from interim storage to final disposition. JIO/RI tasks include program requirements identification, long-range technical planning, budget development, program planning document preparation, task guidance development, task monitoring, task progress information gathering and reporting to DOE, interfacing with other agencies and DOE lead programs, integrating public involvement with program efforts, and preparation of reports for DOE detailing program status. Program Analysis is performed by the JIO/RI to support identification and assessment of alternatives, and development of long-term TRU waste program capabilities. These analyses include short-term analyses in response to DOE information requests, along with performing an RH Cost/Schedule Optimization report. Systems models will be developed, updated, and upgraded as needed to enhance JIO/RI's capability to evaluate the adequacy of program efforts in various fields. A TRU program data base will be maintained and updated to provide DOE with timely responses to inventory related questions

  9. Aero-Acoustic-Structural Optimization Analysis and Testing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal effort is concerned with the development of a novel multidisciplinary optimization scheme and computer software for the effective design of advanced...

  10. Incentive Design and Mis-Allocated Effort

    OpenAIRE

    Schnedler, Wendelin

    2013-01-01

    Incentives often distort behavior: they induce agents to exert effort, but this effort is not employed optimally. This paper proposes a theory of incentive design allowing for such distorted behavior. At the heart of the theory is a trade-off between getting the agent to exert effort and ensuring that this effort is used well. The theory covers various moral-hazard models, ranging from traditional single-task to multi-task models. It also provides, for the first time, a formalization and proof...

  11. The cost of model reference adaptive control - Analysis, experiments, and optimization

    Science.gov (United States)

    Messer, R. S.; Haftka, R. T.; Cudney, H. H.

    1993-01-01

    In this paper the performance of Model Reference Adaptive Control (MRAC) is studied in numerical simulations and verified experimentally, with the objective of understanding how differences between the plant and the reference model affect the control effort. MRAC is applied analytically and experimentally to a single-degree-of-freedom system, and analytically to a MIMO system with controlled differences between the model and the plant. It is shown that the control effort is sensitive to differences between the plant and the reference model. The effects of increased damping in the reference model are considered, and it is shown that requiring the controller to provide increased damping actually decreases the required control effort when differences between the plant and reference model exist. This result is useful because one of the first attempts to counteract the increased control effort due to differences between the plant and reference model might be to require less damping; however, this would actually increase the control effort. Optimization of weighting matrices is shown to help reduce the increase in required control effort. However, it was found that the optimization eventually resulted in a design that required an extremely high sampling rate for successful realization.

  12. Ground Vehicle System Integration (GVSI) and Design Optimization Model

    National Research Council Canada - National Science Library

    Horton, William

    1996-01-01

    This report documents the Ground Vehicle System Integration (GVSI) and Design Optimization Model. GVSI is a top-level analysis tool designed to support engineering tradeoff studies and vehicle design optimization efforts...

  13. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  14. Optimizing protection efforts for amphibian conservation in Mediterranean landscapes

    Science.gov (United States)

    García-Muñoz, Enrique; Ceacero, Francisco; Carretero, Miguel A.; Pedrajas-Pulido, Luis; Parra, Gema; Guerrero, Francisco

    2013-05-01

    Amphibians epitomize the modern biodiversity crisis and attract great attention from the scientific community, since a complex puzzle of factors influences their disappearance. These factors, however, are multiple and spatially variable, and the decline in each locality is due to a particular combination of causes. This study presents a suitable statistical procedure to determine threats to amphibian species in medium-sized administrative areas. In our case study, ten biological and ecological variables liable to affect the survival of 15 amphibian species were categorized and reduced through Principal Component Analysis. The principal components extracted were related to ecological plasticity, reproductive potential, and specificity of breeding habitats. Finally, the factor scores of the species were joined in a presence-absence matrix that provides the information needed to identify where, and why, conservation management is required. In summary, this methodology provides the necessary information to maximize the benefits of conservation measures in small areas by identifying which ecological factors need management effort and where we should focus it.
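
    The variable-reduction step can be sketched with scikit-learn; random integers stand in for the paper's categorized 15-species x 10-variable matrix.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows are species, columns are categorized biological/ecological variables.
rng = np.random.default_rng(0)
traits = rng.integers(1, 5, size=(15, 10)).astype(float)

pca = PCA(n_components=3)
scores = pca.fit_transform(traits)      # factor scores per species
print(pca.explained_variance_ratio_)    # candidate axes: plasticity, reproduction, ...
# 'scores' can then be joined with a site x species presence-absence matrix
# to flag where, and why, management effort is needed.
```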

  15. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    … cache behavior could only be measured reliably in the aggregate across tens or hundreds of thousands of instructions. With the newest iteration of PEBS technology, cache events can be tied to a tuple of instruction pointer, target address (for both loads and stores), memory hierarchy, and observed latency. With this information we can now begin asking questions regarding the efficiency of not only regions of code, but how these regions interact with particular data structures and how these interactions evolve over time. In the short term, this information will be vital for performance analysts understanding and optimizing the behavior of their codes for the memory hierarchy. In the future, we can begin to ask how data layouts might be changed to improve performance and, for a particular application, what the theoretical optimal performance might be. The overall benefit to be produced by this effort was a commercial-quality, easy-to-use and scalable performance tool that will allow both beginner and experienced parallel programmers to automatically tune their applications for optimal cache usage. Effective use of such a tool can literally save weeks of performance-tuning effort. Easy to use: with the proposed innovations, finding and fixing memory performance issues would be more automated, hiding most to all of the performance-engineer expertise "under the hood" of the Open|SpeedShop performance tool. One of the biggest public benefits from the proposed innovations is that they make performance analysis usable by a larger group of application developers. Intuitive reporting of results: the Open|SpeedShop performance analysis tool has a rich set of intuitive, yet detailed reports for presenting performance results to application developers. Our goal was to leverage this existing technology to present the results from our memory-performance addition to Open|SpeedShop. Suitable for experts as well as novices: application performance is getting more difficult …

  16. A model of reward- and effort-based optimal decision making and motor control.

    Directory of Open Access Journals (Sweden)

    Lionel Rigoux

    Full Text Available. Costs (e.g., energetic expenditure) and benefits (e.g., food) are central determinants of behavior. In ecology and economics, they are combined to form a utility function which is maximized to guide choices. This principle is widely used in neuroscience as a normative model of decision and action, but current versions of this model fail to consider how decisions are actually converted into actions (i.e., the formation of trajectories). Here, we describe an approach where decision making and motor control are optimal, iterative processes derived from the maximization of the discounted, weighted difference between expected rewards and foreseeable motor efforts. The model accounts for decision making in cost/benefit situations, and for detailed characteristics of control and goal tracking in realistic motor tasks. As a normative construction, the model is relevant to address the neural bases and pathological aspects of decision making and motor control.

  17. PRODUCT OPTIMIZATION METHOD BASED ON ANALYSIS OF OPTIMAL VALUES OF THEIR CHARACTERISTICS

    Directory of Open Access Journals (Sweden)

    Constantin D. STANESCU

    2016-05-01

    Full Text Available. The paper presents an original method of optimizing products based on the analysis of the optimal values of their characteristics. The optimization method comprises a statistical model and an analytical model. With this original method, one can easily and quickly obtain the optimal product or material.

  18. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  19. Phase transitions in least-effort communications

    International Nuclear Information System (INIS)

    Prokopenko, Mikhail; Ay, Nihat; Obst, Oliver; Polani, Daniel

    2010-01-01

    We critically examine a model that attempts to explain the emergence of power laws (e.g., Zipf's law) in human language. The model is based on the principle of least effort in communications—specifically, the overall effort is balanced between the speaker effort and listener effort, with some trade-off. It has been shown that an information-theoretic interpretation of this principle is sufficiently rich to explain the emergence of Zipf's law in the vicinity of the transition between referentially useless systems (one signal for all referable objects) and indexical reference systems (one signal per object). The phase transition is defined in the space of communication accuracy (information content) expressed in terms of the trade-off parameter. Our study explicitly solves the continuous optimization problem, subsuming a recent, more specific result obtained within a discrete space. The obtained results contrast Zipf's law found by heuristic search (that attained only local minima) in the vicinity of the transition between referentially useless systems and indexical reference systems, with an inverse-factorial (sub-logarithmic) law found at the transition that corresponds to global minima. The inverse-factorial law is observed to be the most representative frequency distribution among optimal solutions

  20. Dynameomics: a multi-dimensional analysis-optimized database for dynamic protein data.

    Science.gov (United States)

    Kehl, Catherine; Simms, Andrew M; Toofanny, Rudesh D; Daggett, Valerie

    2008-06-01

    The Dynameomics project is our effort to characterize the native-state dynamics and folding/unfolding pathways of representatives of all known protein folds by way of molecular dynamics simulations, as described by Beck et al. (in Protein Eng. Des. Select., the first paper in this series). The data produced by these simulations are highly multidimensional in structure and multiple terabytes in size. Both of these features present significant challenges for storage, retrieval and analysis. For optimal data modeling and flexibility, we needed a platform that supported both multidimensional indices and hierarchical relationships between related types of data, and that could be integrated within our data warehouse, as described in the accompanying paper directly preceding this one. For these reasons, we have chosen On-line Analytical Processing (OLAP), a multi-dimensional analysis-optimized database, as an analytical platform for these data. OLAP is a mature technology in the financial sector, but it has not been used extensively for scientific analysis. Our project is furthermore unusual for its focus on the multidimensional and analytical capabilities of OLAP rather than its aggregation capacities. The dimensional data model and hierarchies are very flexible. The query language is concise for complex analysis and rapid data retrieval. OLAP shows great promise for dynamic protein analysis in bioengineering and biomedical applications. In addition, OLAP may have similar potential for other scientific and engineering applications involving large and complex datasets.

  1. Conference on Convex Analysis and Global Optimization

    CERN Document Server

    Pardalos, Panos

    2001-01-01

    There has been much recent progress in global optimization algorithms for nonconvex continuous and discrete problems from both a theoretical and a practical perspective. Convex analysis plays a fundamental role in the analysis and development of global optimization algorithms. This is due essentially to the fact that virtually all nonconvex optimization problems can be described using differences of convex functions and differences of convex sets. A conference on Convex Analysis and Global Optimization was held during June 5-9, 2000 at Pythagorion, Samos, Greece. The conference honored the memory of C. Caratheodory (1873-1950) and was endorsed by the Mathematical Programming Society (MPS) and by the Society for Industrial and Applied Mathematics (SIAM) Activity Group in Optimization. The conference was sponsored by the European Union (through the EPEAEK program), the Department of Mathematics of the Aegean University and the Center for Applied Optimization of the University of Florida, by th…

  2. Pareto optimality in organelle energy metabolism analysis.

    Science.gov (United States)

    Angione, Claudio; Carapezza, Giovanni; Costanza, Jole; Lió, Pietro; Nicosia, Giuseppe

    2013-01-01

    In lower and higher eukaryotes, energy is collected or transformed in compartments, the organelles. The rich variety of size, characteristics, and density of the organelles makes it difficult to build a general picture. In this paper, we make use of Pareto-front analysis to investigate the optimization of energy metabolism in mitochondria and chloroplasts. Using the Pareto optimality principle, we compare models of organelle metabolism on the basis of single- and multiobjective optimization, approximation techniques (Bayesian Automatic Relevance Determination), robustness, and pathway sensitivity analysis. Finally, we report the first analysis of the metabolic model for the hydrogenosome of Trichomonas vaginalis, which is found in several protozoan parasites. Our analysis has shown the importance of Pareto optimality for such comparisons and for insights into the evolution of metabolism from cytoplasmic to organelle-bound, involving a model-order reduction. We report that Pareto fronts represent an asymptotic analysis useful for describing the metabolism of an organism aimed at concurrently maximizing two or more metabolite concentrations.
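
    The core operation behind a Pareto-front comparison is the non-dominated filter. A generic two-objective sketch, where random points stand in for, e.g., pairs of metabolite concentrations:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((200, 2))   # 200 candidate solutions, 2 objectives to maximize

def pareto_front(points):
    """Return the points not dominated by any other (maximizing both objectives)."""
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some q is >= p everywhere and > p somewhere.
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

print(len(pareto_front(pts)), "non-dominated points")
```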

  3. Optimizing the Level of Renewable Electric R&D Expenditures Using Real Options Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Davis, G.; Owens, B.

    2003-02-01

    One of the primary objectives of the United States' federal non-hydro renewable electric R&D program is to promote the development of technologies that have the potential to provide consumers with stable and secure energy supplies. In order to quantify the benefits provided by continued federal renewable electric R&D, this paper uses "real option" pricing techniques to estimate the value of renewable electric technologies in the face of uncertain fossil fuel prices. Within the real options analysis framework, the current value of expected future supply from renewable electric technologies, net of federal R&D expenditures, is estimated to be $30.6 billion. Of this value, 86% can be attributed to past federal R&D efforts and 14% to future federal R&D efforts, assuming continued federal R&D funding at $300 million/year. In addition, real options analysis shows that the value of renewable electric technologies increases as current and future R&D funding levels increase. This indicates that the current level of federal renewable electric R&D funding is sub-optimally low.
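
    The real-options logic can be illustrated with a one-period binomial valuation: uncertainty in the underlying project value creates option value beyond static NPV. The numbers below are invented, not the paper's estimates.

```python
import numpy as np

r, T = 0.05, 1.0              # risk-free rate, one-year step
V0, u, d = 100.0, 1.4, 0.7    # PV of project cash flows; up/down factors
I = 95.0                      # deployment (investment) cost

q = (np.exp(r * T) - d) / (u - d)     # risk-neutral probability of the up move
payoff_up = max(u * V0 - I, 0.0)      # invest only if worthwhile next period
payoff_dn = max(d * V0 - I, 0.0)
option = np.exp(-r * T) * (q * payoff_up + (1 - q) * payoff_dn)

print(f"NPV now: {V0 - I:.1f}, option value of waiting: {option:.1f}")
# Widening the u/d spread (more volatility) raises the option value, mirroring
# the finding that R&D value grows with fuel-price uncertainty.
```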

  4. Optimal methodologies for terahertz time-domain spectroscopic analysis of traditional pigments in powder form

    Science.gov (United States)

    Ha, Taewoo; Lee, Howon; Sim, Kyung Ik; Kim, Jonghyeon; Jo, Young Chan; Kim, Jae Hoon; Baek, Na Yeon; Kang, Dai-ill; Lee, Han Hyoung

    2017-05-01

    We have established optimal methods for terahertz time-domain spectroscopic analysis of highly absorbing pigments in powder form based on our investigation of representative traditional Chinese pigments, such as azurite [blue-based color pigment], Chinese vermilion [red-based color pigment], and arsenic yellow [yellow-based color pigment]. To accurately extract the optical constants in the terahertz region of 0.1 - 3 THz, we carried out transmission measurements in such a way that intense absorption peaks did not completely suppress the transmission level. This required preparation of pellet samples with optimized thicknesses and material densities. In some cases, mixing the pigments with polyethylene powder was required to minimize absorption due to certain peak features. The resulting distortion-free terahertz spectra of the investigated set of pigment species exhibited well-defined unique spectral fingerprints. Our study will be useful to future efforts to establish non-destructive analysis methods of traditional pigments, to construct their spectral databases, and to apply these tools to restoration of cultural heritage materials.

  5. Isogeometric Analysis and Shape Optimization in Fluid Mechanics

    DEFF Research Database (Denmark)

    Nielsen, Peter Nørtoft

    This thesis brings together the fields of fluid mechanics, as the study of fluids and flows; isogeometric analysis, as a numerical method to solve engineering problems using computers; and shape optimization, as the art of finding "best" shapes of objects based on some notion of goodness. The flow … approximations, and for shape optimization purposes also due to its tight connection between the analysis and geometry models. The thesis is initiated by short introductions to fluid mechanics and to the building blocks of isogeometric analysis. As the first contribution of the thesis, a detailed description … isogeometric analysis may serve as a natural framework for shape optimization within fluid mechanics. We construct an efficient regularization measure for avoiding inappropriate parametrizations during optimization, and various numerical examples of shape optimization for fluids are considered, serving …

  6. Dispositional Optimism

    Science.gov (United States)

    Carver, Charles S.; Scheier, Michael F.

    2014-01-01

    Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism. PMID:24630971

  7. Programming effort analysis of the ELLPACK language

    Science.gov (United States)

    Rice, J. R.

    1978-01-01

    ELLPACK is a problem statement language and system for elliptic partial differential equations which is implemented by a FORTRAN preprocessor. ELLPACK's principal purpose is as a tool for the performance evaluation of software. However, it is used here as an example with which to study the programming effort required for problem solving. It is obvious that problem statement languages can reduce programming effort tremendously; the goal is to quantify this somewhat. This is done by analyzing the lengths and effort (as measured by Halstead's software science technique) of various approaches to solving these problems.
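    Halstead's software science measures, which the record uses as its effort metric, reduce to a few counting formulas over operators and operands; a minimal sketch of the standard definitions follows (the example counts are invented).

```python
# Halstead "software science" effort: eta1/eta2 are distinct operators/operands,
# N1/N2 their total occurrences. Standard definitions: V = N*log2(eta),
# D = (eta1/2)*(N2/eta2), E = D*V.
import math

def halstead_effort(eta1: int, eta2: int, N1: int, N2: int) -> float:
    vocabulary = eta1 + eta2
    length = N1 + N2
    volume = length * math.log2(vocabulary)   # program "size" in bits
    difficulty = (eta1 / 2) * (N2 / eta2)     # proxy for how hard it is to write
    return difficulty * volume                # effort E = D * V

# A toy program: 10 distinct operators, 8 distinct operands.
print(halstead_effort(eta1=10, eta2=8, N1=40, N2=30))
```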

  8. Effort in Multitasking: Local and Global Assessment of Effort.

    Science.gov (United States)

    Kiesel, Andrea; Dignath, David

    2017-01-01

    When performing multiple tasks in succession, self-organization of task order might be superior compared to external-controlled task schedules, because self-organization allows optimizing processing modes and thus reduces switch costs, and it increases commitment to task goals. However, self-organization is an additional executive control process that is not required if task order is externally specified, and as such it is considered time-consuming and effortful. To compare self-organized and externally controlled task scheduling, we suggest assessing global subjective and objective measures of effort in addition to local performance measures. In our new experimental approach, we combined characteristics of dual-tasking settings and task-switching settings and compared local and global measures of effort in a condition with free choice of task sequence and a condition with cued task sequence. In a multi-tasking environment, participants chose the task order while the task requirement of the not-yet-performed task remained the same. This task preview allowed participants to work on the previously non-chosen items in parallel and resulted in faster responses and fewer errors in task-switch trials than in task-repetition trials. The free-choice group profited more from this task preview than the cued group when considering local performance measures. Nevertheless, the free-choice group invested more effort than the cued group when considering global measures. Thus, self-organization in task scheduling seems to be effortful even in conditions in which it is beneficial for task processing. In a second experiment, we reduced the possibility of task preview for the not-yet-performed tasks in order to hinder efficient self-organization. Here neither local nor global measures revealed substantial differences between the free-choice and a cued task sequence condition. Based on the results of both experiments, we suggest that global assessment of effort in addition to

  9. Guideline adherence is worth the effort: a cost-effectiveness analysis in intrauterine insemination care.

    Science.gov (United States)

    Haagen, E C; Nelen, W L D M; Adang, E M; Grol, R P T M; Hermens, R P M G; Kremer, J A M

    2013-02-01

    As a result, 415 infertile couples who started a total of 1803 IUI cycles were eligible for the cost-effectiveness analyses. Optimal adherence to the guideline recommendations about sperm quality, the total number of IUI cycles and the dose of human chorionic gonadotrophin was cost-effective, with an incremental net monetary benefit between €645 and over €7500 per couple, depending on the recommendation and assuming a willingness to pay €20,000 for an ongoing pregnancy. Because not all recommendations applied to all 415 included couples, smaller groups were left for some of the cost-effectiveness analyses, and one integrated analysis with all recommendations within one model was impossible. Optimal guideline adherence in IUI care has substantial economic benefits when compared with suboptimal guideline adherence. For Europe, where over 144,000 IUI cycles are initiated each year to treat approximately 32,000 infertile couples, this could mean a possible cost saving of at least €20 million yearly. Therefore, it is valuable to make an effort to improve guideline development and implementation.
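    For readers unfamiliar with the outcome measure, incremental net monetary benefit combines the effect and cost differences at a stated willingness to pay; a minimal sketch with hypothetical inputs:

```python
# Incremental net monetary benefit: INMB = wtp * dE - dC, with wtp the
# willingness to pay per ongoing pregnancy (EUR 20,000 in the study). The
# effect and cost differences below are hypothetical.
def inmb(delta_effect: float, delta_cost: float, wtp: float = 20_000.0) -> float:
    return wtp * delta_effect - delta_cost

# e.g. adherence raises the ongoing-pregnancy rate by 5 points at EUR 200 extra cost
print(inmb(delta_effect=0.05, delta_cost=200.0))   # 800.0 per couple
```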

  10. Systematic analysis of the heat exchanger arrangement problem using multi-objective genetic optimization

    International Nuclear Information System (INIS)

    Daróczy, László; Janiga, Gábor; Thévenin, Dominique

    2014-01-01

    A two-dimensional cross-flow tube bank heat exchanger arrangement problem with internal laminar flow is considered in this work. The objective is to optimize the arrangement of tubes and find the most favorable geometries, in order to simultaneously maximize the rate of heat exchange while obtaining a minimum pressure loss. A systematic study was performed involving a large number of simulations. The global optimization method NSGA-II was retained. A fully automated in-house optimization environment was used to solve the problem, including mesh generation and CFD (computational fluid dynamics) simulations. The optimization was performed in parallel on a Linux cluster with a very good speed-up. The main purpose of this article is to illustrate and analyze a heat exchanger arrangement problem in its most general form and to provide a fundamental understanding of the structure of the Pareto front and optimal geometries. The considered conditions are particularly suited for low-power applications, as found in a growing number of practical systems in an effort toward increasing energy efficiency. For such a detailed analysis with more than 140 000 CFD-based evaluations, a design-of-experiment study involving a response surface would not be sufficient. Instead, all evaluations rely on a direct solution using a CFD solver. - Highlights: • Cross-flow tube bank heat exchanger arrangement problem. • A fully automated multi-objective optimization based on a genetic algorithm. • A systematic study involving a large number of CFD (computational fluid dynamics) simulations
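    At the core of any such study is the non-dominance test used to assemble the Pareto front; a minimal sketch for the two objectives named here (maximize heat exchange rate q, minimize pressure loss dp), with invented design points:

```python
# Pareto filtering for (maximize q, minimize dp): keep a design unless some
# other design is at least as good in both objectives and strictly better in one.
def dominates(a, b):
    qa, dpa = a
    qb, dpb = b
    return qa >= qb and dpa <= dpb and (qa > qb or dpa < dpb)

def pareto_front(designs):
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

designs = [(120.0, 8.0), (100.0, 5.0), (90.0, 6.0), (130.0, 12.0)]
print(pareto_front(designs))   # (90, 6) is dominated by (100, 5) and drops out
```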

  11. Efficient Reanalysis Procedures in Structural Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded

    This thesis examines efficient solution procedures for the structural analysis problem within topology optimization. The research is motivated by the observation that when the nested approach to structural optimization is applied, most of the computational effort is invested in repeated solutions...... on approximate reanalysis. For cases where memory limitations require the utilization of iterative equation solvers, we suggest efficient procedures based on alternative termination criteria for such solvers. These approaches are tested on two- and three-dimensional topology optimization problems including...

  12. Efficient use of iterative solvers in nested topology optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Stolpe, Mathias; Sigmund, Ole

    2010-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the analysis equations. In this study, it is suggested to reduce this computational cost by using an approximation to the solution of the analysis problem, generated by a Krylov....... The approximation is computationally shown to be sufficiently accurate for the purpose of optimization though the nested equation system is not necessarily solved accurately. The approach is tested on several large-scale topology optimization problems, including minimum compliance problems and compliant mechanism...

  13. Distributed Algorithms for Time Optimal Reachability Analysis

    DEFF Research Database (Denmark)

    Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand

    2016-01-01

    Time optimal reachability analysis is a novel model-based technique for solving scheduling and planning problems. After modeling them as reachability problems using timed automata, a real-time model checker can compute the fastest trace to the goal states, which constitutes a time optimal schedule. We propose distributed computing to accelerate time optimal reachability analysis. We develop five distributed state exploration algorithms and implement them in Uppaal, enabling it to exploit the compute resources of a dedicated model-checking cluster. We experimentally evaluate the implemented algorithms with four models in terms of their ability to compute near- or proven-optimal solutions, their scalability, time and memory consumption, and communication overhead. Our results show that the distributed algorithms work much faster than sequential algorithms and have good speedup in general.

  14. Cost benefit analysis for optimization of radiation protection

    International Nuclear Information System (INIS)

    Lindell, B.

    1984-01-01

    ICRP recommends three basic principles for radiation protection. One is the justification of the source: any use of radiation should be justified with regard to its benefit. The second is the optimization of radiation protection, i.e. all radiation exposure should be kept as low as reasonably achievable. The third principle is that there should be a limit on the radiation dose that any individual receives. Cost-benefit assessment, or cost-benefit analysis, is one tool to achieve the optimization, but optimization is not identical with cost-benefit analysis. In principle, cost-benefit analysis for the optimization of radiation protection seeks the minimum of the sum of the cost of protection and the cost of detriment. (Mori, K.)
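    In the standard ICRP formulation (Publication 37 gives this form), the optimization reduces to minimizing the sum of the protection cost and a detriment cost taken proportional to collective dose:

```latex
% X(S): cost of protection achieving collective dose S;
% \alpha: monetary value assigned to unit collective dose.
\begin{aligned}
C(S) &= X(S) + \alpha S,\\
\left.\frac{dX}{dS}\right|_{S=S^{*}} &= -\alpha ,
\end{aligned}
```

    so the optimum S* is the dose level at which one more unit spent on protection no longer buys a matching reduction in detriment.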

  15. Optimization of time characteristics in activation analysis

    International Nuclear Information System (INIS)

    Gurvich, L.G.; Umaraliev, A.T.

    2006-01-01

    Full text: The activation analysis temporal-characteristics optimization methods developed to date aim at determining optimal values of three important parameters: irradiation time, cooling time and measurement time. Previous works, especially [1-5], describe the activation analysis processes, obtain the optimal values of the optimization parameters from the equations solved, and give computational results for these parameters for a number of elements. However, the equations presented in [2] were inaccurate, did not yield optimization-parameter results for single-element content calculations, and did not take into account the time dependence of the background. We therefore propose modified equations for determining the optimal temporal parameters, together with iteration processes for solving these equations. It is well known that the activity of the studied sample does not change significantly during measurement, i.e. the measurement time is much shorter than the half-life; the processes taking place can thus be described by the Poisson probability distribution, and in the general case the binomial distribution applies. The equations and iteration processes used in this research describe both probability distributions. As expected, the cooling-time iteration expressions obtained for the single-element case are similar for both distribution types, since the optimized time values turn out to be of the same order as the half-life values, whereas the cooling time, as we observed, depends on the ratio of the studied sample's peak value to the background peak and can be significantly larger than the half-life. This pattern is general and can be derived from the optimized time expressions, which is supported by the experimental data on short-lived isotopes [3,4]. For isotopes with half-lives of up to years, such as cobalt-60, the cooling-time values given in the above-mentioned works amount to months, which, apparently

  16. Primer Vector Optimization: Survey of Theory, new Analysis and Applications

    Science.gov (United States)

    Guzman

    This paper presents a preliminary study in developing a set of optimization tools for orbit rendezvous, transfer and station keeping. This work is part of a large-scale effort under way at NASA Goddard Space Flight Center and a.i. solutions, Inc. to build generic methods which will enable missions with tight fuel budgets. Since no single optimization technique can solve all existing problems efficiently, a library of tools is envisioned from which the user could pick the method most suited for the particular mission. The first trajectory optimization technique explored is Lawden's primer vector theory [Ref. 1]. Primer vector theory can be considered a byproduct of applying Calculus of Variations (COV) techniques to the problem of minimizing the fuel usage of impulsive trajectories. For an n-impulse trajectory, it involves the solution of n-1 two-point boundary value problems. In this paper, we look at some of the different formulations of the primer vector (dependent on the frame employed and on the force model). Also, the applicability of primer vector theory is examined in an effort to understand when and why the theory can fail. Specifically, since COV is based on "small variations", singularities in the linearized (variational) equations of motion along the arcs must be taken into account. These singularities are a recurring problem in analyses that employ "small variations" [Refs. 2, 3]. For example, singularities in the (2-body problem) variational equations along elliptic arcs occur when [Ref. 4]: 1) the difference between the initial and final times is a multiple of the reference orbit period; 2) the difference between the initial and final true anomalies is a multiple of π (kπ, for k = 0, 1, 2, 3, ...), which covers the previous case; or 3) the time of flight is a minimum for the given difference in true anomaly. For the N-body problem, the situation is more complex and is still under investigation. Several examples, such as the initialization of an orbit (ascent trajectory) and

  17. Convex analysis and global optimization

    CERN Document Server

    Tuy, Hoang

    2016-01-01

    This book presents state-of-the-art results and methodologies in modern global optimization, and has been a staple reference for researchers, engineers, advanced students (also in applied mathematics), and practitioners in various fields of engineering. The second edition has been brought up to date and continues to develop a coherent and rigorous theory of deterministic global optimization, highlighting the essential role of convex analysis. The text has been revised and expanded to meet the needs of research, education, and applications for many years to come. Updates for this new edition include: · Discussion of modern approaches to minimax, fixed point, and equilibrium theorems, and to nonconvex optimization; · Increased focus on dealing more efficiently with ill-posed problems of global optimization, particularly those with hard constraints;

  18. Analysis of grinding of superalloys and ceramics for off-line process optimization

    Science.gov (United States)

    Sathyanarayanan, G.

    The present study has compared the performance of resinoid, vitrified, and electroplated CBN wheels in creep feed grinding of M42 and D2 tool steels. Responses such as specific energy, normal and tangential forces, and surface roughness were used as measures of performance. It was found that creep feed grinding with resinoid, vitrified, and electroplated CBN wheels each has its own advantages, but no single wheel could simultaneously provide good finish, low specific energy, and high material removal rates. To optimize CBN grinding with the different bonded wheels, a Multiple Criteria Decision Making (MCDM) methodology was used. Creep feed grinding of the superalloys Ti-6Al-4V and Inconel 718 has been modeled by utilizing neural networks to optimize the grinding process. A parallel effort was directed at creep feed grinding of alumina ceramics with diamond wheels to investigate the influence of process variables on responses based on experimental results and statistical analysis. The conflicting influence of variables was observed. This led to the formulation of the ceramic grinding process as a multi-objective nonlinear mixed-integer problem.

  19. Contracting Fashion Products Supply Chains When Demand Is Dependent on Price and Sales Effort

    OpenAIRE

    Wei, Ying; Xiong, Liyang

    2015-01-01

    This paper investigates optimal decisions in a two-stage fashion product supply chain under two specified contracts: revenue-sharing contract and wholesale price contract, where demand is dependent on retailing price and sales effort level. Optimal decisions and related profits are analyzed and further compared among the cases where the effort investment fee is determined and undertaken either by the retailer or the manufacturer. Results reveal that if the retailer determines the effort inves...

  20. Analysis and Optimization of Distributed Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    and scheduling policies. In this context, the task of designing such systems is becoming increasingly difficult. The success of new adequate design methods depends on the availability of efficient analysis as well as optimization techniques. In this paper, we present both analysis and optimization approaches...... characteristic to this class of systems: mapping of functionality, the optimization of the access to the communication channel, and the assignment of scheduling policies to processes. Optimization heuristics aiming at producing a schedulable system, with a given amount of resources, are presented....

  1. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    Science.gov (United States)

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates this natural phenomenon to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
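    For orientation, a hedged sketch of the classic OCBA allocation rule (after Chen et al.) that the paper builds on: given mean and standard-deviation estimates of each particle's noisy fitness, an extra sampling budget is split so the likely-best particle can be identified reliably. Minimization is assumed and all numbers are illustrative.

```python
# OCBA: for non-best designs, N_i is proportional to (sigma_i/delta_i)^2 with
# delta_i the gap to the current best; the best design b gets
# N_b = sigma_b * sqrt(sum over i != b of (N_i/sigma_i)^2).
import math

def ocba_allocation(means, stds, budget):
    b = min(range(len(means)), key=lambda i: means[i])   # current best design
    ratio = [0.0] * len(means)
    for i, (m, s) in enumerate(zip(means, stds)):
        if i != b:
            ratio[i] = (s / (m - means[b])) ** 2
    ratio[b] = stds[b] * math.sqrt(sum((ratio[i] / stds[i]) ** 2
                                       for i in range(len(means)) if i != b))
    total = sum(ratio)
    return [round(budget * r / total) for r in ratio]

# The close competitor (mean 1.2) receives most of the extra samples.
print(ocba_allocation(means=[1.0, 1.2, 2.0], stds=[0.3, 0.3, 0.3], budget=100))
```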

  2. Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor

    Science.gov (United States)

    Acree, C. W., Jr.

    2010-01-01

    Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.

  3. Reproductive effort accelerates actuarial senescence in wild birds : An experimental study

    NARCIS (Netherlands)

    Boonekamp, Jelle J.; Salomons, Martijn; Bouwhuis, Sandra; Dijkstra, Cornelis; Verhulst, Simon

    Optimality theories of ageing predict that the balance between reproductive effort and somatic maintenance determines the rate of ageing. Laboratory studies find that increased reproductive effort shortens lifespan, but through increased short-term mortality rather than ageing. In contrast, high

  4. Contracting Fashion Products Supply Chains When Demand Is Dependent on Price and Sales Effort

    Directory of Open Access Journals (Sweden)

    Ying Wei

    2015-01-01

    Full Text Available This paper investigates optimal decisions in a two-stage fashion product supply chain under two specified contracts: a revenue-sharing contract and a wholesale price contract, where demand is dependent on retailing price and sales effort level. Optimal decisions and related profits are analyzed and further compared among the cases where the effort investment fee is determined and undertaken either by the retailer or the manufacturer. Results reveal that if the retailer determines the effort investment level, she would be better off under the wholesale price contract and would invest more effort. However, if the manufacturer determines the effort level, he most likely prefers the revenue-sharing contract if both parties agree on consignment.

  5. A rotor optimization using regression analysis

    Science.gov (United States)

    Giansante, N.

    1984-01-01

    The design and development of helicopter rotors is subject to the many design variables and their interactions that affect rotor operation. Until recently, selection of rotor design variables to achieve specified rotor operational qualities has been a costly, time-consuming, repetitive task. For the past several years, Kaman Aerospace Corporation has successfully applied multiple linear regression analysis, coupled with optimization and sensitivity procedures, in the analytical design of rotor systems. It is concluded that approximating equations can be developed rapidly for a multiplicity of objective and constraint functions, and optimizations can be performed in a rapid and cost-effective manner; the number and/or range of design variables can be increased by expanding the data base and developing approximating functions to reflect the expanded design space; the order of the approximating equations can be expanded easily to improve correlation between analyzer results and the approximating equations; gradients of the approximating equations can be calculated easily, and these gradients are smooth functions, reducing the risk of numerical problems in the optimization; the use of approximating functions allows the problem to be started easily and rapidly from various initial designs to enhance the probability of finding a global optimum; and the approximating equations are independent of the analysis or optimization codes used.
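    The workflow described above, fitting approximating equations to analyzer runs and then optimizing the smooth approximation, can be sketched in a few lines; the quadratic "analyzer" below is a stand-in for illustration, not Kaman's rotor code.

```python
# Response-surface surrogate: sample designs, run the (expensive) analyzer,
# fit a quadratic by multiple linear regression, optimize the cheap surrogate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))                 # 30 sampled designs, 2 variables
y = 1 + X[:, 0] ** 2 + 2 * X[:, 1] ** 2 - X[:, 0]    # stand-in analyzer output

def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2], axis=-1)

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)   # regression coefficients
surrogate = lambda x: features(np.asarray(x)) @ beta     # smooth, cheap, differentiable
print(minimize(surrogate, x0=[0.0, 0.0]).x)              # near the true optimum (0.5, 0)
```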

  6. Analysis and Optimization of Mixed-Criticality Applications on Partitioned Distributed Architectures

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Marinescu, S. O.; Pop, Paul

    2012-01-01

    TTEthernet integrates three types of traffic: Time-Triggered (TT) messages, transmitted based on schedule tables; Rate-Constrained (RC) messages, transmitted if there are no TT messages; and Best Effort (BE) messages. We assume that applications are scheduled using Static Cyclic Scheduling (SCS) or Fixed-Priority Preemptive Scheduling (FPS), with tasks executing within predefined time slots allocated on each processor. At the communication level, TTEthernet uses the concept of virtual links for the separation of mixed-criticality messages. We are interested in analysis and optimization methods and tools which decide the mapping of tasks to PEs, the sequence and length of the time partitions on each PE, and the schedule tables of the SCS tasks and TT messages, such that the applications are schedulable and the response times of FPS tasks and RC messages are minimized. We have proposed a Tabu Search-based meta...

  7. Subthalamic nucleus activity optimizes maximal effort motor responses in Parkinson's disease.

    Science.gov (United States)

    Anzak, Anam; Tan, Huiling; Pogosyan, Alek; Foltynie, Thomas; Limousin, Patricia; Zrinzo, Ludvic; Hariz, Marwan; Ashkan, Keyoumars; Bogdanovic, Marko; Green, Alexander L; Aziz, Tipu; Brown, Peter

    2012-09-01

    The neural substrates that enable individuals to achieve their fastest and strongest motor responses have long been enigmatic. Importantly, characterization of such activities may inform novel therapeutic strategies for patients with hypokinetic disorders, such as Parkinson's disease. Here, we ask whether the basal ganglia may play an important role, not only in the attainment of maximal motor responses under standard conditions but also in the setting of the performance enhancements known to be engendered by delivery of intense stimuli. To this end, we recorded local field potentials from deep brain stimulation electrodes implanted bilaterally in the subthalamic nuclei of 10 patients with Parkinson's disease, as they executed their fastest and strongest handgrips in response to a visual cue, which was accompanied by a brief 96-dB auditory tone on random trials. We identified a striking correlation between both theta/alpha (5-12 Hz) and high-gamma/high-frequency (55-375 Hz) subthalamic nucleus activity and force measures, which explained close to 70% of interindividual variance in maximal motor responses to the visual cue alone, when patients were ON their usual dopaminergic medication. Loud auditory stimuli were found to enhance reaction time and peak rate of development of force still further, independent of whether patients were ON or OFF l-DOPA, and were associated with increases in subthalamic nucleus power over a broad gamma range. However, the contribution of this broad gamma activity to the performance enhancements observed was only modest (≤13%). The results implicate frequency-specific subthalamic nucleus activities as substantial factors in optimizing an individual's peak motor responses at maximal effort of will, but much less so in the performance increments engendered by intense auditory stimuli.

  8. CellSort: a support vector machine tool for optimizing fluorescence-activated cell sorting and reducing experimental effort.

    Science.gov (United States)

    Yu, Jessica S; Pertusi, Dante A; Adeniran, Adebola V; Tyo, Keith E J

    2017-03-15

    High throughput screening by fluorescence activated cell sorting (FACS) is a common task in protein engineering and directed evolution. It can also be a rate-limiting step if high false positive or negative rates necessitate multiple rounds of enrichment. Current FACS software requires the user to define sorting gates by intuition and is practically limited to two dimensions. In cases when multiple rounds of enrichment are required, the software cannot forecast the enrichment effort required. We have developed CellSort, a support vector machine (SVM) algorithm that identifies optimal sorting gates based on machine learning using positive and negative control populations. CellSort can take advantage of more than two dimensions to enhance the ability to distinguish between populations. We also present a Bayesian approach to predict the number of sorting rounds required to enrich a population from a given library size. This Bayesian approach allowed us to determine strategies for biasing the sorting gates in order to reduce the required number of enrichment rounds. This algorithm should be generally useful for improving sorting outcomes and reducing effort when using FACS. Source code available at http://tyolab.northwestern.edu/tools/ . k-tyo@northwestern.edu. Supplementary data are available at Bioinformatics online.
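    Conceptually, the gate-selection step is supervised classification of labeled control populations; the sketch below mimics the idea with scikit-learn's SVC on synthetic three-channel data (the real CellSort interface and data handling may differ; see the source link in the record).

```python
# SVM "gate": train on positive/negative control events, then classify new
# events in more than two fluorescence dimensions. All data are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
neg = rng.lognormal(mean=1.0, sigma=0.3, size=(500, 3))   # 3 fluorescence channels
pos = rng.lognormal(mean=1.6, sigma=0.3, size=(500, 3))
X = np.log(np.vstack([neg, pos]))                         # log scale, as usual for FACS
y = np.array([0] * 500 + [1] * 500)

gate = SVC(kernel="linear").fit(X, y)                     # a 3-D gate, not a manual 2-D one
events = np.log(rng.lognormal(mean=1.6, sigma=0.3, size=(10, 3)))
print(gate.predict(events))                               # 1 = sort/keep, 0 = discard
```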

  9. Power and performance software analysis and optimization

    CERN Document Server

    Kukunas, Jim

    2015-01-01

    Power and Performance: Software Analysis and Optimization is a guide to solving performance problems in modern Linux systems. Power-efficient chips are no help if the software those chips run on is inefficient. Starting with the necessary architectural background as a foundation, the book demonstrates the proper usage of performance analysis tools in order to pinpoint the cause of performance problems, and includes best practices for handling common performance issues those tools identify. Provides expert perspective from a key member of Intel's optimization team on how processors and memory

  10. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme

  11. Analysis and Optimization of Heterogeneous Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2005-01-01

    The success of such new design methods depends on the availability of analysis and optimization techniques. In this paper, we present analysis and optimization techniques for heterogeneous real-time embedded systems. We address in more detail a particular class of such systems called multi-clusters, composed...... to frames. Optimization heuristics for frame packing aiming at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach.

  12. Exergy analysis and optimization of a biomass gasification, solid oxide fuel cell and micro gas turbine hybrid system

    DEFF Research Database (Denmark)

    Bang-Møller, Christian; Rokni, Masoud; Elmegaard, Brian

    2011-01-01

    and exergy analyses were applied. Focus in this optimization study was heat management, and the optimization efforts resulted in a substantial gain of approximately 6% in the electrical efficiency of the plant. The optimized hybrid plant produced approximately 290 kWe at an electrical efficiency of 58...

  13. Cost-Optimal Analysis for Nearly Zero Energy Buildings Design and Optimization: A Critical Review

    Directory of Open Access Journals (Sweden)

    Maria Ferrara

    2018-06-01

    Full Text Available Since the introduction of the recast of the EPBD European Directive 2010/31/EU, many studies on the cost-effective feasibility of nearly zero-energy buildings (NZEBs) were carried out both by academic research bodies and by national bodies. In particular, the introduction of the cost-optimal methodology has given a strong impulse to research in this field. This paper presents a comprehensive review of scientific works based on the application of cost-optimal analysis in Europe since the EPBD recast entered into force, pointing out the differences among the analyzed studies and comparing their outcomes before the new recast of the EPBD enters into force in 2018. The analysis is conducted with special regard to the methods used for the energy performance assessment, the global cost calculation, and the selection of the energy efficiency measures leading to design optimization. A critical discussion of the assumptions on which the studies are based, and of the resulting gaps between the cost-optimal performance and the zero-energy target, is provided together with a summary of the resulting cost-optimal sets of technologies to be used for cost-optimal NZEB design in different contexts. It is shown that the cost-optimal approach is an effective method for delineating the future of NZEB design throughout Europe, while emerging criticalities and open research issues are presented.
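    Most of the reviewed studies compute the global cost in the EN 15459-style form prescribed by the EPBD guidelines; a sketch of that standard formula, with symbols as commonly defined, is:

```latex
% C_I: initial investment; C_{a,i}(j): annual cost of component j in year i;
% R_d(i): discount factor for year i; V_{f,\tau}(j): final value of component j
% at the end of the calculation period \tau.
C_g(\tau) \;=\; C_I \;+\; \sum_{j}\left[\sum_{i=1}^{\tau} C_{a,i}(j)\, R_d(i)\;-\;V_{f,\tau}(j)\right]
```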

  14. Nonlinear analysis approximation theory, optimization and applications

    CERN Document Server

    2014-01-01

    Many of our daily-life problems can be written in the form of an optimization problem. Therefore, solution methods are needed to solve such problems. Due to the complexity of the problems, it is not always easy to find the exact solution. However, approximate solutions can be found. The theory of the best approximation is applicable in a variety of problems arising in nonlinear functional analysis and optimization. This book highlights interesting aspects of nonlinear analysis and optimization together with many applications in the areas of physical and social sciences including engineering. It is immensely helpful for young graduates and researchers who are pursuing research in this field, as it provides abundant research resources for researchers and post-doctoral fellows. This will be a valuable addition to the library of anyone who works in the field of applied mathematics, economics and engineering.

  15. Applications of functional analysis to optimal control problems

    International Nuclear Information System (INIS)

    Mizukami, K.

    1976-01-01

    Some basic concepts in functional analysis, a general norm, the Hoelder inequality, functionals and the Hahn-Banach theorem are described; a mathematical formulation of two optimal control problems is introduced by the method of functional analysis. The problem of time-optimal control systems with both norm constraints on control inputs and on state variables at discrete intermediate times is formulated as an L-problem in the theory of moments. The simplex method is used for solving a non-linear minimizing problem inherent in the functional analysis solution to this problem. Numerical results are presented for a train operation. The second problem is that of optimal control of discrete linear systems with quadratic cost functionals. The problem is concerned with the case of unconstrained control and fixed endpoints. This problem is formulated in terms of norms of functionals on suitable Banach spaces. (author)

  16. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    Science.gov (United States)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  17. An Automated DAKOTA and VULCAN-CFD Framework with Application to Supersonic Facility Nozzle Flowpath Optimization

    Science.gov (United States)

    Axdahl, Erik L.

    2015-01-01

    Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high fidelity numerical analysis tools into an automated framework and applying that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.

  18. Behavioral Assessment of Listening Effort Using a Dual-Task Paradigm

    Directory of Open Access Journals (Sweden)

    Jean-Pierre Gagné

    2017-01-01

    Full Text Available Published investigations (n = 29) in which a dual-task experimental paradigm was employed to measure listening effort during speech understanding in younger and older adults were reviewed. A summary of the main findings reported in the articles is provided with respect to the participants' age group and hearing status. Effects of different signal characteristics, such as the test modality, on dual-task outcomes are evaluated, and associations with cognitive abilities and self-report measures of listening effort are described. Then, several procedural issues associated with the use of dual-task experimental paradigms are discussed. Finally, some issues that warrant future research are addressed. The review revealed large variability in the dual-task experimental paradigms that have been used to measure listening effort during speech understanding. The differences in experimental procedures used across studies make it difficult to draw firm conclusions concerning the optimal choice of dual-task paradigm or the sensitivity of specific paradigms to different types of experimental manipulations. In general, the analysis confirmed that dual-task paradigms have been used successfully to measure differences in effort under different experimental conditions, in both younger and older adults. Several research questions that warrant further investigation in order to better understand and characterize the intricacies of dual-task paradigms were identified.
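    Most of the reviewed studies summarize secondary-task decline with a proportional dual-task cost; one common formulation (a sketch; individual papers differ in baseline and sign convention) is:

```python
# Proportional dual-task cost on the secondary task, used as the
# listening-effort index: positive values mean performance worsened
# when the listening task was added.
def dual_task_cost(single_rt: float, dual_rt: float) -> float:
    return (dual_rt - single_rt) / single_rt

print(dual_task_cost(single_rt=450.0, dual_rt=540.0))   # 0.2, i.e. 20% slowing
```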

  19. Morphology Analysis and Optimization: Crucial Factor Determining the Performance of Perovskite Solar Cells

    Directory of Open Access Journals (Sweden)

    Wenjin Zeng

    2017-03-01

    Full Text Available This review presents an overall discussion of morphology analysis and optimization for perovskite (PVSK) solar cells. Surface morphology and energy alignment have been proven to play a dominant role in determining device performance. The effect of key parameters such as solution condition and preparation atmosphere on the crystallization of PVSK, and the characterization of surface morphology and interface distribution in the perovskite layer, are discussed in detail. Furthermore, the analysis of interface energy-level alignment by X-ray photoelectron spectroscopy and ultraviolet photoelectron spectroscopy is presented to reveal the correlation between morphology and charge generation and collection within the perovskite layer, and its influence on device performance. Techniques including architecture modification and solvent annealing are reviewed as efficient approaches to improve the morphology of PVSK. It is expected that further progress will be achieved as more effort is devoted to understanding the mechanisms of surface engineering in the field of PVSK solar cells.

  20. Efficient use of iterative solvers in nested topology optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Stolpe, Mathias; Sigmund, Ole

    2009-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, it is suggested to reduce this computational cost by using an approximation to the solution of the nested problem, generated...... measures. The approximation is shown to be sufficiently accurate for the practical purpose of optimization even though the nested equation system is not solved accurately. The approach is tested on several medium-scale topology optimization problems, including three dimensional minimum compliance problems...

  1. Evaluation of Analysis by Cross-Validation, Part II: Diagnostic and Optimization of Analysis Error Covariance

    Directory of Open Access Journals (Sweden)

    Richard Ménard

    2018-02-01

    Full Text Available We present a general theory of estimation of analysis error covariances based on cross-validation as well as a geometric interpretation of the method. In particular, we use the variance of passive observation-minus-analysis residuals and show that the true analysis error variance can be estimated, without relying on the optimality assumption. This approach is used to obtain near optimal analyses that are then used to evaluate the air quality analysis error using several different methods at active and passive observation sites. We compare the estimates according to the method of Hollingsworth-Lönnberg, Desroziers et al., a new diagnostic we developed, and the perceived analysis error computed from the analysis scheme, to conclude that, as long as the analysis is near optimal, all estimates agree within a certain error margin.
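    The key identity behind the passive-observation approach can be stated compactly (a sketch, under the usual assumption that errors of withheld observations are uncorrelated with the analysis error); note that it requires no optimality assumption:

```latex
% O: passive (withheld) observations; A: analysis interpolated to those sites;
% \sigma_o^2: known observation error variance; \sigma_a^2: analysis error variance.
\operatorname{var}(O - A) \;=\; \sigma_o^{2} + \sigma_a^{2}
\quad\Longrightarrow\quad
\hat{\sigma}_a^{2} \;=\; \operatorname{var}(O - A) \;-\; \sigma_o^{2}
```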

  2. Design optimization and uncertainty analysis of SMA morphing structures

    International Nuclear Information System (INIS)

    Oehler, S D; Hartl, D J; Lopez, R; Malak, R J; Lagoudas, D C

    2012-01-01

    The continuing implementation of shape memory alloys (SMAs) as lightweight solid-state actuators in morphing structures has now motivated research into finding optimized designs for use in aerospace control systems. This work proposes methods that use iterative analysis techniques to determine optimized designs for morphing aerostructures and consider the impact of uncertainty in model variables on the solution. A combination of commercially available and custom coded tools is utilized. ModelCenter, a suite of optimization algorithms and simulation process management tools, is coupled with the Abaqus finite element analysis suite and a custom SMA constitutive model to assess morphing structure designs in an automated fashion. The chosen case study involves determining the optimized configuration of a morphing aerostructure assembly that includes SMA flexures. This is accomplished by altering design inputs representing the placement of active components to minimize a specified cost function. An uncertainty analysis is also conducted using design of experiment methods to determine the sensitivity of the solution to a set of uncertainty variables. This second study demonstrates the effective use of Monte Carlo techniques to simulate the variance of model variables representing the inherent uncertainty in component fabrication processes. This paper outlines the modeling tools used to execute each case study, details the procedures for constructing the optimization problem and uncertainty analysis, and highlights the results from both studies. (paper)

  3. Logic-based methods for optimization combining optimization and constraint satisfaction

    CERN Document Server

    Hooker, John

    2011-01-01

    A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible

  4. What makes a reach movement effortful? Physical effort discounting supports common minimization principles in decision making and motor control.

    Directory of Open Access Journals (Sweden)

    Pierre Morel

    2017-06-01

    Full Text Available When deciding between alternative options, a rational agent chooses on the basis of the desirability of each outcome, including associated costs. As different options typically result in different actions, the effort associated with each action is an essential cost parameter. How do humans discount physical effort when deciding between movements? We used an action-selection task to characterize how subjective effort depends on the parameters of arm transport movements and controlled for potential confounding factors such as delay discounting and performance. First, by repeatedly asking subjects to choose between 2 arm movements of different amplitudes or durations, performed against different levels of force, we identified parameter combinations that subjects experienced as identical in effort (isoeffort curves. Movements with a long duration were judged more effortful than short-duration movements against the same force, while movement amplitudes did not influence effort. Biomechanics of the movements also affected effort, as movements towards the body midline were preferred to movements away from it. Second, by introducing movement repetitions, we further determined that the cost function for choosing between effortful movements had a quadratic relationship with force, while choices were made on the basis of the logarithm of these costs. Our results show that effort-based action selection during reaching cannot easily be explained by metabolic costs. Instead, force-loaded reaches, a widely occurring natural behavior, imposed an effort cost for decision making similar to cost functions in motor control. Our results thereby support the idea that motor control and economic choice are governed by partly overlapping optimization principles.
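    One way to read the reported cost structure (a sketch consistent with the abstract, not the authors' fitted model; the linear dependence on duration is an added assumption) is that effort grows quadratically with force F, scales with duration T, and choices compare log-costs:

```latex
C(F, T) \;\propto\; F^{2}\, T,
\qquad
\Delta_{\text{choice}} \;=\; \log C(F_{1}, T_{1}) \;-\; \log C(F_{2}, T_{2})
```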

  5. Competing probabilistic models for catch-effort relationships in wildlife censuses

    Energy Technology Data Exchange (ETDEWEB)

    Skalski, J.R.; Robson, D.S.; Matsuzaki, C.L.

    1983-01-01

    Two probabilistic models are presented for describing the chance that an animal is captured during a wildlife census, as a function of trapping effort. The models in turn are used to propose relationships between sampling intensity and catch-per-unit-effort (C.P.U.E.) that were field tested on small mammal populations. Capture data suggest a model of diminishing C.P.U.E. with increasing levels of trapping intensity. The catch-effort model is used to illustrate optimization procedures in the design of mark-recapture experiments for censusing wild populations. 14 references, 2 tables.
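    A standard catch-effort formulation consistent with the abstract's finding can be sketched as follows: capture probability saturates with trapping effort f, so catch-per-unit-effort falls as effort increases (the catchability and population size below are illustrative).

```python
# Poisson-type encounter model: p(f) = 1 - exp(-c*f). Expected CPUE is then
# N * p(f) / f, which diminishes with increasing trapping intensity f.
import math

def capture_prob(effort: float, catchability: float = 0.1) -> float:
    return 1.0 - math.exp(-catchability * effort)

def cpue(effort: float, population: int = 200, catchability: float = 0.1) -> float:
    return population * capture_prob(effort, catchability) / effort

for f in (1, 5, 10, 20):
    print(f, round(cpue(f), 2))   # 19.03, 15.74, 12.64, 8.65: diminishing CPUE
```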

  6. Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Sen Zhang

    2015-01-01

    Full Text Available One heuristic evolutionary algorithm recently proposed is the grey wolf optimizer (GWO, inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on Powell local optimization method, and we call it PGWO. PGWO algorithm significantly improves the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique. Hence, the PGWO could be applied in solving clustering problems. In this study, first the PGWO algorithm is tested on seven benchmark functions. Second, the PGWO algorithm is used for data clustering on nine data sets. Compared to other state-of-the-art evolutionary algorithms, the results of benchmark and data clustering demonstrate the superior performance of PGWO algorithm.
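    For orientation, the sketch below implements the plain GWO position update that PGWO starts from; the Powell local-search refinement the paper adds is omitted, and the sphere test function and all parameters are illustrative.

```python
# Grey wolf optimizer core update: each wolf moves toward the mean of three
# attractors derived from the current alpha, beta and delta (three best) wolves.
import numpy as np

def gwo_step(wolves, f, a):
    """One GWO iteration; the control parameter a decays from 2 to 0 over the run."""
    fitness = np.apply_along_axis(f, 1, wolves)
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
    new = []
    for x in wolves:
        attractors = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(2)
            A, C = 2 * a * r1 - a, 2 * r2
            attractors.append(leader - A * np.abs(C * leader - x))
        new.append(np.mean(attractors, axis=0))
    return np.array(new)

sphere = lambda x: float(np.sum(x ** 2))
wolves = np.random.uniform(-5, 5, size=(20, 2))
for t in range(100):
    wolves = gwo_step(wolves, sphere, a=2 * (1 - t / 100))
print(min(np.apply_along_axis(sphere, 1, wolves)))   # near 0 on the sphere function
```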

  7. Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort.

    Science.gov (United States)

    Jeschek, Markus; Gerngross, Daniel; Panke, Sven

    2016-03-31

    Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways.

  8. Aerodynamic multi-objective integrated optimization based on principal component analysis

    Directory of Open Access Journals (Sweden)

    Jiangtao HUANG

    2017-08-01

    Full Text Available Based on an improved multi-objective particle swarm optimization (MOPSO) algorithm with principal component analysis (PCA) methodology, an efficient high-dimension multi-objective optimization method is proposed, which aims to improve the convergence of the Pareto front in multi-objective optimization design. The mathematical efficiency, physical reasonableness and reliability of PCA in dealing with redundant objectives are verified with the typical DTLZ5 test function and a multi-objective correlation analysis of a supercritical airfoil, and the proposed method is integrated into an aircraft multi-disciplinary design (AMDEsign) platform, which contains aerodynamics, stealth and structure weight analysis and optimization modules. The proposed method is then used for the multi-point integrated aerodynamic optimization of a wide-body passenger aircraft, in which the redundant objectives identified by PCA are transformed into optimization constraints, and several design methods are compared. The design results illustrate that the strategy used in this paper is sufficient and the multi-point design requirements of the passenger aircraft are met. The visualization level of the non-dominated Pareto set is improved by effectively reducing the dimension without losing the primary features of the problem.

  9. Analysis and design optimization of flexible pavement

    Energy Technology Data Exchange (ETDEWEB)

    Mamlouk, M.S.; Zaniewski, J.P.; He, W.

    2000-04-01

    A project-level optimization approach was developed to minimize total pavement cost within an analysis period. Using this approach, the designer is able to select the optimum initial pavement thickness, overlay thickness, and overlay timing. The model in this approach is capable of predicting both pavement performance and condition in terms of roughness, fatigue cracking, and rutting. The developed model combines the American Association of State Highway and Transportation Officials (AASHTO) design procedure and the mechanistic multilayer elastic solution. The Optimization for Pavement Analysis (OPA) computer program was developed using the prescribed approach. The OPA program incorporates the AASHTO equations, the multilayer elastic system ELSYM5 model, and the nonlinear dynamic programming optimization technique. The program is PC-based and can run in either a Windows 3.1 or a Windows 95 environment. Using the OPA program, a typical pavement section was analyzed under different traffic volumes and material properties. The optimum design strategy that produces the minimum total pavement cost in each case was determined. The initial construction cost, overlay cost, highway user cost, and total pavement cost were also calculated. The methodology developed during this research should lead to more cost-effective pavements for agencies adopting the recommended analysis methods.

  10. Tool Support for Software Lookup Table Optimization

    Directory of Open Access Journals (Sweden)

    Chris Wilcox

    2011-01-01

    Full Text Available A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0× and 6.9× for two molecular biology algorithms, 1.4× for a molecular dynamics program, 2.1× to 2.8× for a neural network application, and 4.6× for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches.
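    The transformation a tool like Mesa automates can be shown by hand; the sketch below (a generic example, not Mesa's generated code) replaces repeated calls to a costly elementary function with a precomputed table plus linear interpolation over a declared input domain, trading a bounded approximation error for speed.

```python
# Software lookup table for exp(-x*x) on [0, 10]: build once, reuse many times.
import math

LO, HI, N = 0.0, 10.0, 4096
STEP = (HI - LO) / (N - 1)
TABLE = [math.exp(-x * x) for x in (LO + i * STEP for i in range(N))]

def fast_gauss(x: float) -> float:
    i, frac = divmod((x - LO) / STEP, 1.0)
    i = int(i)
    if i >= N - 1:                 # clamp at the upper domain edge
        return TABLE[-1]
    return TABLE[i] * (1 - frac) + TABLE[i + 1] * frac   # linear interpolation

print(fast_gauss(2.5), math.exp(-2.5 * 2.5))   # approximate vs. exact
```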

  11. Context-dependent memory decay is evidence of effort minimization in motor learning: a computational study.

    Science.gov (United States)

    Takiyama, Ken

    2015-01-01

    Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed during performing the trained movement as non-trained movements, a recent study reported that the motor memory decays faster during performing the trained movement than non-trained movements, i.e., the decay rate of motor memory is movement or context dependent. Although motor learning has been successfully modeled based on an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered the optimization. Here, I analytically and numerically revealed that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.

  12. Context-dependent memory decay is evidence of effort minimization in motor learning: A computational study

    Directory of Open Access Journals (Sweden)

    Ken Takiyama

    2015-02-01

    Full Text Available Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed during performing the trained movement as non-trained movements, a recent study reported that the motor memory decays faster during performing the trained movement than non-trained movements, i.e., the decay rate of motor memory is movement or context dependent. Although motor learning has been successfully modeled based on an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered the optimization. Here, I analytically and numerically revealed that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.
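    The two-process model that both versions of this record describe is easy to simulate; in the toy state-space sketch below, a context-dependent retention factor (the values are illustrative, not the paper's fits) produces the faster-decay-in-the-trained-context pattern that the effort-minimization account explains.

```python
# Per trial: memory x decays by a retention factor and learns a fraction b of
# the error e. Making retention context dependent (a_trained < a_transfer)
# yields faster forgetting when the trained movement is performed.
def simulate(trials, a_trained=0.95, a_transfer=0.99, b=0.1, perturbation=1.0):
    x = 0.0
    for _ in range(trials):
        error = perturbation - x        # desired minus produced movement
        x = a_trained * x + b * error   # decay plus error-driven update
    return a_trained * x, a_transfer * x   # one further decay step in each context

print(simulate(100))   # trained-context retention is lower than transfer retention
```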

  13. A procedure for multi-objective optimization of tire design parameters

    Directory of Open Access Journals (Sweden)

    Nikola Korunović

    2015-04-01

    Full Text Available The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. To improve tire performance characteristics, a multi-objective optimization problem must be formulated and solved. This paper presents a multi-objective optimization procedure for determining the optimal tire design parameters for simultaneous minimization of strain energy density at two distinctive zones inside the tire. It consists of four main stages: pre-analysis, design of experiments, mathematical modeling, and multi-objective optimization. The advantage of the proposed procedure is that the multi-objective optimization is based on the Pareto concept, which enables design engineers to obtain a complete set of optimization solutions and choose a suitable tire design. Furthermore, modeling the relationships between tire design parameters and objective functions by multiple regression analysis minimizes computational and modeling effort. The adequacy of the proposed tire design multi-objective optimization procedure has been validated by experimental trials based on the finite element method.
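
    The Pareto step of such a procedure is easy to sketch. The code below (objective surrogates and parameter ranges are invented stand-ins, not the paper's regression models) samples candidate designs, evaluates two competing objectives, and filters the non-dominated set from which a designer would choose.

```python
import numpy as np

# Pareto filtering over sampled designs. The two objective surrogates
# f1, f2 below are invented stand-ins for the strain energy density at
# the two tire zones; X holds normalized design parameters.

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 3))

f1 = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + X[:, 0] * X[:, 2]
f2 = 2.0 - 1.0 * X[:, 0] + 2.5 * X[:, 1] - X[:, 1] * X[:, 2]
F = np.column_stack([f1, f2])

def pareto_mask(F):
    """True for points not dominated by any other point (minimization)."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominates_i.any()
    return keep

front = F[pareto_mask(F)]
print(f"{len(front)} non-dominated designs out of {len(F)}")
```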

  14. Recent Advances in Multidisciplinary Analysis and Optimization, part 3

    Science.gov (United States)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: aircraft design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  15. Optimization and Validation of the Developed Uranium Isotopic Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. H.; Kang, M. Y.; Kim, Jinhyeong; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    γ-ray spectroscopy is a representative non-destructive assay (NDA) for nuclear material, and is less time-consuming and less expensive than destructive analysis. Although the destructive technique is more precise than the NDA technique, correction algorithms can improve the performance of γ-spectroscopy. For this reason, an analysis code for uranium isotopic analysis was developed by the Applied Nuclear Physics Group at Seoul National University. Overlapped γ- and X-ray peaks in the 89-101 keV Xα region are fitted with Gaussian and Lorentzian peak functions plus tail and background functions. In this study, the full-energy peak efficiency calibration and the fitting parameters of the peak tail and background are optimized, and the optimization is validated against 24-hour acquisitions of CRM uranium samples. The analysis performance is improved for HEU samples, but further optimization of the fitting parameters is required for LEU sample analysis. In the future, the fitting parameters will be optimized for various types of uranium samples, and ²³⁴U isotopic analysis algorithms and correction algorithms (coincidence effect, self-attenuation effect) will be developed.

  16. Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2017-01-01

    Highlights: • Self-adaptive Jaya algorithm is proposed for optimal design of thermal devices. • Optimization of a heat pipe, cooling tower, heat sink and thermo-acoustic prime mover is presented. • Results of the proposed algorithm are better than those of other optimization techniques. • The proposed algorithm may be conveniently used for the optimization of other devices. - Abstract: The present study explores the use of an improved Jaya algorithm, called the self-adaptive Jaya algorithm, for the optimal design of selected thermal devices, viz. heat pipe, cooling tower, honeycomb heat sink and thermo-acoustic prime mover. Four optimization case studies of the selected thermal devices are presented. Researchers had attempted the same design problems in the past using the niched Pareto genetic algorithm (NPGA), response surface method (RSM), leap-frog optimization program with constraints (LFOPC) algorithm, teaching-learning based optimization (TLBO) algorithm, grenade explosion method (GEM) and multi-objective genetic algorithm (MOGA). The results achieved by the self-adaptive Jaya algorithm are compared with those achieved by the NPGA, RSM, LFOPC, TLBO, GEM and MOGA algorithms. The self-adaptive Jaya algorithm proves superior to the other optimization methods in terms of results, computational effort and function evaluations.
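
    For reference, the core Jaya update is compact enough to show in full. The following is a minimal basic-Jaya implementation on a stand-in objective; the self-adaptive variant additionally adapts the population size between iterations, which is omitted here, and the bounds and test function are my own choices.

```python
import numpy as np

# Basic Jaya update: every candidate moves toward the current best and
# away from the current worst, with no algorithm-specific tuning
# parameters (the self-adaptive variant also adapts population size).

def jaya(f, bounds, pop=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        best, worst = X[fit.argmin()], X[fit.argmax()]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        Xnew = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                       lo, hi)
        fnew = np.array([f(x) for x in Xnew])
        improved = fnew < fit
        X[improved] = Xnew[improved]          # greedy acceptance
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], fit.min()

# Stand-in objective (a real case would evaluate, e.g., heat pipe cost):
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = jaya(sphere, bounds=[(-5.0, 5.0)] * 4)
print(x_best.round(4), f_best)
```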

  17. Hydrogen economy: a little bit more effort

    International Nuclear Information System (INIS)

    Pauron, M.

    2008-01-01

    In a few years, the use of hydrogen in the economy has become a credible possibility. Today, billions of euros are invested in the hydrogen industry, which is strengthened by technological advances in fuel cell development and by increasing optimism. However, additional research efforts and more financing will be necessary to make the dream of a hydrogen-based economy a reality.

  18. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools, enabling true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world applications. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  19. Optimal unemployment insurance with monitoring and sanctions

    NARCIS (Netherlands)

    Boone, J.; Fredriksson, P.; Holmlund, B.; van Ours, J.C.

    2007-01-01

    This article analyses the design of optimal unemployment insurance in a search equilibrium framework where search effort among the unemployed is not perfectly observable. We examine to what extent the optimal policy involves monitoring of search effort and benefit sanctions if observed search is

  20. Optimal depth-based regional frequency analysis

    Science.gov (United States)

    Wazneh, H.; Chebana, F.; Ouarda, T. B. M. J.

    2013-06-01

    Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region which can lead to a loss of some information and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently, based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function φ (e.g., φ minimizing estimation errors). In order to avoid a subjective choice and naïve selection procedures of φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of φ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.

  1. Mixed-Integer Nonconvex Quadratic Optimization Relaxations and Performance Analysis

    Science.gov (United States)

    2016-10-11

    stationary point. These results are the state of the art in the complexity analysis of non-convex optimization. Related publications include "Complexity of Unconstrained L2-Lp Minimization" and "Station Parameter Optimized Radiation Therapy (SPORT)" (M. Zarepisheh, Y. Ye, S. Boyd, R. Li, L. Xing), Medical Physics 41(6) (2014) 292-292. Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in

  2. Turbine Airfoil Optimization Using Quasi-3D Analysis Codes

    Directory of Open Access Journals (Sweden)

    Sanjay Goel

    2009-01-01

    Full Text Available A new approach to optimize the geometry of a turbine airfoil by simultaneously designing multiple 2D sections of the airfoil is presented in this paper. The complexity of 3D geometry modeling is circumvented by generating multiple 2D airfoil sections and constraining their geometry in the radial direction using first- and second-order polynomials that ensure smoothness in the radial direction. The flow fields of candidate geometries obtained during optimization are evaluated using a quasi-3D, inviscid, CFD analysis code. An inviscid flow solver is used to reduce the execution time of the analysis. Multiple evaluation criteria based on the Mach number profile obtained from the analysis of each airfoil cross-section are used for computing a quality metric. A key contribution of the paper is the development of metrics that emulate the perception of the human designer in visually evaluating the Mach Number distribution. A mathematical representation of the evaluation criteria coupled with a parametric geometry generator enables the use of formal optimization techniques in the design. The proposed approach is implemented in the optimal design of a low-pressure turbine nozzle.

  3. Optimizing performance in a self-conducted "Rightsizing" effort

    International Nuclear Information System (INIS)

    Annon, M.C.

    1996-01-01

    The differentiation among "rightsizing," "downsizing," and "reengineering" has been lost by many organizations. Unfortunately, of late, these approaches to improved competitiveness and worker productivity are being viewed by many companies as neither worth the effort nor achieving the desired results; in some cases, the effort may actually create more negative results. This type of negative perception has been documented in a variety of sources, which report the following: (1) less than half of companies saw improvements in operating profits after the cuts were made; (2) the processes significantly degraded morale among more than 75% of employees; (3) less than one-third of organizations reported improvements in worker productivity; and (4) even Michael Hammer (the reengineering "guru") believes that more than 50%, and maybe as much as 70%, of organizations do not achieve the intended results. This paper describes an integrated organizational review process, applied within a nuclear utility, that did achieve the desired results.

  4. Orthogonal Analysis Based Performance Optimization for Vertical Axis Wind Turbine

    Directory of Open Access Journals (Sweden)

    Lei Song

    2016-01-01

    Full Text Available The geometrical shape of a vertical axis wind turbine (VAWT) is composed of multiple structural parameters. Since there are interactions among the structural parameters, traditional research approaches, which usually focus on one parameter at a time, cannot characterize the performance of the wind turbine accurately. In order to exploit the overall effect of a novel VAWT, we first use a single-parameter optimization method to obtain optimal values of the structural parameters, respectively, by the Computational Fluid Dynamics (CFD) method; based on the results, we then use an orthogonal analysis method to investigate the influence of interactions of the structural parameters on the performance of the wind turbine and to obtain an optimal combination of the structural parameters that accounts for the interactions. Results of the analysis of variance indicate that interactions among the structural parameters influence the performance of the wind turbine, and the optimization results based on orthogonal analysis achieve higher wind energy utilization than those of traditional research approaches.

  5. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  6. The optimal amount and allocation of sampling effort for plant health inspection

    NARCIS (Netherlands)

    Surkov, I.; Oude Lansink, A.G.J.M.; Werf, van der W.

    2009-01-01

    Plant import inspection can prevent the introduction of exotic pests and diseases, thereby averting economic losses. We explore the optimal allocation of a fixed budget, taking into account risk differentials, and the optimal-sized budget to minimise total pest costs. A partial-equilibrium market

  7. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN controllers: POX, OpenDaylight, Floodlight, and Ryu. The test, which shows outcomes similar to traditional qualitative analysis, demonstrates that our approach yields specific security values for different controllers and presents more accurate results.

  8. Enteric disease surveillance under the AFHSC-GEIS: Current efforts, landscape analysis and vision forward

    Directory of Open Access Journals (Sweden)

    Kasper Matthew R

    2011-03-01

    Full Text Available Abstract The mission of the Armed Forces Health Surveillance Center, Division of Global Emerging Infections Surveillance and Response System (AFHSC-GEIS) is to support global public health and to counter infectious disease threats to the United States Armed Forces, including newly identified agents or those increasing in incidence. Enteric diseases are a growing threat to U.S. forces, which must be ready to deploy to austere environments where the risk of exposure to enteropathogens may be significant and where routine prevention efforts may be impractical. In this report, the authors review the recent activities of AFHSC-GEIS partner laboratories with regard to enteric disease surveillance, prevention and response. Each partner identified recent accomplishments, including support for regional networks. AFHSC-GEIS partners also completed a Strengths, Weaknesses, Opportunities and Threats (SWOT) survey as part of a landscape analysis of global enteric surveillance efforts. The current strengths of this network include excellent laboratory infrastructure, equipment and personnel that provide the opportunity for high-quality epidemiological studies and test platforms for point-of-care diagnostics. Weaknesses include inconsistent guidance and a splintered reporting system that hampers the comparison of data across regions or longitudinally. The newly chartered Enterics Surveillance Steering Committee (ESSC) is intended to provide clear mission guidance, a structured project review process, and central data management and analysis in support of rationally directed enteric disease surveillance efforts.

  9. Structure optimization and simulation analysis of the quartz micromachined gyroscope

    Directory of Open Access Journals (Sweden)

    Xuezhong Wu

    2014-02-01

    Full Text Available The structure optimization and simulation analysis of a quartz micromachined gyroscope are reported in this paper. The relationships between the structure parameters and the frequencies of the working modes were analysed by finite element analysis. The structure parameters of the quartz micromachined gyroscope were optimized to reduce the difference between the frequencies of the drive mode and the sense mode. The simulation results were verified by testing a prototype gyroscope, which was fabricated by micro-electromechanical systems (MEMS) technology. The frequencies of the drive mode and the sense mode can therefore be matched through structure optimization and simulation analysis, which is helpful in the design of high-sensitivity quartz micromachined gyroscopes.

  10. Toward optimizing the delivery and use of climate science for natural resource management: lessons learned from recent adaptation efforts in the southwestern U.S.

    Science.gov (United States)

    Enquist, C.

    2014-12-01

    Within the past decade, a wealth of federal, state, and NGO-driven initiatives has emerged across managed landscapes in the United States with the goal of facilitating a coordinated response to rapidly changing climate and environmental conditions. In addition to acquisition and translation of the latest climate science, climate vulnerability assessment and scenario planning at multiple spatial and temporal scales are typically major components of such broad adaptation efforts. Numerous approaches for conducting this work have emerged in recent years and have culminated in general guidance and trainings for resource professionals that are specifically designed to help practitioners face the challenges of climate change. In particular, early engagement of stakeholders across multiple jurisdictions is particularly critical to cultivate buy-in and other enabling conditions for moving the science to on-the-ground action. I report on a suite of adaptation efforts in the southwestern US and interior Rockies, highlighting processes used, actions taken, lessons learned, and recommended next steps to facilitate achieving desired management outcomes. This includes a discussion of current efforts to optimize funding for actionable climate science, formalize science-management collaborations, and facilitate new investments in approaches for strategic climate-informed monitoring and evaluation.

  11. Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools

    Science.gov (United States)

    Orr, Stanley A.; Narducci, Robert P.

    2009-01-01

    A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.

  12. Preliminary results from phase III of the IAEA CRP: optimizing reactor pressure vessel surveillance programmes and their analysis

    Energy Technology Data Exchange (ETDEWEB)

    Brumovsky, M; Gillemot, F; Kryukov, A; Levit, V

    1994-12-31

    This paper gives preliminary results and some conclusions from Phase III of the IAEA Coordinated Research Programme on "Optimizing the Reactor Pressure Vessel Surveillance Programmes and their Analyses", carried out during the last seven years in 15 member states. The first analysis concerned: comparison of results for the initial, un-irradiated material condition; and comparison of transition temperature shifts (from notch toughness testing) with respect to the content of residual (P, Cu) and alloying (Ni) elements, type of material (base and weld metal), irradiation temperature (288 and 265 °C), and type of fluence dependence. Special effort was devoted to the analysis of the behaviour of a chosen reference steel (JRQ). 6 figs., 4 tabs.

  13. Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu [Texas Tech University (United States); Jablonowski, Christopher [Shell Exploration and Production Company (United States); Lake, Larry [University of Texas at Austin (United States)

    2017-04-15

    Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.

  14. Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs

    International Nuclear Information System (INIS)

    Ettehadtavakkol, Amin; Jablonowski, Christopher; Lake, Larry

    2017-01-01

    Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.

  15. Multiproduct Multiperiod Newsvendor Problem with Dynamic Market Efforts

    Directory of Open Access Journals (Sweden)

    Jianmai Shi

    2016-01-01

    Full Text Available We study a multiperiod multiproduct production planning problem in which both production capacity and the effect of marketing effort on demand are considered. The cumulative impact of marketing effort on demand is captured by the Nerlove and Arrow (N-A) advertising model. The problem is formulated as a discrete-time, finite-horizon dynamic optimization problem, which can be viewed as an extension of the classic newsvendor problem integrated with the N-A model. A Lagrangian relaxation based solution approach is developed to solve the problem, in which the subgradient algorithm is used to find an upper bound of the solution and a feasibility heuristic algorithm is proposed to search for a feasible lower bound. Twelve classes of instances of different sizes, involving up to 50 products and 15 planning periods, are randomly generated and used to test the Lagrangian heuristic algorithm. Computational results show that the proposed approach obtains near-optimal solutions for all instances in very short CPU time, less than 90 seconds even for the largest instance.
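
    The dualization idea scales down to a compact illustration. The sketch below (all data invented) applies the same Lagrangian-relaxation logic to a simpler cousin of the paper's model, a single-period multiproduct newsvendor with one shared capacity: for a fixed multiplier each product reduces to a plain newsvendor with an inflated cost, and a subgradient step adjusts the multiplier until the capacity constraint is respected.

```python
import numpy as np
from scipy.stats import norm

# Single-period multiproduct newsvendor with one shared capacity,
# solved by dualizing the capacity constraint (all data invented).

p = np.array([10.0, 8.0, 12.0])     # unit selling prices
c = np.array([4.0, 3.0, 7.0])       # unit costs
mu = np.array([100.0, 80.0, 60.0])  # normal demand means
sd = np.array([20.0, 15.0, 10.0])   # normal demand std devs
C = 180.0                           # shared capacity

lam, step = 0.0, 0.05
for _ in range(500):
    # For fixed lam each product is a plain newsvendor with effective
    # unit cost c + lam: order up to the critical quantile.
    ratio = np.clip((p - c - lam) / p, 1e-6, 1.0 - 1e-6)
    q = np.maximum(mu + sd * norm.ppf(ratio), 0.0)
    lam = max(0.0, lam + step * (q.sum() - C))   # subgradient step

print("multiplier:", round(lam, 3), "orders:", q.round(1), "total:", round(q.sum(), 1))
```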

  16. Fixed point theory, variational analysis, and optimization

    CERN Document Server

    Al-Mezel, Saleh Abdullah R; Ansari, Qamrul Hasan

    2015-01-01

    ""There is a real need for this book. It is useful for people who work in areas of nonlinear analysis, optimization theory, variational inequalities, and mathematical economics.""-Nan-Jing Huang, Sichuan University, Chengdu, People's Republic of China

  17. Methodological framework for economical and controllable design of heat exchanger networks: Steady-state analysis, dynamic simulation, and optimization

    International Nuclear Information System (INIS)

    Masoud, Ibrahim T.; Abdel-Jabbar, Nabil; Qasim, Muhammad; Chebbi, Rachid

    2016-01-01

    Highlights: • HEN total annualized cost, heat recovery, and controllability are considered in the framework. • Steady-state and dynamic simulations are performed. • The effect of bypasses on total annualized cost and controllability is reported. • Optimum bypass fractions are found from closed- and open-loop analyses. - Abstract: The problem of interaction between the economic design and the control system design of heat exchanger networks (HENs) is addressed in this work. Controllability issues are incorporated into the classical design of HENs, and a new methodological framework is proposed to account for both the economics and the controllability of HENs. Two classical design methods are employed, namely Pinch and superstructure designs. Controllability measures such as the relative gain array (RGA) and singular value decomposition (SVD) are used. The proposed framework also presents a bypass placement strategy for optimal control of the designed network. A case study is used to test the applicability of the framework and to assess both economics and controllability. The results indicate that the superstructure design is more economical and controllable than the Pinch design. The controllability of the designed HEN is evaluated using the Aspen-HYSYS closed-loop dynamic simulator. In addition, a sensitivity analysis is performed to study the effect of bypass fractions on the total annualized cost and controllability of the designed HEN. The analysis shows that increasing any bypass fraction increases the total annualized cost. However, the same trend was not observed for the control effort, quantified by the integral of the squared errors (ISE) between the controlled stream temperatures and their targets (set-points). An optimal ISE point is found at a certain bypass fraction, which does not correspond to the minimal total annualized cost. The bypass fractions are validated via open-loop simulation and the additional cooling and

  18. Computer-Aided Communication Satellite System Analysis and Optimization.

    Science.gov (United States)

    Stagl, Thomas W.; And Others

    Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…

  19. Optimal depth-based regional frequency analysis

    Directory of Open Access Journals (Sweden)

    H. Wazneh

    2013-06-01

    Full Text Available Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region, which can lead to a loss of some information, and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently, based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function φ (e.g., φ minimizing estimation errors). In order to avoid a subjective choice and naïve selection procedures of φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of φ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.

  20. Time Optimal Reachability Analysis Using Swarm Verification

    DEFF Research Database (Denmark)

    Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand

    2016-01-01

    Time optimal reachability analysis employs model-checking to compute goal states that can be reached from an initial state with a minimal accumulated time duration. The model-checker may produce a corresponding diagnostic trace, which can be interpreted as a feasible schedule for many scheduling and planning problems, response time optimization, etc. We propose swarm verification to accelerate time optimal reachability using the real-time model-checker Uppaal. In swarm verification, a large number of model checker instances execute in parallel on a computer cluster using different, typically randomized, search strategies. We develop four swarm algorithms and evaluate them with four models in terms of scalability and time and memory consumption. Three of these cooperate by exchanging costs of intermediate solutions to prune the search using a branch-and-bound approach. Our results show that swarm

  1. Analysis and Optimization of Building Energy Consumption

    Science.gov (United States)

    Chuah, Jun Wei

    Energy is one of the most important resources required by modern human society. In 2010, energy expenditures represented 10% of global gross domestic product (GDP). By 2035, global energy consumption is expected to increase by more than 50% from current levels. The increased pace of global energy consumption leads to significant environmental and socioeconomic issues: (i) carbon emissions, from the burning of fossil fuels for energy, contribute to global warming, and (ii) increased energy expenditures lead to reduced standard of living. Efficient use of energy, through energy conservation measures, is an important step toward mitigating these effects. Residential and commercial buildings represent a prime target for energy conservation, comprising 21% of global energy consumption and 40% of the total energy consumption in the United States. This thesis describes techniques for the analysis and optimization of building energy consumption. The thesis focuses on building retrofits and building energy simulation as key areas in building energy optimization and analysis. The thesis first discusses and evaluates building-level renewable energy generation as a solution toward building energy optimization. The thesis next describes a novel heating system, called localized heating. Under localized heating, building occupants are heated individually by directed radiant heaters, resulting in a considerably reduced heated space and significant heating energy savings. To support localized heating, a minimally-intrusive indoor occupant positioning system is described. The thesis then discusses occupant-level sensing (OLS) as the next frontier in building energy optimization. OLS captures the exact environmental conditions faced by each building occupant, using sensors that are carried by all building occupants. The information provided by OLS enables fine-grained optimization for unprecedented levels of energy efficiency and occupant comfort. The thesis also describes a retrofit

  2. Cluster analysis for portfolio optimization

    OpenAIRE

    Vincenzo Tola; Fabrizio Lillo; Mauro Gallegati; Rosario N. Mantegna

    2005-01-01

    We consider the problem of the statistical uncertainty of the correlation matrix in the optimization of a financial portfolio. We show that the use of clustering algorithms can improve the reliability of the portfolio in terms of the ratio between predicted and realized risk. Bootstrap analysis indicates that this improvement is obtained in a wide range of the parameters N (number of assets) and T (investment horizon). The predicted and realized risk level and the relative portfolio compositi...

  3. Optimizing oil spill cleanup efforts: A tactical approach and evaluation framework.

    Science.gov (United States)

    Grubesic, Tony H; Wei, Ran; Nelson, Jake

    2017-12-15

    Although anthropogenic oil spills vary in size, duration and severity, their broad impacts on complex social, economic and ecological systems can be significant. Questions pertaining to the operational challenges associated with the tactical allocation of human resources, cleanup equipment and supplies to areas impacted by a large spill are particularly salient when developing mitigation strategies for extreme oiling events. The purpose of this paper is to illustrate the application of advanced oil spill modeling techniques in combination with a developed mathematical model to spatially optimize the allocation of response crews and equipment for cleaning up an offshore oil spill. The results suggest that the detailed simulations and optimization model are a good first step in allowing both communities and emergency responders to proactively plan for extreme oiling events and develop response strategies that minimize the impacts of spills. Copyright © 2017 Elsevier Ltd. All rights reserved.
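
    A stripped-down version of such a tactical allocation can be posed as a linear program. In the sketch below (zones, recovery rates, and caps are invented, not taken from the paper's model), crews are assigned to impacted zones to maximize expected recovery under a total-crew budget.

```python
from scipy.optimize import linprog

# Hypothetical tactical allocation: assign response crews to impacted
# zones to maximize expected oil recovered under a total-crew budget.

recovery_per_crew = [12.0, 9.0, 15.0, 6.0]  # barrels/day per crew, by zone
zone_cap = [10, 8, 6, 12]                   # max useful crews per zone
total_crews = 20

# linprog minimizes, so negate the recovery objective.
res = linprog(
    c=[-r for r in recovery_per_crew],
    A_ub=[[1, 1, 1, 1]], b_ub=[total_crews],
    bounds=[(0, cap) for cap in zone_cap],
    method="highs",
)
print("crews per zone:", [round(x, 1) for x in res.x])
print("barrels/day recovered:", round(-res.fun, 1))
```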

  4. Analysis and optimization of kinematic pair force in control rod drive mechanism

    International Nuclear Information System (INIS)

    Sun Zhenguo; Liu Sen; Ran Xiaobing; Dai Changnian; Li Yuezhong

    2015-01-01

    Function expressions for the kinematic pair forces in terms of latch dimensions, friction coefficient, link angle and external load were obtained by theoretical analysis, and the expressions were verified with motion analysis software. Key parameters of the kinematic pairs were identified, and the trends of their effects on the forces on the parts were obtained. They show that a practical method of kinematic pair optimization is to increase the space of the latch holes. Using the motion analysis software, the forces on the parts before and after optimization were compared. The result shows that the forces on the parts were improved by the optimization. (authors)

  5. Respiratory effort from the photoplethysmogram.

    Science.gov (United States)

    Addison, Paul S

    2017-03-01

    The potential for a simple, non-invasive measure of respiratory effort based on the pulse oximeter signal - the photoplethysmogram or 'pleth' - was investigated in a pilot study. Several parameters were developed based on a variety of manifestations of respiratory effort in the signal, including modulation changes in amplitude, baseline, frequency and pulse transit times, as well as distinct baseline signal shifts. Thirteen candidate parameters were investigated using data from healthy volunteers. Each volunteer underwent a series of controlled respiratory effort maneuvers at various set flow resistances and respiratory rates. Six oximeter probes were tested at various body sites. In all, over three thousand pleth-based effort-airway pressure (EP) curves were generated across the various airway constrictions, respiratory efforts, respiratory rates, subjects, probe sites, and the candidate parameters considered. Regression analysis was performed to determine the existence of positive monotonic relationships between the respiratory effort parameters and resulting airway pressures. Six of the candidate parameters investigated exhibited a distinct positive relationship, including two based on pulse transit time, one using a pulse oximeter probe and an ECG (P2E-Effort) and the other using two pulse oximeter probes placed at different peripheral body sites (P2-Effort), and baseline shifts in heart rate (BL-HR-Effort). In conclusion, a clear monotonic relationship was found between several pleth-based parameters and imposed respiratory loadings at the mouth across a range of respiratory rates and flow constrictions. The results suggest that the pleth may provide a measure of changing upper airway dynamics indicative of the effort to breathe. Copyright © 2017 The Author. Published by Elsevier Ltd. All rights reserved.

  6. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Energy Technology Data Exchange (ETDEWEB)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric

    2017-03-28

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
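
    A minimal feedstock-slate LP of the kind described can be written in a few lines. In the sketch below (all prices, ash fractions, and supplies are invented, not NREL/INL data), the blend must meet a throughput target and a blend-average ash limit at minimum delivered cost.

```python
from scipy.optimize import linprog

# Feedstock-slate LP (all numbers invented): meet a throughput target
# and a blend-average ash limit at minimum delivered cost.

cost = [55.0, 48.0, 62.0]         # $/dry ton delivered
ash = [0.06, 0.12, 0.04]          # ash mass fraction per feedstock
supply = [1500.0, 1200.0, 900.0]  # available dry tons/day
throughput = 2000.0               # required dry tons/day
max_ash = 0.08                    # blend-average ash limit

res = linprog(
    c=cost,
    A_ub=[[a - max_ash for a in ash]],   # sum((ash_i - limit) * x_i) <= 0
    b_ub=[0.0],
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[throughput],
    bounds=[(0.0, s) for s in supply],
    method="highs",
)
print("blend (tons/day):", [round(x) for x in res.x], "cost ($/day):", round(res.fun))
```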

  7. Joint pricing, inventory, and preservation decisions for deteriorating items with stochastic demand and promotional efforts

    Science.gov (United States)

    Soni, Hardik N.; Chauhan, Ashaba D.

    2018-03-01

    This study models a joint pricing, inventory, and preservation decision-making problem for deteriorating items subject to stochastic demand and promotional effort. Generalized price-dependent stochastic demand, time-proportional deterioration, and partial backlogging rates are used to model the inventory system. The objective is to find the optimal pricing, replenishment, and preservation technology investment strategies that maximize the total profit per unit time. For the partial backlogging and lost sale cases, we first deduce the criterion for optimal replenishment schedules for any given price and technology investment cost. Second, we show that the total profit per unit time is a concave function of price and of preservation technology cost, respectively. Finally, numerical examples and the results of a sensitivity analysis illustrate the features of the proposed model.

  8. Trafficability Analysis at Traffic Crossing and Parameters Optimization Based on Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Bin He

    2014-01-01

    Full Text Available In city traffic it is important to improve transportation efficiency, and the spacing of a platoon should be shortened when crossing the street. The best method to deal with this problem is automatic control of vehicles. In this paper, a mathematical model is established for the platoon's longitudinal movement, and a systematic analysis of the longitudinal control law is presented for the platoon of vehicles. However, parameter calibration for the platoon model is relatively difficult because the model is complex and the parameters are coupled with each other. In this paper, the particle swarm optimization method is introduced to effectively optimize the parameters of the platoon. The proposed method effectively finds the optimal parameters based on simulations and makes the spacing of the platoon shorter.
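
    A generic particle swarm optimizer of the kind used for such calibration is sketched below; the objective is a stand-in (the real one would score simulated platoon spacing against desired behavior), and the coefficients are conventional defaults rather than the paper's settings.

```python
import numpy as np

# Generic PSO; the objective is a stand-in for a platoon-model fit score.

def pso(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Stand-in: recover three "true" coupled parameters via a quadratic loss.
objective = lambda p: float(np.sum((p - np.array([0.5, 1.2, 0.3])) ** 2))
params, err = pso(objective, bounds=[(0.0, 2.0)] * 3)
print(params.round(3), err)
```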

  9. Dissociating variability and effort as determinants of coordination.

    Directory of Open Access Journals (Sweden)

    Ian O'Sullivan

    2009-04-01

    Full Text Available When coordinating movements, the nervous system often has to decide how to distribute work across a number of redundant effectors. Here, we show that humans solve this problem by trying to minimize both the variability of motor output and the effort involved. In previous studies that investigated the temporal shape of movements, these two selective pressures, despite having very different theoretical implications, could not be distinguished; because noise in the motor system increases with the motor commands, minimization of effort or variability leads to very similar predictions. When multiple effectors with different noise and effort characteristics have to be combined, however, these two cost terms can be dissociated. Here, we measure the importance of variability and effort in coordination by studying how humans share force production between two fingers. To capture variability, we identified the coefficient of variation of the index and little fingers. For effort, we used the sum of squared forces and the sum of squared forces normalized by the maximum strength of each effector. These terms were then used to predict the optimal force distribution for a task in which participants had to produce a target total force of 4-16 N by pressing onto two isometric transducers using different combinations of fingers. By comparing the predicted distribution across fingers to the actual distribution chosen by participants, we were able to estimate the relative importance of variability and effort at 1:7, with the unnormalized effort being most important. Our results indicate that the nervous system uses multi-effector redundancy to minimize both the variability of the produced output and effort, although effort costs clearly outweighed variability costs.
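
    The cost trade-off being estimated can be written down directly. The sketch below (all coefficients are invented; only the 1:7 variability-to-effort weighting echoes the abstract) splits a target force between two fingers by minimizing a weighted sum of signal-dependent variance and squared-force effort under the constraint that the forces sum to the target.

```python
import numpy as np
from scipy.optimize import minimize

# Split a target force F between two fingers, minimizing
#   alpha * variance + beta * effort,
# with variance from signal-dependent noise (sd = CV * force) and
# effort as summed squared force. Coefficients are invented.

cv = np.array([0.05, 0.12])   # coefficient of variation: index, little
alpha, beta = 1.0, 7.0        # weights on variance and effort
F = 8.0                       # target total force (N)

def cost(f):
    variance = np.sum((cv * f) ** 2)
    effort = np.sum(f ** 2)
    return alpha * variance + beta * effort

res = minimize(
    cost, x0=[F / 2, F / 2],
    constraints=[{"type": "eq", "fun": lambda f: f.sum() - F}],
    bounds=[(0.0, F), (0.0, F)],
)
print("index/little split (N):", res.x.round(2))
```

    With the effort weight dominating, the optimum sits near an equal split, tilted slightly toward the less noisy finger, which mirrors the qualitative finding that effort costs outweighed variability costs.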

  10. Optimizing Nuclear Reaction Analysis (NRA) using Bayesian Experimental Design

    International Nuclear Information System (INIS)

    Toussaint, Udo von; Schwarz-Selinger, Thomas; Gori, Silvio

    2008-01-01

    Nuclear Reaction Analysis with ³He holds the promise of measuring deuterium depth profiles up to large depths. However, the extraction of the depth profile from the measured data is an ill-posed inversion problem. Here we demonstrate how Bayesian Experimental Design can be used to optimize the number of measurements as well as the measurement energies to maximize the information gain. Comparison of the inversion properties of the optimized design with standard settings reveals large possible gains. Application of the posterior sampling method allows the experimental settings to be optimized interactively during the measurement process.
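
    A toy version of the design question can be phrased as classical D-optimality, a simpler stand-in for the full Bayesian information-gain criterion. In the sketch below, each candidate beam energy probes the depth bins with different sensitivities (the sensitivity model and all numbers are invented), and we select the energy subset whose design matrix carries the most information.

```python
import numpy as np
from itertools import combinations

# Toy D-optimal stand-in for Bayesian design: pick the subset of beam
# energies whose (invented) depth-bin sensitivities form the most
# informative design matrix, i.e., maximize det(W^T W).

energies = np.array([0.7, 1.2, 1.8, 2.5, 3.3, 4.2])  # MeV candidates
depth_bins = np.array([0.5, 1.5, 3.0])                # um, profile bins

probe_depth = 0.8 * energies                          # um (made up)
W = np.exp(-((probe_depth[:, None] - depth_bins[None, :]) ** 2) / 0.8)

best = max(
    combinations(range(len(energies)), 3),
    key=lambda idx: np.linalg.det(W[list(idx)].T @ W[list(idx)]),
)
print("most informative energies (MeV):", energies[list(best)])
```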

  11. Damping layout optimization for ship's cabin noise reduction based on statistical energy analysis

    Directory of Open Access Journals (Sweden)

    WU Weiguo

    2017-08-01

    Full Text Available An optimization study of the damping control of ship cabin noise was carried out in order to improve the noise reduction effect and reduce the weight of the damping material. Based on the Statistical Energy Analysis (SEA) method, a theoretical derivation and numerical analysis of the first-order sensitivity of the A-weighted sound pressure level with respect to the damping loss factor of each subsystem were carried out. On this basis, a mathematical optimization model was proposed and an optimization program developed. Next, the secondary development of the VA One software was implemented with MATLAB, and a cabin noise damping control layout optimization system was established. Finally, the optimization model of the ship was constructed and numerical experiments on damping control optimization were conducted. The damping installation region was divided into five parts with different damping thicknesses. The total weight of the damping was set as the objective function and the A-weighted sound pressure level of the target cabin was set as a constraint. The best damping thicknesses were obtained through the optimization program, and the total damping weight was reduced by 60.4%. The results show that the noise reduction effect per unit weight of damping is significantly improved by the optimization method. This research solves the installation position and thickness selection problems in the acoustic design of damping control, providing a reliable analysis method and guidance for the design.

  12. ACT Payload Shroud Structural Concept Analysis and Optimization

    Science.gov (United States)

    Zalewski, Bart B.; Bednarcyk, Brett A.

    2010-01-01

    Aerospace structural applications demand weight-efficient designs that perform in a cost-effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents the structural analysis and weight optimization of a composite payload shroud for NASA's Ares V heavy lift vehicle. Two concepts, which were previously determined to be efficient for such a structure, are evaluated: a hat-stiffened/corrugated panel and a fiber-reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs against multiple strength- and stability-based failure criteria across multiple load cases. The HyperSizer sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near-optimum design is selected as the one with the lowest weight that also provides all-positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis, and the analysis/optimization loop is iterated to ensure a converged design. Sizing results for the hat-stiffened panel concept and the fiber-reinforced foam sandwich concept are presented.

  13. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
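
    Sample entropy itself is straightforward to compute. The sketch below is a plain implementation of the standard definition (my own code, not DASim's): SampEn(m, r) = -ln(A/B), where B counts template pairs of length m within tolerance r and A counts the same for length m + 1.

```python
import numpy as np

# Standard sample entropy: SampEn(m, r) = -ln(A / B), with B the count
# of template pairs of length m within Chebyshev tolerance r and A the
# count for length m + 1 (self-matches excluded).

def sampen(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c
    A, B = count(m + 1), count(m)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0.0, 20.0 * np.pi, 400))   # highly regular
irregular = rng.standard_normal(400)                    # highly irregular
print(sampen(regular), sampen(irregular))               # low vs. high
```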

  14. An analytical guidance law of planetary landing mission by minimizing the control effort expenditure

    International Nuclear Information System (INIS)

    Afshari, Hamed Hossein; Novinzadeh, Alireza Basohbat; Roshanian, Jafar

    2009-01-01

    An optimal trajectory design for a module in the planetary landing problem is achieved by minimizing the control effort expenditure. Using the calculus of variations, the control variable is expressed as a function of the costate variables, and the problem is converted into a two-point boundary-value problem. To solve this problem, the performance measure is approximated by a trigonometric series, and the optimal control and state trajectories are then determined. To validate the accuracy of the proposed solution, the numerical method of steepest descent is utilized. The main objective of this paper is to present a novel analytic guidance law for the planetary landing mission that minimizes the control effort expenditure. Finally, an example of a lunar landing mission is presented to examine the results of this solution in practical situations.
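
    The flavor of the variational result can be reproduced on a one-dimensional double integrator (a drastic simplification of the paper's dynamics, with invented boundary values): minimizing the integral of u² with fixed endpoint position and velocity forces the optimal control to be linear in time, so the two terminal conditions fix its two coefficients.

```python
import numpy as np

# 1-D double integrator, fixed endpoints, minimize integral of u^2:
# the Euler-Lagrange/costate conditions make u(t) = a + b*t, and the
# two terminal conditions determine a and b (boundary values invented).

h0, v0, T = 100.0, -5.0, 20.0   # initial altitude (m), velocity (m/s), time (s)

# v(T) = v0 + a*T + b*T**2/2 = 0
# x(T) = h0 + v0*T + a*T**2/2 + b*T**3/6 = 0
A = np.array([[T, T**2 / 2.0],
              [T**2 / 2.0, T**3 / 6.0]])
rhs = np.array([-v0, -(h0 + v0 * T)])
a, b = np.linalg.solve(A, rhs)

t = np.linspace(0.0, T, 5)
print("u(t) samples:", (a + b * t).round(3))
print("final position:", round(h0 + v0 * T + a * T**2 / 2 + b * T**3 / 6, 9))
print("final velocity:", round(v0 + a * T + b * T**2 / 2, 9))
```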

  15. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    Science.gov (United States)

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

    River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis and Euclidean distance, and the Hun River was taken as a validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH₄⁺-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized monitoring network correctly represented the original one. The redundancy among monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. The optimization method was therefore shown to be feasible, efficient, and economical.
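
    The site-reduction step can be illustrated with generic clustering plus a nearest-to-center rule; this is my stand-in for the paper's optimal partition analysis, with random data in place of the real water-quality records.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Reduce 7 sites to 3 representatives: cluster standardized site
# profiles, then keep the site closest (Euclidean) to each cluster
# mean. Random data stand in for the 17 monitoring items.

rng = np.random.default_rng(3)
sites = [f"S{i}" for i in range(1, 8)]
data = rng.normal(size=(7, 17))
z = (data - data.mean(axis=0)) / data.std(axis=0)

labels = fcluster(linkage(z, method="ward"), t=3, criterion="maxclust")

keep = []
for g in np.unique(labels):
    idx = np.where(labels == g)[0]
    center = z[idx].mean(axis=0)
    keep.append(sites[idx[np.argmin(np.linalg.norm(z[idx] - center, axis=1))]])
print("representative sites:", keep)
```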

  16. Pocket money and child effort at school

    OpenAIRE

    François-Charles Wolff; Christine Barnet-Verzat

    2008-01-01

    In this paper, we study the relationship between the provision of parental pocket money and the level of effort undertaken by the child at school. Under altruism, an increased amount of parental transfer should reduce the child's effort. Our empirical analysis is based on a French data set including about 1,400 parent-child pairs. We find that children do not undertake less effort when their parents are more generous.

  17. Multidisciplinary Modeling Software for Analysis, Design, and Optimization of HRRLS Vehicles

    Science.gov (United States)

    Spradley, Lawrence W.; Lohner, Rainald; Hunt, James L.

    2011-01-01

    The concept for Highly Reliable Reusable Launch Systems (HRRLS) under the NASA Hypersonics project is a two-stage-to-orbit, horizontal-take-off / horizontal-landing (HTHL) architecture with an air-breathing first stage. The first-stage vehicle is a slender body with an air-breathing propulsion system that is highly integrated with the airframe. The lightweight slender body will deflect significantly during flight. This global deflection affects the flow over the vehicle and into the engine and thus the loads and moments on the vehicle. High-fidelity multi-disciplinary analyses that account for these fluid-structure-thermal interactions are required to accurately predict the vehicle loads and the resultant response. These predictions of vehicle response to multi-physics loads, calculated with fluid-structure-thermal interaction, are required in order to optimize the vehicle design over its full operating range. This contract with ResearchSouth addresses one of the primary objectives of the Vehicle Technology Integration (VTI) discipline: the development of high-fidelity multi-disciplinary analysis and optimization methods and tools for HRRLS vehicles. The primary goal of this effort is the development of an integrated software system that can be used for full-vehicle optimization. This goal was accomplished by: 1) integrating the master code, FEMAP, into the multidiscipline software network to direct the coupling and assure accurate fluid-structure-thermal interaction solutions; 2) loosely coupling the Euler flow solver FEFLO to the available and proven aeroelasticity and large-deformation (FEAP) code; 3) providing a coupled Euler-boundary-layer capability for rapid viscous flow simulation; 4) developing and implementing improved Euler/RANS algorithms in the FEFLO CFD code to provide accurate shock-capturing, skin friction, and heat-transfer predictions for HRRLS vehicles in hypersonic flow; 5) performing a Reynolds-averaged Navier-Stokes computation on an HRRLS

  18. On the relation between flexibility analysis and robust optimization for linear systems

    KAUST Repository

    Zhang, Qi

    2016-03-05

    Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee feasibility. The connection between these two approaches has not been sufficiently acknowledged and examined in the literature. In this context, the contributions of this work are fourfold: (1) a comparison between flexibility analysis and robust optimization from a historical perspective is presented; (2) for linear systems, new formulations for the three classical flexibility analysis problems—flexibility test, flexibility index, and design under uncertainty—based on duality theory and the affinely adjustable robust optimization (AARO) approach are proposed; (3) the AARO approach is shown to be generally more restrictive such that it may lead to overly conservative solutions; (4) numerical examples show the improved computational performance from the proposed formulations compared to the traditional flexibility analysis models. © 2016 American Institute of Chemical Engineers AIChE J, 62: 3109–3123, 2016
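
    For linear constraints with box-bounded uncertainty, the shared worst-case idea reduces to a simple coefficient tightening. The sketch below is a minimal illustration with hypothetical data, not the paper's formulations: for a constraint (a + theta * d) @ x <= b with |theta_i| <= 1 and x >= 0, the worst case over the box is (a + d) @ x <= b, so the robust counterpart is just a tightened LP.

        # Nominal LP vs. its robust counterpart under box uncertainty; data hypothetical.
        import numpy as np
        from scipy.optimize import linprog

        c = np.array([-1.0, -2.0])          # maximize x1 + 2*x2  ->  minimize c @ x
        a = np.array([1.0, 3.0])            # nominal constraint row, a @ x <= b
        d = np.array([0.2, 0.5])            # uncertainty half-widths on the coefficients
        b = 10.0

        # Nominal problem.
        nom = linprog(c, A_ub=[a], b_ub=[b], bounds=[(0, None), (0, None)], method="highs")

        # Robust counterpart: with x >= 0, the worst case of (a + theta*d) @ x <= b
        # over |theta_i| <= 1 is attained at theta = 1, i.e. tightened coefficients.
        rob = linprog(c, A_ub=[a + d], b_ub=[b], bounds=[(0, None), (0, None)], method="highs")

        print("nominal x:", nom.x, " robust x:", rob.x)

    For general polyhedral uncertainty sets the tightening is instead obtained from the LP dual of the inner maximization, which is the duality-based construction the paper builds its new flexibility formulations on.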

  19. PROBABILISTIC RISK ANALYSIS OF REMEDIATION EFFORTS IN NAPL SITES

    Science.gov (United States)

    Fernandez-Garcia, D.; de Vries, L.; Pool, M.; Sapriza, G.; Sanchez-Vila, X.; Bolster, D.; Tartakovsky, D. M.

    2009-12-01

    The release of non-aqueous phase liquids (NAPLs) such as petroleum hydrocarbons and chlorinated solvents in the subsurface is a severe source of groundwater and vapor contamination. Because these liquids are essentially immiscible and of low solubility, the contaminants dissolve slowly into groundwater and/or volatilize in the vadose zone, threatening the environment and public health over a long period. Many remediation technologies and strategies have been developed in recent decades for restoring the water quality of these contaminated sites. The failure of an on-site treatment technology is often due to the unnoticed presence of dissolved NAPL entrapped in low-permeability areas (heterogeneity) and/or to substantial amounts of pure phase remaining after remediation efforts. A full understanding of the impact of remediation efforts is complicated by the role of many interlinked physical and biochemical processes acting through several potential pathways of exposure to multiple receptors in a highly uncertain heterogeneous environment. Due to these difficulties, the design of remediation strategies and the definition of remediation endpoints have traditionally been determined without quantifying the risk associated with the failure of such efforts. We conduct a probabilistic risk assessment of the likelihood of success of an on-site NAPL treatment technology that readily integrates all aspects of the problem (causes, pathways, and receptors). The methodology thus allows combining the probability of failure of a remediation effort due to multiple causes, each associated with several pathways and receptors.

  20. Analysis and optimization of blood-testing procedures.

    NARCIS (Netherlands)

    Bar-Lev, S.K.; Boxma, O.J.; Perry, D.; Vastazos, L.P.

    2017-01-01

    This paper is devoted to the performance analysis and optimization of blood testing procedures. We present a queueing model of two queues in series, representing the two stages of a blood-testing procedure. Service (testing) in stage 1 is performed in batches, whereas it is done individually in stage 2.

  1. Optimal Design, Reliability And Sensitivity Analysis Of Foundation Plate

    Directory of Open Access Journals (Sweden)

    Tvrdá Katarína

    2015-12-01

    Full Text Available This paper deals with the optimal design of the thickness of a plate resting on a Winkler foundation. A first-order method was used for the optimization, while maintaining different restrictive conditions. The aim is to obtain a minimum volume of the foundation plate. Finally, probabilistic and safety analyses of the foundation deflection using the LHS Monte Carlo method are presented.
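
    A minimal sketch of the LHS Monte Carlo step is given below, assuming a toy closed-form deflection w = q/k in place of the plate finite element solution; the distributions, bounds, and deflection limit are all hypothetical.

        # Latin Hypercube sampling of uncertain stiffness and load; toy deflection model.
        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=2, seed=0)
        u = sampler.random(n=1000)                      # LHS points in [0, 1]^2
        lo = [20e6, 50e3]                               # k [N/m^3], q [N/m^2]
        hi = [60e6, 90e3]
        samples = qmc.scale(u, lo, hi)

        k, q = samples[:, 0], samples[:, 1]
        w = q / k                                       # Winkler-type deflection proxy
        w_limit = 2.5e-3                                # hypothetical 2.5 mm limit

        print("mean deflection [mm]:", 1e3 * w.mean())
        print("P(w > limit):", np.mean(w > w_limit))    # simple failure-probability estimate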

  2. Joint optimization of algorithmic suites for EEG analysis.

    Science.gov (United States)

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable.

  3. The Effort Paradox: Effort Is Both Costly and Valued.

    Science.gov (United States)

    Inzlicht, Michael; Shenhav, Amitai; Olivola, Christopher Y

    2018-04-01

    According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort, sometimes we select options precisely because they require effort. Given the increasing recognition of effort's role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Multiobjective Optimization of ELID Grinding Process Using Grey Relational Analysis Coupled with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    S. Prabhu

    2014-06-01

    Full Text Available A carbon nanotube (CNT) mixed grinding wheel has been used in the electrolytic in-process dressing (ELID) grinding process to analyze the surface characteristics of AISI D2 tool steel. The CNT grinding wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. Multiobjective optimization by grey relational analysis coupled with principal component analysis has been used to optimize the process parameters of the ELID grinding process. Based on the Taguchi design of experiments, an L9 orthogonal array was chosen for the experiments. The confirmation experiment verifies that the proposed grey-based Taguchi method can find the optimal process parameters for the multiple quality characteristics of surface roughness and metal removal rate. Analysis of variance (ANOVA) has been used to verify and validate the model. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results with and without the CNT grinding wheel in the ELID grinding process were compared.
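
    The grey relational computation itself is short. The following sketch uses hypothetical L9 responses for surface roughness (smaller-the-better) and metal removal rate (larger-the-better), with equal weights standing in for the PCA-derived weights of the paper.

        # Grey relational analysis over hypothetical L9 runs.
        import numpy as np

        Ra  = np.array([0.82, 0.74, 0.91, 0.66, 0.70, 0.88, 0.61, 0.79, 0.85])  # um
        MRR = np.array([12.0, 14.5, 10.2, 16.1, 15.0, 11.3, 17.2, 13.4, 12.6])  # mm^3/min

        def normalize(x, larger_better):
            if larger_better:
                return (x - x.min()) / (x.max() - x.min())
            return (x.max() - x) / (x.max() - x.min())

        N = np.column_stack([normalize(Ra, False), normalize(MRR, True)])
        delta = 1.0 - N                          # deviation from the ideal sequence
        zeta = 0.5                               # distinguishing coefficient
        coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        weights = np.array([0.5, 0.5])           # replace with PCA weights if available
        grade = coeff @ weights                  # grey relational grade per run

        print("best run:", int(np.argmax(grade)) + 1, "grade:", round(grade.max(), 3))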

  5. Global Optimization using Interval Analysis : Interval Optimization for Aerospace Applications

    NARCIS (Netherlands)

    Van Kampen, E.

    2010-01-01

    Optimization is an important element in aerospace-related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in

  6. Experiments Planning, Analysis, and Optimization

    CERN Document Server

    Wu, C F Jeff

    2011-01-01

    Praise for the First Edition: "If you . . . want an up-to-date, definitive reference written by authors who have contributed much to this field, then this book is an essential addition to your library."-Journal of the American Statistical Association Fully updated to reflect the major progress in the use of statistically designed experiments for product and process improvement, Experiments, Second Edition introduces some of the newest discoveries-and sheds further light on existing ones-on the design and analysis of experiments and their applications in system optimization, robustness, and tre

  7. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  8. Analysis of a Fishery Model with two competing prey species in the presence of a predator species for Optimal Harvesting

    Science.gov (United States)

    Sutimin; Khabibah, Siti; Munawwaroh, Dita Anis

    2018-02-01

    A harvesting fishery model is proposed to analyze the effects of the presence of a red devil fish population, as a predator, in an ecosystem. In this paper, we consider an ecological model of three species, taking into account two competing species and the presence of a predator (red devil), the third species, and incorporating the harvesting effort for each fish species. The stability of the dynamical system is discussed, and the existence of biological and bionomic equilibria is examined. The optimal harvest policy is studied, and the solution in the equilibrium case is derived by applying Pontryagin's maximum principle. Simulation results are presented to illustrate the dynamical behavior of the model and show that the optimal equilibrium solution is globally asymptotically stable. The results show that the optimal harvesting effort is obtained with respect to the bionomic and biological equilibria.
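
    A minimal simulation sketch of such two-prey/one-predator dynamics with constant harvesting effort is shown below; every coefficient is hypothetical, chosen only to illustrate the model structure, and is not taken from the paper.

        # Two competing prey, one predator, constant harvesting efforts; toy parameters.
        from scipy.integrate import solve_ivp

        r1, r2, K1, K2 = 1.0, 0.8, 50.0, 40.0     # prey growth rates, capacities
        a12, a21 = 0.01, 0.012                    # inter-prey competition
        b1, b2, c1, c2, m = 0.02, 0.015, 0.01, 0.008, 0.3
        E1, E2, E3, q = 0.2, 0.2, 0.1, 1.0        # harvesting efforts, catchability

        def rhs(t, y):
            x1, x2, p = y
            dx1 = r1*x1*(1 - x1/K1) - a12*x1*x2 - b1*x1*p - q*E1*x1
            dx2 = r2*x2*(1 - x2/K2) - a21*x1*x2 - b2*x2*p - q*E2*x2
            dp  = c1*x1*p + c2*x2*p - m*p - q*E3*p
            return [dx1, dx2, dp]

        sol = solve_ivp(rhs, (0, 200), [20.0, 15.0, 5.0])
        print("state near equilibrium:", sol.y[:, -1].round(3))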

  9. Economic analysis of alternatives for optimizing energy use in manufacturing companies

    International Nuclear Information System (INIS)

    Méndez-Piñero, Mayra Ivelisse; Colón-Vázquez, Melitza

    2013-01-01

    Manufacturing companies are among the main consumers of energy. Growing concern over global warming and instability in the petroleum market have motivated companies to find alternatives to reduce energy use. In the academic literature, several researchers have demonstrated that optimization models can be successfully used to reduce energy use. This research presents the use of an optimization model to identify economically feasible alternatives to reduce energy use. The economic analysis methods used were the payback period and the internal rate of return. The optimization model developed in this research was applied and validated using an electronic manufacturing company case study. The results demonstrate that the main variables affecting the economic feasibility of the alternatives are the economic analysis method and the initial implementation costs. Several scenarios were analyzed, and the best results show that the manufacturing company could save up to $78,000 in three years if the recommendations based on the optimization model results are implemented. - Highlights: • Evaluate top consumers of energy in manufacturing: A/C, compressed air, and lighting • Economic analysis of alternatives to optimize energy used in manufacturing • Comparison of payback method and internal rate of return method with real data • Results demonstrate that the company could generate savings in energy use
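
    Both appraisal methods are easy to reproduce. The sketch below computes a simple payback period and an internal rate of return for a hypothetical energy-saving retrofit cash flow; none of the figures are from the case study.

        # Payback and IRR for a hypothetical retrofit: year-0 outlay, 3 years of savings.
        import numpy as np
        from scipy.optimize import brentq

        cash = np.array([-50_000.0, 26_000.0, 26_000.0, 26_000.0])

        # Simple payback: first year in which cumulative cash flow turns non-negative.
        cum = np.cumsum(cash)
        print("payback within year:", int(np.argmax(cum >= 0)))   # year 0 is the outlay

        # IRR: the rate r with NPV(r) = 0, found by root bracketing.
        def npv(r):
            t = np.arange(len(cash))
            return float(np.sum(cash / (1.0 + r) ** t))

        irr = brentq(npv, -0.9, 10.0)
        print("IRR: {:.1%}".format(irr))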

  10. The costs of parental care: a meta-analysis of the trade-off between parental effort and survival in birds.

    Science.gov (United States)

    Santos, E S A; Nakagawa, S

    2012-09-01

    A fundamental premise of life-history theory is that organisms that increase current reproductive investment suffer increased mortality. Possibly the most studied life-history phenotypic relationship is the trade-off between parental effort and survival. However, evidence supporting this trade-off is equivocal. Here, we conducted a meta-analysis to test the generality of this tenet. Using experimental studies that manipulated parental effort in birds, we show that (i) the effect of parental effort on survival was similar across species regardless of phylogeny; (ii) individuals that experienced reduced parental effort had survival probabilities similar to those of control individuals, regardless of sex; and (iii) males that experienced increased parental effort were less likely to survive than control males, whereas females that experienced increased effort were just as likely to survive as control females. Our results suggest that the trade-off between parental effort and survival is more complex than previously assumed. Finally, our study provides recommendations of unexplored avenues of future research into life-history trade-offs. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.

  11. Analysis and optimization of a camber morphing wing model

    Directory of Open Access Journals (Sweden)

    Bing Li

    2016-09-01

    Full Text Available This article proposes a camber morphing wing model that can continuously change its camber. A mathematical model is proposed and a kinematic simulation is performed to verify the wing’s ability to change camber. An aerodynamic model is used to test its aerodynamic characteristics. Some important aerodynamic analyses are performed. A comparative analysis is conducted to explore the relationships between aerodynamic parameters, the rotation angle of the trailing edge, and the angle of attack. An improved artificial fish swarm optimization algorithm is proposed, referred to as the weighted adaptive artificial fish-swarm with embedded Hooke–Jeeves search method. Some comparison tests are used to test the performance of the improved optimization algorithm. Finally, the proposed optimization algorithm is used to optimize the proposed camber morphing wing model.
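
    The embedded Hooke–Jeeves pattern search can be sketched compactly. The following is a generic implementation on a hypothetical test function, not the authors' weighted adaptive fish-swarm hybrid.

        # Generic Hooke-Jeeves pattern search; test function is hypothetical.
        import numpy as np

        def explore(f, x, fx, step):
            # Exploratory move: probe +/- step along each coordinate, keeping gains.
            x = x.copy()
            for i in range(len(x)):
                for d in (step, -step):
                    trial = x.copy()
                    trial[i] += d
                    ft = f(trial)
                    if ft < fx:
                        x, fx = trial, ft
                        break
            return x, fx

        def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-8):
            base = np.asarray(x0, dtype=float)
            fbase = f(base)
            while step > tol:
                new, fnew = explore(f, base, fbase, step)
                if fnew < fbase:
                    # Pattern move: jump along the successful direction, re-explore there.
                    pattern = new + (new - base)
                    pat, fpat = explore(f, pattern, f(pattern), step)
                    base, fbase = (pat, fpat) if fpat < fnew else (new, fnew)
                else:
                    step *= shrink            # no progress: contract the mesh
            return base, fbase

        x, fx = hooke_jeeves(lambda v: (v[0] - 1.0)**2 + 10.0*(v[1] + 2.0)**2, [0.0, 0.0])
        print(x.round(5), round(fx, 10))      # converges to (1, -2)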

  12. Optimization and industry new frontiers

    CERN Document Server

    Korotkikh, Victor

    2003-01-01

    Optimization from Human Genes to Cutting Edge Technologies The challenges faced by industry today are so complex that they can only be solved through the help and participation of optimization experts. For example, many industries in e-commerce, finance, medicine, and engineering face several computational challenges due to the massive data sets that arise in their applications. Some of the challenges include extended memory algorithms and data structures, new programming environments, software systems, cryptographic protocols, storage devices, data compression, mathematical and statistical methods for knowledge mining, and information visualization. With advances in computer and information systems technologies, and many interdisciplinary efforts, many of the "data avalanche challenges" are beginning to be addressed. Optimization is the most crucial component in these efforts. Nowadays, the main task of optimization is to investigate the cutting edge frontiers of these technologies and systems ...

  13. Truss topology optimization with simultaneous analysis and design

    Science.gov (United States)

    Sankaranarayanan, S.; Haftka, Raphael T.; Kapania, Rakesh K.

    1992-01-01

    Strategies for topology optimization of trusses for minimum weight subject to stress and displacement constraints by Simultaneous Analysis and Design (SAND) are considered. The ground structure approach is used. A penalty function formulation of SAND is compared with an augmented Lagrangian formulation. The efficiency of SAND in handling combinations of general constraints is tested. A strategy for obtaining an optimal topology by minimizing the compliance of the truss is compared with a direct weight minimization solution to satisfy stress and displacement constraints. It is shown that for some problems, starting from the ground structure and using SAND is better than starting from a minimum compliance topology design and optimizing only the cross sections for minimum weight under stress and displacement constraints. A member elimination strategy to save CPU time is discussed.

  14. Three-dimensional optimization and sensitivity analysis of dental implant thread parameters using finite element analysis.

    Science.gov (United States)

    Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi

    2018-04-01

    This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution by means of a response surface optimization method available in finite element (FE) software. The sensitivity of the simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implants. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, micro-threads had 0.25 to 0.3 mm depth and 0.27 to 0.33 mm pitch, and V-shaped threads had 0.405 to 0.495 mm depth and 0.66 to 0.8 mm pitch. The optimized depth and pitch were 0.307 and 0.286 mm for micro-threads and 0.405 and 0.808 mm for V-shaped threads, respectively. According to the sensitivity analysis, the parameters with the greatest effect on stress distribution in this design were the depth and pitch of the micro-threads. Based on the results of this study, the optimal implant design has micro-threads with 0.307 and 0.286 mm depth and pitch, respectively, in the upper area and V-shaped threads with 0.405 and 0.808 mm depth and pitch in the rest of the body. These results indicate that micro-thread parameters have a greater effect on stress and strain values.

  15. Dynamic optimization of a FCC converter unit: numerical analysis

    Directory of Open Access Journals (Sweden)

    E. Almeida Nt

    2011-03-01

    Full Text Available Fluidized-bed Catalytic Cracking (FCC) is a process subject to frequent variations in operating conditions (including feed quality and feed rate). The production objectives are usually the maximization of LPG and gasoline production. This makes the FCC converter unit an excellent opportunity for real-time optimization. The present work aims to apply dynamic optimization to an industrial FCC converter unit, using a mechanistic dynamic model, and to carry out a numerical analysis of the solution procedure. A simultaneous approach was used to discretize the system of differential-algebraic equations, and the resulting large-scale NLP problem was solved using the IPOPT solver. This study also presents a brief comparison between the results obtained by a potential dynamic real-time optimization (DRTO) application and a possible steady-state real-time optimization (RTO) application. The results demonstrate that the application of dynamic real-time optimization to an FCC converter unit can bring significant benefits in production.

  16. Optimization analysis of a new vane MRF damper

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J Q; Feng, Z Z; Jing, Q [Department of Technical Support Engineering, Academy of Armored Force Engineering, Beijing, 100072 (China)], E-mail: zhangjq63@yahoo.com.cn

    2009-02-01

    The primary purpose of this study was to provide an optimization analysis of certain characteristics and benefits of a vane MRF damper. Based on the structure of a conventional vane hydraulic damper for heavy vehicles, a narrow arc gap between the clapboard and the rotary vane axle, one of which rotates relative to the other, was designed as the MRF valve, and the mathematical model of damping was derived. Subsequently, the finite element analysis of the electromagnetic circuit was performed in ANSYS to carry out the optimization process. Several ways were presented to increase the adjustable damping ratio while keeping the initial damping forces and to increase the fluid dwell time in the magnetic field. The results show that the method is useful in the design of MR dampers and that the adjustable damping range of the vane MRF damper can meet the requirements of a heavy vehicle semi-active suspension system.

  17. Adjoint-based Mesh Optimization Method: The Development and Application for Nuclear Fuel Analysis

    International Nuclear Information System (INIS)

    Son, Seongmin; Lee, Jeong Ik

    2016-01-01

    In this research, a method for optimizing mesh distribution is proposed. The proposed method uses an adjoint-based optimization method (adjoint method). The optimized result is obtained by applying this meshing technique to the existing code input deck and is compared to the results produced by the uniform meshing method. Numerical solutions are calculated from an in-house 1D Finite Difference Method code, neglecting axial conduction. The fuel radial node optimization was first performed to best match the Fuel Centerline Temperature (FCT). This was followed by optimizing the axial nodes to best match the Peak Cladding Temperature (PCT). After obtaining the optimized radial and axial nodes, the nodalization was implemented in the system analysis code and transient analyses were performed to observe the performance of the optimum nodalization. The adjoint-based mesh optimization method developed in this study is applied to MARS-KS, a nuclear system analysis code. Results show that the newly established method yields better results than the uniform meshing method from the numerical point of view. It is again stressed that a mesh optimized for the steady state can also give better numerical results during a transient analysis
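
    The core adjoint idea can be illustrated on a toy linear model: for A(p)u = f with objective J = c@u (a stand-in for a peak temperature), a single adjoint solve yields the gradient with respect to all mesh parameters. The matrices below are random placeholders, not the in-house FDM code.

        # Discrete-adjoint gradient of J = c @ u for A(p) u = f, checked by finite differences.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 5
        A0 = np.eye(n) * 4 + rng.normal(size=(n, n)) * 0.1
        A1 = np.diag(rng.random(n))        # hypothetical sensitivities of the system
        A2 = np.diag(rng.random(n))        # matrix to two mesh parameters
        f = rng.random(n)
        c = rng.random(n)

        def J(p):
            u = np.linalg.solve(A0 + p[0]*A1 + p[1]*A2, f)
            return c @ u

        p = np.array([0.3, 0.7])
        A = A0 + p[0]*A1 + p[1]*A2
        u = np.linalg.solve(A, f)
        lam = np.linalg.solve(A.T, c)                      # single adjoint solve
        grad = np.array([-lam @ (A1 @ u), -lam @ (A2 @ u)])

        eps = 1e-6
        fd = [(J(p + eps*np.eye(2)[i]) - J(p)) / eps for i in range(2)]
        print("adjoint:", grad.round(6), " fd:", np.round(fd, 6))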

  18. Automotive Exterior Noise Optimization Using Grey Relational Analysis Coupled with Principal Component Analysis

    Science.gov (United States)

    Chen, Shuming; Wang, Dengfeng; Liu, Bo

    This paper investigates the design optimization of the thickness of the sound package of a passenger automobile. The major performance indexes selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package, and the corresponding parameters of the sound package are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. Since the process fundamentally involves multiple performance characteristics, grey relational analysis, which uses the grey relational grade as the performance index, is employed to determine the optimal combination of layer thicknesses for the designed sound package. Additionally, in order to evaluate the weighting values corresponding to the various performance characteristics, principal component analysis is used to show their relative importance properly and objectively. The results of the confirmation experiments show that grey relational analysis coupled with principal component analysis can successfully be applied to find the optimal combination of thicknesses for each layer of the sound package material. Therefore, the presented method can be an effective tool to improve vehicle exterior noise and lower the weight of the sound package. In addition, it will also be helpful for other applications in the automotive industry, such as the First Automobile Works in China, Changan Automobile in China, etc.

  19. Optimization of cooling tower performance analysis using Taguchi method

    Directory of Open Access Journals (Sweden)

    Ramkumar Ramakrishnan

    2013-01-01

    Full Text Available This study discusses the application of the Taguchi method in assessing maximum cooling tower effectiveness for a counter-flow cooling tower using expanded wire mesh packing. The experiments were planned based on Taguchi's L27 orthogonal array. The trials were performed under different inlet conditions of water flow rate, air flow rate, and water temperature. Signal-to-noise (S/N) ratio analysis, analysis of variance (ANOVA), and regression were carried out in order to determine the effects of the process parameters on cooling tower effectiveness and to identify the optimal factor settings. Finally, confirmation tests verified the reliability of the Taguchi method for optimization of counter-flow cooling tower performance with sufficient accuracy.
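
    The signal-to-noise step is easy to reproduce. The sketch below computes the larger-the-better S/N ratio for hypothetical replicated effectiveness measurements; the data are invented and only four of the 27 trials are shown.

        # Larger-the-better S/N ratio: S/N = -10 log10( mean(1/y^2) ).
        import numpy as np

        # Three replicates of effectiveness for four hypothetical trials.
        y = np.array([[0.61, 0.63, 0.60],
                      [0.72, 0.70, 0.71],
                      [0.58, 0.57, 0.59],
                      [0.75, 0.77, 0.74]])

        sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))
        print("S/N per trial [dB]:", sn.round(2))
        print("best trial:", int(np.argmax(sn)) + 1)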

  20. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Full Text Available Assembly precision optimization of complex products brings great benefits in improving product quality. Because a variety of deviation sources couple with one another, the target of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation to deviation source dimension variation. Second, according to the assembly constraint relations, assembly sequences, and locating scheme, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created using the deviation transmission paths, and the corresponding scalar equations for each dimension are established. Then, assembly deviation source sensitivity is calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking the assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.

  1. An expert system for integrated structural analysis and design optimization for aerospace structures

    Science.gov (United States)

    1992-04-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This allows engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce the time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software were then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach improves the expressiveness of knowledge representation (especially for structural analysis and design applications), provides the ability to build very large and practical expert systems, and provides an efficient way of storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and

  2. The Newfoundland School Society (1830-1840): A Critical Discourse Analysis of Its Religious Education Efforts

    Science.gov (United States)

    English, Leona M.

    2012-01-01

    This article uses the lens of critical discourse analysis to examine the religious education efforts of the Newfoundland School Society (NSS), the main provider of religious education in Newfoundland in the 19th century. Although its focus was initially this colony, the NSS quickly broadened its reach to the whole British empire, making it one of…

  3. Optimal analysis of structures by concepts of symmetry and regularity

    CERN Document Server

    Kaveh, Ali

    2013-01-01

    Optimal analysis is defined as an analysis that creates and uses sparse, well-structured and well-conditioned matrices. The focus is on efficient methods for eigensolution of matrices involved in static, dynamic and stability analyses of symmetric and regular structures, or those general structures containing such components. Powerful tools are also developed for configuration processing, which is an important issue in the analysis and design of space structures and finite element models. Different mathematical concepts are combined to make the optimal analysis of structures feasible. Canonical forms from matrix algebra, product graphs from graph theory and symmetry groups from group theory are some of the concepts involved in the variety of efficient methods and algorithms presented. The algorithms elucidated in this book enable analysts to handle large-scale structural systems by lowering their computational cost, thus fulfilling the requirement for faster analysis and design of future complex systems. The ...

  4. Optimization of mobile analysis of radionuclides

    International Nuclear Information System (INIS)

    Labaska, M.

    2016-01-01

    This thesis focuses on the optimization of the separation and determination of radionuclides for use in mobile or field analysis. The methods described are part of the procedures of a mobile radiometric laboratory being developed for the Slovak Armed Forces. Their main principle is the separation of analytes by high-performance liquid chromatography, using both reversed-phase and ion-exchange chromatography. Chromatography columns such as the Dionex IonPack® CS5A, Dionex IonPack® CS3, and Hypersil® BDS C18 were used. For the detection of stable nuclides, conductivity and UV/VIS detection were employed. The separation of alkali and alkaline earth metals, transition metals, and lanthanides was optimized. The combination of chromatographic separation and flow scintillation analysis was also studied. The radioactive isotopes 55Fe, 210Pb, 60Co, 85Sr, and 134Cs were chosen as analytes for nuclear detection techniques. The use of well-type and planar NaI(Tl) detectors was investigated together with cloud point extraction. For micelle-mediated extraction, two candidate ligands were studied: 8-hydroxyquinoline and ammonium pyrrolidinedithiocarbamate. Recoveries of the cloud point extraction were in the range of 80 to 90%. The thesis also examines the application of liquid scintillation analysis with cloud point extraction of the analytes. A radioactive standard containing 55Fe, 210Pb, 60Co, 85Sr, and 134Cs was separated by liquid chromatography; fractions of the individual isotopes were collected, extracted by cloud point extraction, and measured by liquid scintillation analysis. Finally, cloud point extraction coupled with ICP-MS was studied. (author)

  5. Capital Strategy in Diversification Farming Efforts Using SWOT Analysis

    Science.gov (United States)

    Damanhuri; Setyohadi, D. P. S.; Utami, M. M. D.; Kurnianto, M. F.; Hariono, B.

    2018-01-01

    The wetland farm diversification program in the districts of Bojonegoro, Tulungagung, and Ponorogo cannot contribute optimally to farmers' income because farmers are unable to cultivate high value-added commodities due to limited capital. This study aims to identify the characteristics of farming, capital patterns, and stakeholder roles; to analyze farming in order to suggest planting patterns and prospects; and to formulate a capital facilitation strategy. Farming capital is obtained through loans from financial institutions with different patterns. Small farmers tend to use savings and credit cooperatives, microcredit, and loan sharks, while farmers with large wetland holdings tend to use commercial banks. The study uses descriptive methods of farm profit analysis and SWOT. The government, through banking institutions, has provided much facilitation in the form of low-interest loans with flexible payment methods. The selected generic capital facilitation strategy is to empower farmers through farmer groups capable of managing the capital needs of their members.

  6. Evaluation and optimization of footwear comfort parameters using finite element analysis and a discrete optimization algorithm

    Science.gov (United States)

    Papagiannis, P.; Azariadis, P.; Papanikos, P.

    2017-10-01

    Footwear is subject to bending and torsion deformations that affect comfort perception. Following review of Finite Element Analysis studies of sole rigidity and comfort, a three-dimensional, linear multi-material finite element sole model for quasi-static bending and torsion simulation, overcoming boundary and optimisation limitations, is described. Common footwear materials properties and boundary conditions from gait biomechanics are used. The use of normalised strain energy for product benchmarking is demonstrated along with comfort level determination through strain energy density stratification. Sensitivity of strain energy against material thickness is greater for bending than for torsion, with results of both deformations showing positive correlation. Optimization for a targeted performance level and given layer thickness is demonstrated with bending simulations sufficing for overall comfort assessment. An algorithm for comfort optimization w.r.t. bending is presented, based on a discrete approach with thickness values set in line with practical manufacturing accuracy. This work illustrates the potential of the developed finite element analysis applications to offer viable and proven aids to modern footwear sole design assessment and optimization.
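
    The discrete approach can be sketched as an exhaustive search over manufacturable thickness steps. In the fragment below, the strain-energy model is a hypothetical surrogate for the finite element result, and the densities, steps, and comfort target are invented.

        # Exhaustive search over discrete layer thicknesses; energy model is a toy surrogate.
        import itertools
        import numpy as np

        thicknesses = np.arange(2.0, 8.5, 0.5)          # mm, practical manufacturing steps
        densities = np.array([1.1, 0.9, 1.2])           # per-layer density proxies (hypothetical)

        def bending_energy(t):
            # Hypothetical surrogate for the FE result: thicker soles store
            # less normalized strain energy under the bending load case.
            return 100.0 / (1.0 + 0.08 * np.dot([1.0, 0.7, 1.3], t))

        target = 55.0                                    # comfort threshold on energy
        best = None
        for t in itertools.product(thicknesses, repeat=3):
            if bending_energy(np.array(t)) <= target:
                weight = float(np.dot(densities, t))     # proportional proxy for mass
                if best is None or weight < best[0]:
                    best = (weight, t)

        print("lightest comfortable design (weight, thicknesses):", best)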

  7. Application of sensitivity analysis for optimized piping support design

    International Nuclear Information System (INIS)

    Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.

    1993-01-01

    The objective of this study was to see if recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports, and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM, and a sensitivity analysis was carried out. Then optimization of the design of the piping system supports was investigated, selecting the support location and the yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)

  8. Optimization Methods in Operations Research and Systems Analysis

    Indian Academy of Sciences (India)

    Optimization Methods in Operations Research and Systems Analysis. V G Tikekar. Book Review. Resonance – Journal of Science Education, Volume 2, Issue 6, June 1997, pp. 91-92.

  9. Hybrid PV/diesel solar power system design using multi-level factor analysis optimization

    Science.gov (United States)

    Drake, Joshua P.

    Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state-of-the-art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in-depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as it applied to solar power system design. The solar power design algorithms, software workflow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations, was generated. It was determined that there were several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.

  10. Optimal scheduling using priced timed automata

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Larsen, Kim Guldstrand; Rasmussen, Jacob Illum

    2005-01-01

    This contribution reports on the considerable effort made recently towards extending and applying well-established timed automata technology to optimal scheduling and planning problems. The effort of the authors in this direction has to a large extent been carried out as part of the European projects VHS [20] and AMETIST [16], and the results are available in the recently released UPPAAL CORA [12], a variant of the real-time verification tool UPPAAL [18, 5] specialized for cost-optimal reachability for the extended model of so-called priced timed automata.

  11. Efforts for optimization of BWR core internals replacement

    International Nuclear Information System (INIS)

    Iizuka, N.

    2000-01-01

    The core internal components replacement of a BWR was successfully completed at Fukushima-Daiichi Unit 3 (1F3) of the Tokyo Electric Power Company (TEPCO) in 1998. The core shroud and the majority of the internal components made of type 304 stainless steel (SS) were replaced with ones made of low-carbon type 316L SS to improve Intergranular Stress Corrosion Cracking (IGSCC) resistance. Although this core internals replacement project was completed, several factors combined to result in a longer-than-expected outage, partly because the removal of the internal components was delayed. Drawing on the whole experience of this project, several measures were adopted for the next replacement project at Fukushima-Daiichi Unit 2 (1F2) to shorten the outage and reduce the total radiation exposure, including new removal processes and a new welding machine. The core internals replacement work at 1F2 ended in 1999, and both the outage duration and the total radiation exposure matched what was expected prior to the start of the project. This result shows that the methods adopted in this project are basically applicable to core internals replacement work and that the overall BWR core internals replacement process was optimized. The outline of the core internals replacement projects and the technologies applied at 1F3 and 1F2 are discussed in this paper. (author)

  12. On the relation between flexibility analysis and robust optimization for linear systems

    KAUST Repository

    Zhang, Qi; Grossmann, Ignacio E.; Lima, Ricardo

    2016-01-01

    Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee

  13. Cluster analysis by optimal decomposition of induced fuzzy sets

    Energy Technology Data Exchange (ETDEWEB)

    Backer, E

    1978-01-01

    Nonsupervised pattern recognition is addressed and the concept of fuzzy sets is explored in order to provide the investigator (data analyst) additional information supplied by the pattern class membership values apart from the classical pattern class assignments. The basic ideas behind the pattern recognition problem, the clustering problem, and the concept of fuzzy sets in cluster analysis are discussed, and a brief review of the literature of the fuzzy cluster analysis is given. Some mathematical aspects of fuzzy set theory are briefly discussed; in particular, a measure of fuzziness is suggested. The optimization-clustering problem is characterized. Then the fundamental idea behind affinity decomposition is considered. Next, further analysis takes place with respect to the partitioning-characterization functions. The iterative optimization procedure is then addressed. The reclassification function is investigated and convergence properties are examined. Finally, several experiments in support of the method suggested are described. Four object data sets serve as appropriate test cases. 120 references, 70 figures, 11 tables. (RWR)

  14. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides.

    Science.gov (United States)

    Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U

    2015-01-01

    Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.

  15. Preliminary analysis on in-core fuel management optimization of molten salt pebble-bed reactor

    International Nuclear Information System (INIS)

    Xia Bing; Jing Xingqing; Xu Xiaolin; Lv Yingzhong

    2013-01-01

    The Nuclear Hot Spring (NHS) is a molten salt pebble-bed reactor featuring full-power natural circulation. The unique horizontal coolant flow of the NHS demands fuel recycling schemes based on radial zoning refueling and a corresponding method of fuel management optimization. The local searching algorithm (LSA) and the simulated annealing algorithm (SAA), stochastic optimization methods widely used in refueling optimization problems in LWRs, were applied to the analysis of refueling optimization of the NHS. The analysis results indicate that, compared with the LSA, the SAA can escape the traps of locally optimized solutions and reach the globally optimized solution, and the quality of optimization of the SAA is independent of the choice of the initial solution. The optimization result shows excellent in-core power flattening and suppression of the fuel center temperature. For the one-dimensional zoning refueling schemes of the NHS, the SAA is an appropriate optimization method. (authors)
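
    A minimal simulated annealing sketch for a radial-zoning assignment is given below; the toy ring-power model, move set, and cooling schedule are hypothetical stand-ins for the core simulator and the paper's actual scheme.

        # Simulated annealing over a radial zoning: two rings per burnup zone,
        # swap moves, Metropolis acceptance; all numbers are hypothetical.
        import numpy as np

        rng = np.random.default_rng(1)
        radial_weight = np.array([1.4, 1.2, 1.0, 0.9, 0.8, 0.7])   # center-peaked importance
        zone_power = np.array([1.3, 1.0, 0.7])                      # fresh, once-, twice-burned

        def cost(assign):
            p = zone_power[assign] * radial_weight                  # toy ring power
            return float(p.max() / p.mean())                        # peaking factor

        x = np.array([0, 0, 1, 1, 2, 2])                            # initial zoning
        fx, T = cost(x), 1.0
        while T > 1e-4:
            i, j = rng.choice(len(x), size=2, replace=False)
            y = x.copy()
            y[i], y[j] = y[j], y[i]                                 # swap two rings
            fy = cost(y)
            if fy < fx or rng.random() < np.exp((fx - fy) / T):     # accept uphill moves early
                x, fx = y, fy
            T *= 0.999                                              # geometric cooling

        print("zoning:", x, "peaking factor:", round(fx, 3))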

  16. Optimization Analysis Model of Self-Anchored Suspension Bridge

    Directory of Open Access Journals (Sweden)

    Pengzhen Lu

    2014-01-01

    Full Text Available The hangers of a self-anchored suspension bridge need to be tensioned suitably during construction. In view of this, a simplified optimization calculation method for the cable forces of a self-anchored suspension bridge has been developed based on optimization theories such as the minimum bending energy method, the internal force balance method, and the influence matrix method. Meanwhile, exploiting the weak coupling of the main cable and the limited interaction between adjacent hanger forces, a simplified analysis method is developed in MATLAB and compared, for an actual example bridge, with an optimization method that considers the main cable's geometric nonlinearity in ANSYS. This comparison confirms the weak coupling of the main cable displacement and the limited influence of adjacent cable forces. Furthermore, a tensioning program of great reference value has been developed, and some important conclusions, advice, and points of attention are summarized.
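
    The influence matrix step reduces to a small linear-algebra exercise: hanger tension adjustments x are chosen so that the predicted deck displacements A @ x cancel the measured deviations d. The matrix and target below are hypothetical.

        # Influence-matrix tension correction via least squares; data hypothetical.
        import numpy as np

        # A[i, j]: deck displacement at point i per unit tension in hanger j,
        # obtained in practice from unit-load runs of the structural model.
        A = np.array([[1.00, 0.42, 0.11, 0.03],
                      [0.42, 1.00, 0.42, 0.11],
                      [0.11, 0.42, 1.00, 0.42],
                      [0.03, 0.11, 0.42, 1.00]])
        d = np.array([8.0, 5.0, 6.5, 9.0])          # mm, deviations to be removed

        x, *_ = np.linalg.lstsq(A, d, rcond=None)   # least-squares tension corrections
        print("tension adjustments:", x.round(2))
        print("residual deviation:", (d - A @ x).round(3))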

  17. AN OVERVIEW OF THE UNCERTAINTY ANALYSIS, SENSITIVITY ANALYSIS, AND PARAMETER ESTIMATION (UA/SA/PE) API AND HOW TO IMPLEMENT IT

    Science.gov (United States)

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...

  18. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    Science.gov (United States)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator to the complex field variable complicates the adjoint sensitivity, causing the originally real-valued design variable to become complex during the iterative solution procedure. The adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real-part operator has been used to extract the real part of the sensitivity and keep the design variable real-valued. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on a self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The self-consistent adjoint analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts and substituting the split variables into the wave equations to derive coupled equations equivalent to the original wave equations, where the infinite free space is truncated by perfectly matched layers. The topology optimization problems for electromagnetic waves are then transformed into forms defined on real rather than complex functional spaces; the adjoint analysis is implemented on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived; and the phase-dependence problem is avoided for the derived structural topology. Several numerical examples demonstrate the robustness of the derived self-consistent adjoint analysis.
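
    The real/imaginary splitting can be illustrated on a generic complex linear system (A + iB)(x + iy) = f + ig, a stand-in for the discretized wave equation: rewriting it as a real block system keeps every quantity in the subsequent adjoint analysis real-valued. The matrices below are random placeholders.

        # Complex system recast as an equivalent real block system.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 4
        A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
        f, g = rng.normal(size=n), rng.normal(size=n)

        # Real block form:  [A -B] [x] = [f]
        #                   [B  A] [y]   [g]
        K = np.block([[A, -B], [B, A]])
        xy = np.linalg.solve(K, np.concatenate([f, g]))
        x, y = xy[:n], xy[n:]

        # Cross-check against the direct complex solve.
        z = np.linalg.solve(A + 1j * B, f + 1j * g)
        print(np.allclose(x + 1j * y, z))   # True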

  19. Optimization of cryogenic cooled EDM process parameters using grey relational analysis

    International Nuclear Information System (INIS)

    Kumar, S Vinoth; Kumar, M Pradeep

    2014-01-01

    This paper presents an experimental investigation on cryogenic cooling of a liquid nitrogen (LN2) cooled copper electrode in the electrical discharge machining (EDM) process. The optimization of the EDM process parameters, such as the electrode environment (conventional versus cryogenically cooled electrode), discharge current, pulse-on time, and gap voltage, with respect to the material removal rate, electrode wear, and surface roughness in machining an AlSiCp metal matrix composite was investigated using grey relational analysis of multiple performance characteristics. An L18 orthogonal array was utilized to examine the process parameters, and the optimal levels of the process parameters were identified through grey relational analysis. Experimental data were analyzed through analysis of variance. Scanning electron microscopy analysis was conducted to study the characteristics of the machined surface.

  20. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods

  1. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.

  2. Optimization benefits analysis in production process of fabrication components

    Science.gov (United States)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal combination of products is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is the optimization of the combination of fabrication component products (known as Liner Plate), which influences the profit obtained by the company. The Liner Plate is a fabrication component that protects the core structure of heavy duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. The graph of liner plate sales from January to December 2016 fluctuates, and no direct conclusion can be drawn from it about the optimal production of such fabrication components. The optimal product combination can be achieved by calculating and plotting the amounts of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM for Windows software to obtain the optimal combination of fabrication components. At the optimal combination of components, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with production totaling 71 units across the product variants per month.
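
    A hypothetical product-mix sketch of this primal/dual workflow (here with scipy rather than QM for Windows) is shown below; the profits, resource coefficients, and limits are illustrative only, not PT. UTPE data.

        # Product mix LP with shadow prices estimated by re-solving; data hypothetical.
        import numpy as np
        from scipy.optimize import linprog

        profit = np.array([40.0, 55.0])            # profit per unit of two variants
        A_ub = np.array([[2.0, 3.0],               # machine hours per unit
                         [4.0, 2.0]])              # material sheets per unit
        b_ub = np.array([180.0, 200.0])            # monthly machine hours, sheets

        res = linprog(-profit, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * 2, method="highs")
        print("optimal mix:", res.x.round(2), "profit:", round(-res.fun, 2))

        # Sensitivity: profit gain from one extra unit of each resource (shadow price).
        for k, name in enumerate(["machine hours", "material sheets"]):
            bp = b_ub.copy()
            bp[k] += 1.0
            res_k = linprog(-profit, A_ub=A_ub, b_ub=bp,
                            bounds=[(0, None)] * 2, method="highs")
            print(name, "shadow price:", round(-res_k.fun + res.fun, 3))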

  3. A Study on the Analysis and Optimal Control of Nonlinear Systems via Walsh Function

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Tae; Kim, Tai Hoon; Ahn, Doo Soo [Sungkyunkwan University (Korea)]; Lee, Myung Kyu [Kyungsung University (Korea)]

    2000-07-01

    This paper presents a new adaptive optimal scheme for nonlinear systems, based on Picard's iterative approximation and the fast Walsh transform. It is well known that the Walsh function approach is difficult to apply to the analysis and optimal control of nonlinear systems. However, these problems can be solved by improving the previous adaptive optimal scheme. The proposed method is easily applicable to the analysis and optimal control of nonlinear systems. (author). 15 refs., 6 figs., 1 tab.

  4. Workshop on Computational Optimization

    CERN Document Server

    2015-01-01

    Our everyday life is unthinkable without optimization. We try to minimize our effort and to maximize the achieved profit. Many real-world and industrial problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2013. It presents recent advances in computational optimization. The volume includes important real-life problems, such as parameter settings for controlling processes in a bioreactor, resource-constrained project scheduling, problems arising in transport services, error-correcting codes, and optimal system performance and energy consumption. It shows how to develop algorithms for them based on new metaheuristic methods such as evolutionary computation, ant colony optimization, constraint programming and others.

  5. Exergoeconomic multi objective optimization and sensitivity analysis of a regenerative Brayton cycle

    International Nuclear Information System (INIS)

    Naserian, Mohammad Mahdi; Farahat, Said; Sarhaddi, Faramarz

    2016-01-01

    Highlights: • Finite time exergoeconomic multi objective optimization of a Brayton cycle. • Comparing the exergoeconomic and the ecological function optimization results. • Inserting the cost of fluid streams concept into finite-time thermodynamics. • Exergoeconomic sensitivity analysis of a regenerative Brayton cycle. • Suggesting the cycle performance curve drawing and utilization. - Abstract: In this study, the optimal performance of a regenerative Brayton cycle is sought through power maximization and then exergoeconomic optimization, using the finite-time thermodynamic concept and finite-size components. Optimizations are performed using a genetic algorithm. In order to take the finite-time and finite-size concepts into account in the current problem, a dimensionless mass-flow parameter incorporating time variations is used. The decision variables at the optimum of the multi-objective exergoeconomic optimization are compared to those at the maximum power state. One can see that the multi-objective exergoeconomic optimization results in a better performance than that obtained at the maximum power state. The results demonstrate that system performance at the optimum of the multi-objective optimization yields 71% of the maximum power, but with exergy destruction of only 24% of the amount produced at the maximum power state and a 67% lower total cost rate than that of the maximum power state. In order to assess the impact of variations of the decision variables on the objective functions, a sensitivity analysis is conducted. Finally, drawing the cycle performance curve according to the exergoeconomic multi-objective optimization results, and its utilization, are suggested.

  6. Optimal long-term contracting with learning

    OpenAIRE

    He, Zhiguo; Wei, Bin; Yu, Jianfeng; Gao, Feng

    2016-01-01

    We introduce uncertainty into Holmstrom and Milgrom (1987) to study optimal long-term contracting with learning. In a dynamic relationship, the agent's shirking not only reduces current performance but also increases the agent's information rent due to the persistent belief manipulation effect. We characterize the optimal contract using the dynamic programming technique in which information rent is the unique state variable. In the optimal contract, the optimal effort is front-loaded and decreases over time.

  7. Stress-strain state analysis and optimization of rod system under periodic pulse load

    Directory of Open Access Journals (Sweden)

    Grebenyuk Grigory

    2018-01-01

    Full Text Available The paper considers the problem of analysis and optimization of rod systems subjected to combined static and periodic pulse loads. As a result of the study, an analysis method was developed based on the traditional approach to solving homogeneous matrix equations of state and a special algorithm for developing a particular solution. The influence of pulse parameter variations on the stress-strain state of a rod system was analyzed. Algorithms for rod system optimization were developed, based on strength recalculation and on the statement and solution of the optimization problem as a nonlinear mathematical programming problem. Recommendations are given for efficient organization of the process of optimizing rod systems under static and periodic pulse loads.

  8. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  9. Optimal Design of Gravity Pipeline Systems Using Genetic Algorithm and Mathematical Optimization

    Directory of Open Access Journals (Sweden)

    Maryam Rohani

    2015-03-01

    Full Text Available In recent years, the optimal design of pipeline systems has become increasingly important in the water industry. In this study, the two methods of genetic algorithm and mathematical optimization were employed for the optimal design of pipeline systems with the objective of avoiding the water hammer effect caused by valve closure. The problem of optimal design of a pipeline system is a constrained one which should be converted to an unconstrained optimization problem using an external penalty function approach in the mathematical programming method. The quality of the optimal solution greatly depends on the value of the penalty factor that is calculated by the iterative method during the optimization procedure such that the computational effort is simultaneously minimized. The results obtained were used to compare the GA and mathematical optimization methods employed to determine their efficiency and capabilities for the problem under consideration. It was found that the mathematical optimization method exhibited a slightly better performance compared to the GA method.
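
    As a sketch of the external penalty function approach described above (the actual water-hammer constraints and pipeline objective are not reproduced), the following Python fragment converts a constrained problem into a sequence of unconstrained ones with a growing penalty factor; the toy objective, constraint and penalty schedule are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def exterior_penalty(objective, constraints, x0, r0=1.0, growth=10.0, n_outer=6):
            """Minimize objective(x) subject to g(x) <= 0 for each g in constraints
            by minimizing objective + r * sum(max(0, g)^2) with a growing penalty r."""
            x, r = np.asarray(x0, dtype=float), r0
            for _ in range(n_outer):
                def penalized(z):
                    viol = np.array([max(0.0, g(z)) for g in constraints])
                    return objective(z) + r * np.sum(viol**2)
                x = minimize(penalized, x, method="Nelder-Mead").x
                r *= growth  # tighten the penalty between outer iterations
            return x

        # Toy problem: minimize (x - 2)^2 subject to x <= 1
        sol = exterior_penalty(lambda z: (z[0] - 2.0)**2,
                               [lambda z: z[0] - 1.0], x0=[0.0])
        print(sol)  # approaches [1.0] as the penalty grows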

  10. Generalized concavity in fuzzy optimization and decision analysis

    CERN Document Server

    Ramík, Jaroslav

    2002-01-01

    Convexity of sets in linear spaces, and concavity and convexity of functions, lie at the root of beautiful theoretical results that are at the same time extremely useful in the analysis and solution of optimization problems, including problems of either single objective or multiple objectives. Not all of these results rely necessarily on convexity and concavity; some of the results can guarantee that each local optimum is also a global optimum, giving these methods broader application to a wider class of problems. Hence, the focus of the first part of the book is concerned with several types of generalized convex sets and generalized concave functions. In addition to their applicability to nonconvex optimization, these convex sets and generalized concave functions are used in the book's second part, where decision-making and optimization problems under uncertainty are investigated. Uncertainty in the problem data often cannot be avoided when dealing with practical problems. Errors occur in real-world data for...

  11. Thermodynamic analysis and optimization of an irreversible Ericsson cryogenic refrigerator cycle

    International Nuclear Information System (INIS)

    Ahmadi, Mohammad Hossein; Ahmadi, Mohammad Ali

    2015-01-01

    Highlights: • Thermodynamic modeling of Ericsson refrigeration is performed. • The latter is achieved using the NSGA-II algorithm and thermodynamic analysis. • Different decision-making methods are utilized to determine optimum values of the outcomes. - Abstract: Optimum ecological and thermal performance assessments of an Ericsson cryogenic refrigerator system are investigated in different optimization settings. To this end, ecological and thermal approaches are proposed for the Ericsson cryogenic refrigerator, and three objective functions (input power, coefficient of performance and ecological objective function) are obtained for the suggested system. In the current research, an evolutionary algorithm (EA) and thermodynamic analysis are employed to specify optimum values of the input power, coefficient of performance and ecological objective function of an Ericsson cryogenic refrigerator system. Four setups are assessed for optimization of the Ericsson cryogenic refrigerator. In the first three scenarios, conventional single-objective optimization is applied separately to each objective function, regardless of the other objectives. In the last setup, the input power, coefficient of performance and ecological function objectives are optimized concurrently, employing the non-dominated sorting genetic algorithm (NSGA-II). In multi-objective optimization, an assortment of optimum results named the Pareto optimum frontier is obtained, rather than the single ultimate optimum result of conventional single-objective optimization. Thus, a decision-making process is used to choose an ultimate optimum result: well-known decision-making methods are applied to select optimized outcomes from the Pareto optimum results in the objective space. The outcomes of the aforementioned optimization setups are discussed and compared employing an index of deviation presented in this study.
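
    A core ingredient of the NSGA-II setting described above is extracting the non-dominated (Pareto optimum) solutions from a set of candidates. A minimal, hedged Python illustration of that step follows; the toy objective vectors (input power to minimize, negated COP so that both columns are minimized) are invented for the demo.

        import numpy as np

        def pareto_front(points):
            """Return the non-dominated subset of objective vectors
            (all objectives minimized)."""
            pts = np.asarray(points, dtype=float)
            keep = np.ones(len(pts), dtype=bool)
            for i, p in enumerate(pts):
                if not keep[i]:
                    continue  # dominated points cannot change the front
                # p dominates q if p <= q everywhere and p < q somewhere
                dominated = np.all(pts >= p, axis=1) & np.any(pts > p, axis=1)
                keep &= ~dominated
            return pts[keep]

        # Toy trade-off: input power (minimize) vs. negated COP (minimize)
        candidates = [[1.0, -0.9], [1.2, -1.1], [0.8, -0.7], [1.1, -0.8]]
        print(pareto_front(candidates))  # [1.1, -0.8] is dominated and dropped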

  12. A new approach for optimization of thermal power plant based on the exergoeconomic analysis and structural optimization method: Application to the CGAM problem

    International Nuclear Information System (INIS)

    Seyyedi, Seyyed Masoud; Ajam, Hossein; Farahat, Said

    2010-01-01

    In large thermal systems, which have many design variables, conventional mathematical optimization methods are not efficient. Thus, exergoeconomic analysis can be used to assist optimization in these systems. In this paper a new iterative approach for the optimization of large thermal systems is suggested. The proposed methodology uses exergoeconomic analysis, sensitivity analysis, and the structural optimization method, which are applied to determine the sum of the investment and exergy destruction cost flow rates for each component, the importance of each decision variable, and the minimum of the total cost flow rate, respectively. Applicability to large, complex real-world thermal systems and rapid convergence are characteristics of this new iterative methodology. The proposed methodology is applied to the benchmark CGAM cogeneration system to show how it minimizes the total cost flow rate of operation for the installation. Results are compared with the original CGAM problem.

  13. Two-dimensional radiation shielding optimization analysis of spent fuel transport container

    International Nuclear Information System (INIS)

    Tian Yingnan; Chen Yixue; Yang Shouhai

    2013-01-01

    The intelligent radiation shielding optimization design software platform is a one-dimensional multi-objective radiation shielding optimization program developed on the basis of a genetic algorithm and the one-dimensional discrete ordinates program ANISN. This program was applied in the optimization design analysis of the radiation shielding of a spent fuel transport container. The multi-objective optimization calculation model of the spent fuel transport container radiation shielding was established, and the optimization calculation of the container weight and radiation dose rate was carried out with this program. The calculation results were checked with the Monte Carlo program MCNP/4C. The results show that the weight of the optimized spent fuel transport container decreases to 81.1% of the original and the radiation dose rate decreases to below 65.4% of the original. The maximum deviation between the values calculated by the program and by MCNP is below 5%. The results show that the optimization design scheme is feasible and the calculation result is correct. (authors)

  14. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Full Text Available Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and for a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.
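
    The record describes a windowing technique combined with Fourier-based coherence between ABP and ICP. The sketch below, an assumption-laden stand-in for the authors' selected correlation analysis rather than their actual implementation, computes a windowed low-frequency coherence index with SciPy; the window length, step, and frequency band are illustrative parameters.

        import numpy as np
        from scipy.signal import coherence

        def windowed_coherence_index(abp, icp, fs, win_s=300, step_s=60,
                                     band=(0.003, 0.05)):
            """Slide a window over ABP/ICP and return, per window, the mean
            coherence in a low-frequency band (parameters are illustrative)."""
            win, step = int(win_s * fs), int(step_s * fs)
            index = []
            for start in range(0, len(abp) - win + 1, step):
                f, cxy = coherence(abp[start:start + win], icp[start:start + win],
                                   fs=fs, nperseg=win // 4)
                mask = (f >= band[0]) & (f <= band[1])
                index.append(cxy[mask].mean())
            return np.array(index)

        # Synthetic demo: a shared slow oscillation in both signals
        rng = np.random.default_rng(0)
        fs, t = 1.0, np.arange(3600.0)
        drift = np.sin(2 * np.pi * 0.01 * t)
        abp = 80 + 5 * drift + rng.standard_normal(t.size)
        icp = 12 + 2 * drift + rng.standard_normal(t.size)
        print(windowed_coherence_index(abp, icp, fs)[:5])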

  15. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and for a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  16. A "Reverse-Schur" Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design.

    Science.gov (United States)

    Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts; in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.

  17. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    Science.gov (United States)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  18. The optimal XFEM approximation for fracture analysis

    International Nuclear Information System (INIS)

    Jiang Shouyan; Du Chengbin; Ying Zongquan

    2010-01-01

    The extended finite element method (XFEM) provides an effective tool for analyzing fracture mechanics problems. An XFEM approximation consists of standard finite elements, used in the major part of the domain, and enriched elements in the enriched sub-domain for capturing special solution properties such as discontinuities and singularities. However, two issues in the standard XFEM deserve particular attention: efficient numerical integration methods and an appropriate construction of the blending elements. In this paper, an optimal XFEM approximation is proposed to overcome the disadvantages of the standard XFEM mentioned above. Modified enrichment functions are presented that can be reproduced exactly everywhere in the domain. A corresponding FORTRAN program was developed for fracture analysis. A classic problem of fracture mechanics is used to benchmark the program. The results indicate that the optimal XFEM can alleviate the errors and improve numerical precision.

  19. Hydraulic analysis and optimization design in Guri rehabilitation project

    Science.gov (United States)

    Cheng, H.; Zhou, L. J.; Gong, L.; Wang, Z. N.; Wen, Q.; Zhao, Y. Z.; Wang, Y. L.

    2016-11-01

    Recently Dongfang was awarded the contract for rehabilitation of 6 units in Guri power plant, the biggest hydro power project in Venezuela. The rehabilitation includes, but not limited to, the extension of output capacity by about 50% and enhancement of efficiency level. To achieve the targets the runner and the guide vanes will be replaced by the newly optimized designs. In addition, the out-of-date stay vanes with straight plate shape will be modified into proper profiles after considering the application feasibility in field. The runner and vane profiles were optimized by using state-of-the-art flow simulation techniques. And the hydraulic performances were confirmed by the following model tests. This paper describes the flow analysis during the optimization procedure and the comparison between various technical concepts.

  20. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model in order to reduce the computational burden, is the key to such research. However, previous studies are generally based on a stand-alone surrogate model, and rarely try to improve the approximation accuracy of the surrogate model sufficiently by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy.
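
    To illustrate the ensemble-surrogate idea in code, the hedged sketch below uses scikit-learn stand-ins (an MLP in place of the RBF network, a Gaussian process in place of Kriging) and simple inverse-error weights in place of the paper's set-pair-analysis weights; the data are synthetic.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        # Hypothetical data: X = remediation strategy parameters, y = simulator output
        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, size=(120, 3))
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(120)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        models = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0),
                  SVR(C=10.0),
                  GaussianProcessRegressor()]  # Kriging-style surrogate
        for m in models:
            m.fit(X_tr, y_tr)

        # Weight each surrogate by inverse validation error (a crude stand-in
        # for the paper's set-pair-analysis weights)
        errs = np.array([np.mean((m.predict(X_te) - y_te) ** 2) for m in models])
        w = (1.0 / errs) / np.sum(1.0 / errs)

        def ensemble_predict(X_new):
            return sum(wi * m.predict(X_new) for wi, m in zip(w, models))

        print(w, np.mean((ensemble_predict(X_te) - y_te) ** 2))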

  1. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Qing [Univ. of Colorado, Colorado Springs, CO (United States); Whaley, Richard Clint [Univ. of Texas, San Antonio, TX (United States); Qasem, Apan [Texas State Univ., San Marcos, TX (United States); Quinlan, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-11-23

    This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.

  2. Scientific visualization of 3-dimensional optimized stellarator configurations

    International Nuclear Information System (INIS)

    Spong, D.A.

    1998-01-01

    The design techniques and physics analysis of modern stellarator configurations for magnetic fusion research rely heavily on high performance computing and simulation. Stellarators, which are fundamentally 3-dimensional in nature, offer significantly more design flexibility than more symmetric devices such as the tokamak. By varying the outer boundary shape of the plasma, a variety of physics features, such as transport, stability, and heating efficiency can be optimized. Scientific visualization techniques are an important adjunct to this effort as they provide a necessary ergonomic link between the numerical results and the intuition of the human researcher. The authors have developed a variety of visualization techniques for stellarators which both facilitate the design optimization process and allow the physics simulations to be more readily understood

  3. Stochastic analysis and robust optimization for a deck lid inner panel stamping

    International Nuclear Information System (INIS)

    Hou, Bo; Wang, Wurong; Li, Shuhui; Lin, Zhongqin; Xia, Z. Cedric

    2010-01-01

    FE-simulation and optimization are widely used in the stamping process to improve design quality and shorten the development cycle. However, current simulation and optimization may lead to non-robust results because the variation of material and process parameters is not considered. In this study, a novel stochastic analysis and robust optimization approach is proposed to improve stamping robustness, where uncertainties are included to reflect manufacturing reality. A meta-model based stochastic analysis method is developed, in which FE-simulation, uniform design and response surface methodology (RSM) are used to construct the meta-model, based on which Monte-Carlo simulation is performed to predict the influence of input parameter variation on final product quality. By applying the stochastic analysis, uniform design and RSM, the mean and the standard deviation (SD) of product quality are calculated as functions of the controllable process parameters. The robust optimization model composed of mean and SD is constructed and solved, and its result is compared with the deterministic one to show its advantages. It is demonstrated that product quality variations are reduced significantly and quality targets (reject rate) are achieved under the robust optimal solution. The developed approach offers rapid and reliable results for engineers dealing with potential stamping problems during the early phase of product and tooling design, saving time and resources.
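
    A hedged sketch of the meta-model-based loop described above: fit a quadratic response surface to designed samples of an invented stand-in for the FE stamping simulation, propagate Gaussian input scatter through the meta-model by Monte Carlo, and minimize a mean-plus-k-sigma robust objective. All functions and parameter values are assumptions for the demo.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        def fe_simulation(x):
            """Invented stand-in for the FE stamping model: quality vs. two
            process parameters (e.g., blank-holder force and friction)."""
            return (x[..., 0] - 0.6) ** 2 + 2.0 * (x[..., 1] - 0.4) ** 2

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(60, 2))  # uniform design over the domain
        meta = make_pipeline(PolynomialFeatures(2), LinearRegression())
        meta.fit(X, fe_simulation(X))        # quadratic response surface

        scatter = 0.05 * rng.standard_normal((2000, 2))  # fixed input noise sample

        def robust_objective(x, k=3.0):
            """Mean + k*SD of the meta-model under Gaussian input scatter."""
            q = meta.predict(x + scatter)
            return q.mean() + k * q.std()

        res = minimize(robust_objective, x0=[0.5, 0.5], method="Nelder-Mead")
        print(res.x)  # robust setting of the two process parameters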

  4. Uncertainty analysis and design optimization of hybrid rocket motor powered vehicle for suborbital flight

    Directory of Open Access Journals (Sweden)

    Zhu Hao

    2015-06-01

    Full Text Available In this paper, we propose an uncertainty analysis and design optimization method and its application to a hybrid rocket motor (HRM) powered vehicle. The multidisciplinary design model of the rocket system is established and the design uncertainties are quantified. The sensitivity analysis of the uncertainties shows that the uncertainty generated by the error of the fuel regression rate model has the most significant effect on system performance. Then the differences between deterministic design optimization (DDO) and uncertainty-based design optimization (UDO) are discussed. Two newly formed uncertainty analysis methods, Kriging-based Monte Carlo simulation (KMCS) and Kriging-based Taylor series approximation (KTSA), are carried out using a global approximation Kriging modeling method. Based on the system design model and the results of the design uncertainty analysis, the design optimization of an HRM powered vehicle for suborbital flight is implemented using three design optimization methods: DDO, KMCS and KTSA. The comparisons indicate that the two UDO methods can enhance design reliability and robustness. The approaches and methods proposed in this paper can provide a better way for the general design of HRM powered vehicles.

  5. Nonlinearity Analysis and Parameters Optimization for an Inductive Angle Sensor

    Directory of Open Access Journals (Sweden)

    Lin Ye

    2014-02-01

    Full Text Available Using the finite element method (FEM) and particle swarm optimization (PSO), a nonlinearity analysis based on parameter optimization is proposed to design an inductive angle sensor. Due to the structural complexity of the sensor, understanding the influence of structure parameters on the nonlinearity errors is a critical step in designing an effective sensor. Key parameters are selected for the design based on their effects on the nonlinearity errors. The finite element method and particle swarm optimization are combined in the sensor design to minimize the nonlinearity error. In the simulation, the nonlinearity error of the optimized sensor is 0.053% in the angle range from −60° to 60°. A prototype sensor was manufactured and measured experimentally, and the experimental nonlinearity error is 0.081% in the angle range from −60° to 60°.
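
    For readers unfamiliar with PSO, a minimal particle swarm optimizer is sketched below; the quadratic toy objective stands in for the FEM-computed nonlinearity error, and the hyperparameters are conventional defaults rather than the authors' settings.

        import numpy as np

        def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal particle swarm optimizer over box bounds (lo, hi) per dim."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
            g = pbest[np.argmin(pbest_f)]
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.apply_along_axis(f, 1, x)
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[np.argmin(pbest_f)]
            return g, pbest_f.min()

        # Quadratic toy objective standing in for the FEM nonlinearity error
        best, err = pso(lambda p: (p[0] - 0.3)**2 + (p[1] + 0.2)**2,
                        [(-1.0, 1.0), (-1.0, 1.0)])
        print(best, err)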

  6. Optimal fuel inventory strategies

    International Nuclear Information System (INIS)

    Caspary, P.J.; Hollibaugh, J.B.; Licklider, P.L.; Patel, K.P.

    1990-01-01

    In an effort to maintain their competitive edge, most utilities are reevaluating many of their conventional practices and policies in order to further minimize customer revenue requirements without sacrificing system reliability. Over the past several years, Illinois Power has been rethinking its traditional fuel inventory strategies, recognizing that coal supplies are competitive and plentiful and that carrying charges on inventory are expensive. To help the Company achieve one of its strategic corporate goals, an optimal fuel inventory study was performed for its five major coal-fired generating stations. The purpose of this paper is to briefly describe Illinois Power's system and past practices concerning coal inventories, highlight the analytical process behind the optimal fuel inventory study, and discuss some of the recent experiences affecting coal deliveries and economic dispatch

  7. Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2008-02-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, and STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present a detailed analysis of each optimization, revealing surprising hardware bottlenecks and software challenges for future multicore systems and applications.
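
    The essence of the auto-tuning approach, empirically timing candidate code variants and keeping the fastest, can be caricatured in a few lines of Python; the blocked matrix multiply and the candidate block sizes below are toy stand-ins for the LBMHD code generator's much richer search space.

        import timeit
        import numpy as np

        def blocked_matmul(a, b, block):
            """Candidate kernel: blocked matrix multiply with a tunable block size."""
            n = a.shape[0]
            c = np.zeros_like(a)
            for i in range(0, n, block):
                for k in range(0, n, block):
                    for j in range(0, n, block):
                        c[i:i+block, j:j+block] += (a[i:i+block, k:k+block]
                                                    @ b[k:k+block, j:j+block])
            return c

        rng = np.random.default_rng(0)
        a, b = rng.random((256, 256)), rng.random((256, 256))

        # Empirical search: time every candidate configuration, keep the fastest
        timings = {blk: timeit.timeit(lambda: blocked_matmul(a, b, blk), number=3)
                   for blk in (16, 32, 64, 128, 256)}
        print(min(timings, key=timings.get), timings)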

  8. Lattice Boltzmann simulation optimization on leading multicore platforms

    Energy Technology Data Exchange (ETDEWEB)

    Williams, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Carter, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shalf, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yelick, K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States)

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, and STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present a detailed analysis of each optimization, revealing surprising hardware bottlenecks and software challenges for future multicore systems and applications.

  9. Blade profile optimization of kaplan turbine using cfd analysis

    International Nuclear Information System (INIS)

    Janjua, A.B.; Khalil, M.S.

    2013-01-01

    Utilization of hydro-power as a renewable energy source is of prime importance in the world now. Hydropower energy is abundantly available in the form of falls, canals, rivers, dams, etc. This means there are various types of sites with different parameters such as flow rate, head, etc. Depending upon the site, water turbines are designed and manufactured to harness the hydro-power energy. Low head turbines on run-of-river sites are widely used for this purpose. Low head turbines are classified as reaction turbines. For run-of-river sites, depending upon the variety of site data, low head Kaplan turbines are selected, designed and manufactured. For any given site requirement, it becomes essential to design the turbine runner blades through optimization of the CAD model of the blade profile. This paper presents an optimization technique carried out on the complex geometry of the blade profile through static and dynamic computational analysis, changing the blade profile geometry at five different angles in the 3D (three-dimensional) CAD model. The complex blade geometry and design were developed by using the coordinate point system on the blade in the PRO-E/CREO software. Five different blade models were developed for analysis purposes. Based on the flow rates and heads, the blade profiles were analyzed using the ANSYS software to check and compare the output results for optimization of the blades. The results show that, by changing the blade profile angle and its geometry, different blade sizes and geometries can be optimized using computational techniques with changes to the CAD models. (author)

  10. Blade Profile Optimization of Kaplan Turbine Using CFD Analysis

    Directory of Open Access Journals (Sweden)

    Aijaz Bashir Janjua

    2013-10-01

    Full Text Available Utilization of hydro-power as a renewable energy source is of prime importance in the world now. Hydropower energy is abundantly available in the form of falls, canals, rivers, dams, etc. This means there are various types of sites with different parameters such as flow rate, head, etc. Depending upon the site, water turbines are designed and manufactured to harness the hydro-power energy. Low head turbines on run-of-river sites are widely used for this purpose. Low head turbines are classified as reaction turbines. For run-of-river sites, depending upon the variety of site data, low head Kaplan turbines are selected, designed and manufactured. For any given site requirement, it becomes essential to design the turbine runner blades through optimization of the CAD model of the blade profile. This paper presents an optimization technique carried out on the complex geometry of the blade profile through static and dynamic computational analysis, changing the blade profile geometry at five different angles in the 3D (three-dimensional) CAD model. The complex blade geometry and design were developed by using the coordinate point system on the blade in the PRO-E/CREO software. Five different blade models were developed for analysis purposes. Based on the flow rates and heads, the blade profiles were analyzed using the ANSYS software to check and compare the output results for optimization of the blades. The results show that, by changing the blade profile angle and its geometry, different blade sizes and geometries can be optimized using computational techniques with changes to the CAD models.

  11. Analysis and optimization of hybrid electric vehicle thermal management systems

    Science.gov (United States)

    Hamut, H. S.; Dincer, I.; Naterer, G. F.

    2014-02-01

    In this study, the thermal management system of a hybrid electric vehicle is optimized using single and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are determined to be 0.29, 28 ¢/h and 77.3 mPts/h, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.

  12. Risk-based optimization of pipe inspections in large underground networks with imprecise information

    International Nuclear Information System (INIS)

    Mancuso, A.; Compare, M.; Salo, A.; Zio, E.; Laakso, T.

    2016-01-01

    In this paper, we present a novel risk-based methodology for optimizing the inspections of large underground infrastructure networks in the presence of incomplete information about the network features and parameters. The methodology employs Multi Attribute Value Theory to assess the risk of each pipe in the network, whereafter the optimal inspection campaign is built with Portfolio Decision Analysis (PDA). Specifically, Robust Portfolio Modeling (RPM) is employed to identify Pareto-optimal portfolios of pipe inspections. The proposed methodology is illustrated by reporting a real case study on the large-scale maintenance optimization of the sewerage network in Espoo, Finland. - Highlights: • Risk-based approach to optimize pipe inspections on large underground networks. • Reasonable computational effort to select efficient inspection portfolios. • Possibility to accommodate imprecise expert information. • Feasibility of the approach shown by Espoo water system case study.
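
    As a loose, hedged illustration of building an inspection portfolio from per-pipe risk scores under a budget (the paper's Robust Portfolio Modeling handles imprecise information and Pareto optimality, which this toy greedy risk-per-cost heuristic does not), consider the following sketch with invented data.

        import numpy as np

        def inspection_portfolio(risk, cost, budget):
            """Pick pipes in decreasing risk-per-cost order until the budget
            is exhausted (a greedy heuristic, not the paper's RPM)."""
            risk, cost = np.asarray(risk, float), np.asarray(cost, float)
            chosen, spent = [], 0.0
            for i in np.argsort(-risk / cost):
                if spent + cost[i] <= budget:
                    chosen.append(int(i))
                    spent += cost[i]
            return chosen

        # Hypothetical pipes with MAVT-style risk scores and inspection costs
        print(inspection_portfolio(risk=[0.9, 0.4, 0.7, 0.2, 0.8],
                                   cost=[3.0, 1.0, 2.0, 1.0, 4.0], budget=6.0))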

  13. Replica Analysis for Portfolio Optimization with Single-Factor Model

    Science.gov (United States)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
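
    A numerical companion to the setting described above, under assumed parameters: build the single-factor covariance C = sigma_f^2 * beta beta^T + sigma_eps^2 * I, compute the minimum-variance weights w proportional to C^{-1} 1, and compare the resulting risk with the independent-returns case. The replica-analysis formulas themselves are not reproduced; this only illustrates that correlation raises the attainable minimal investment risk.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 50
        beta = rng.normal(1.0, 0.3, N)    # factor loadings of the assets
        sigma_f, sigma_eps = 0.2, 0.3     # factor / idiosyncratic volatility

        # Single-factor covariance: C = sigma_f^2 * beta beta^T + sigma_eps^2 * I
        C = sigma_f**2 * np.outer(beta, beta) + sigma_eps**2 * np.eye(N)

        # Minimum-variance weights under sum(w) = 1 are proportional to C^{-1} 1
        w = np.linalg.solve(C, np.ones(N))
        w /= w.sum()
        risk_correlated = w @ C @ w

        # Independent returns: C is diagonal, equal weights are optimal
        risk_independent = sigma_eps**2 / N
        print(risk_correlated, risk_independent)  # correlation raises the minimum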

  14. Aerodynamic shape optimization using preconditioned conjugate gradient methods

    Science.gov (United States)

    Burgreen, Greg W.; Baysal, Oktay

    1993-01-01

    In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational efforts required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA-012) airfoil in inviscid transonic flow and at zero degree angle-of-attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.
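
    The preconditioned conjugate gradient iteration at the heart of such procedures is compact enough to sketch. Below is a generic Jacobi-preconditioned CG for a symmetric positive definite system; the aerodynamic design equations themselves are far more involved, and the test matrix is invented.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
            """Preconditioned conjugate gradient for a symmetric positive
            definite A; M_inv applies the inverse of the preconditioner."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # SPD test system with a simple Jacobi (diagonal) preconditioner
        rng = np.random.default_rng(0)
        Q = rng.standard_normal((100, 100))
        A = Q @ Q.T + 100.0 * np.diag(np.arange(1.0, 101.0))
        b = rng.standard_normal(100)
        d = np.diag(A)
        x = pcg(A, b, lambda r: r / d)
        print(np.linalg.norm(A @ x - b))  # small residual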

  15. Reliability analysis of large scaled structures by optimization technique

    International Nuclear Information System (INIS)

    Ishikawa, N.; Mihara, T.; Iizuka, M.

    1987-01-01

    This paper presents a reliability analysis based on an optimization technique using the PNET (Probabilistic Network Evaluation Technique) method for highly redundant structures having a large number of collapse modes. This approach makes the best use of the merits of the optimization technique while incorporating the idea of the PNET method. The analytical process involves the minimization of the safety index of the representative mode, subject to satisfaction of the mechanism condition and of positive external work. The procedure entails the sequential solution of a series of NLP (nonlinear programming) problems, where the correlation condition of the PNET method pertaining to the representative mode is taken as an additional constraint for the next analysis. Upon succeeding iterations, the final analysis is reached when the collapse probability of the subsequent mode is much smaller than the value at the first mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes, classified by the extent of correlation. Then, in order to confirm the validity of the proposed method, a conventional Monte Carlo simulation is also revised by using the collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)

  16. Analysis of modal frequency optimization of railway vehicle car body

    Directory of Open Access Journals (Sweden)

    Wenjing Sun

    2016-04-01

    Full Text Available High structural modal frequencies of the car body are beneficial as they ensure better vibration control and enhance the ride quality of railway vehicles. Modal sensitivity optimization and elastic suspension parameters for the equipment beneath the chassis of the car body are proposed in order to improve the modal frequencies of car bodies under service conditions. Modal sensitivity optimization is based on sensitivity analysis theory, which takes the thickness of the body frame at various positions as the optimization variables. The equipment suspension design analyzes the influence of suspension parameters on the modal frequencies of the car body through an equipment-car body coupled model. Results indicate that both methods can effectively improve the modal parameters of the car body. Modal sensitivity optimization increases the vertical bending frequency from 9.70 to 10.60 Hz, while optimization of the elastic suspension parameters increases the vertical bending frequency to 10.51 Hz. The suspension design can be applied without altering the structure of the car body while ensuring better ride quality.

  17. Optimal choice of basis functions in the linear regression analysis

    International Nuclear Information System (INIS)

    Khotinskij, A.M.

    1988-01-01

    The problem of the optimal choice of basis functions in linear regression analysis is investigated. A step algorithm is suggested, together with an estimate of its efficiency that holds for a finite number of measurements. Conditions providing a probability of correct choice close to 1 are formulated. The application of the step algorithm to the analysis of decay curves is substantiated. 8 refs
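
    One plausible reading of such a step algorithm, shown only as an assumption-laden sketch rather than the paper's actual procedure, is greedy forward selection of basis functions: at each step, add the candidate that most reduces the least-squares residual and stop when nothing improves. Here the candidates are exponentials fitted to a synthetic decay curve.

        import numpy as np

        def stepwise_basis_selection(t, y, candidates, max_terms=3):
            """Greedy forward selection: at each step add the basis function that
            most reduces the least-squares residual; stop when nothing improves."""
            chosen, remaining, best_resid = [], list(range(len(candidates))), np.inf
            for _ in range(max_terms):
                scores = []
                for j in remaining:
                    B = np.column_stack([candidates[k](t) for k in chosen + [j]])
                    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
                    scores.append((np.sum((B @ coef - y) ** 2), j))
                resid, j = min(scores)
                if resid >= best_resid:
                    break
                best_resid = resid
                chosen.append(j)
                remaining.remove(j)
            return chosen, best_resid

        # Synthetic decay curve; candidates are exponentials exp(-lam * t)
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 200)
        y = 3 * np.exp(-0.5 * t) + np.exp(-2.0 * t) + 0.01 * rng.standard_normal(t.size)
        lams = [0.1, 0.5, 1.0, 2.0, 5.0]
        cands = [lambda t, lam=lam: np.exp(-lam * t) for lam in lams]
        print(stepwise_basis_selection(t, y, cands))  # first picks: 0.5 and 2.0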

  18. Optimization of a flow injection analysis system for multiple solvent extraction

    International Nuclear Information System (INIS)

    Rossi, T.M.; Shelly, D.C.; Warner, I.M.

    1982-01-01

    The performance of a multistage flow injection analysis solvent extraction system has been optimized. The effect of solvent segmentation devices, extraction coils, and phase separators on performance characteristics is discussed. Theoretical consideration is given to the effects and determination of dispersion and the extraction dynamics within both glass and Teflon extraction coils. The optimized system has a sample recovery similar to an identical manual procedure and a 1.5% relative standard deviation between injections. Sample throughput time is under 5 min. These characteristics represent significant improvements over the performance of the same system before optimization. 6 figures, 2 tables

  19. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  20. Shape optimization of the stokes flow problem based on isogeometric analysis

    DEFF Research Database (Denmark)

    Park, Byong-Ug; Seo, Yu-Deok; Sigmund, Ole

    2013-01-01

    Design-dependent loads related to boundary shape, such as pressure and convection loads, have been a challenging issue in optimization. Isogeometric analysis, where the analysis model has smooth boundaries described by spline functions can handle design-dependent loads with ease. In the present s...

  1. An optimized method for automated analysis of algal pigments by HPLC

    NARCIS (Netherlands)

    van Leeuwe, M. A.; Villerius, L. A.; Roggeveld, J.; Visser, R. J. W.; Stefels, J.

    2006-01-01

    A recent development in algal pigment analysis by high-performance liquid chromatography (HPLC) is the application of automation. An optimization of a complete sampling and analysis protocol applied specifically in automation has not yet been performed. In this paper we show that automation can only

  2. Multi-component controllers in reactor physics optimality analysis

    International Nuclear Information System (INIS)

    Aldemir, T.

    1978-01-01

    An algorithm is developed for the optimality analysis of thermal reactor assemblies with multi-component control vectors. The neutronics of the system under consideration is assumed to be described by the two-group diffusion equations and constraints are imposed upon the state and control variables. It is shown that if the problem is such that the differential and algebraic equations describing the system can be cast into a linear form via a change of variables, the optimal control components are piecewise constant functions and the global optimal controller can be determined by investigating the properties of the influence functions. Two specific problems are solved utilizing this approach. A thermal reactor consisting of fuel, burnable poison and moderator is found to yield maximal power when the assembly consists of two poison zones and the power density is constant throughout the assembly. It is shown that certain variational relations have to be considered to maintain the activeness of the system equations as differential constraints. The problem of determining the maximum initial breeding ratio for a thermal reactor is solved by treating the fertile and fissile material absorption densities as controllers. The optimal core configurations are found to consist of three fuel zones for a bare assembly and two fuel zones for a reflected assembly. The optimum fissile material density is determined to be inversely proportional to the thermal flux

  3. Modern optimization with R

    CERN Document Server

    Cortez, Paulo

    2014-01-01

    The goal of this book is to gather in a single document the most relevant concepts related to modern optimization methods, showing how such concepts and methods can be addressed using the open source, multi-platform R tool. Modern optimization methods, also known as metaheuristics, are particularly useful for solving complex problems for which no specialized optimization algorithm has been developed. These methods often yield high quality solutions with a more reasonable use of computational resources (e.g. memory and processing effort). Examples of popular modern methods discussed in this book are: simulated annealing; tabu search; genetic algorithms; differential evolution; and particle swarm optimization. This book is suitable for undergraduate and graduate students in Computer Science, Information Technology, and related areas, as well as data analysts interested in exploring modern optimization methods using R.

  4. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2015-11-01

    Significant research has been conducted on collective communication operations, in particular MPI broadcast, on distributed memory platforms. Most research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimizing the legacy MPI broadcast algorithms that are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of the extreme scale of future HPC platforms. It is based on a hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.

  5. Anisotropic piezoelectric twist actuation of helicopter rotor blades: Aeroelastic analysis and design optimization

    Science.gov (United States)

    Wilkie, William Keats

    1997-12-01

    Determining the optimum tradeoff between blade torsional stiffness and piezoelectric twist actuation authority is the subject of the third study. For this investigation, a linearized hovering-flight eigenvalue analysis is developed. Linear optimal control theory is then utilized to develop an optimum active twist blade design in terms of reducing structural energy and control effort cost. The forward flight vibratory loads characteristics of the torsional-stiffness-optimized active twist blade are then examined using the nonlinear, forward flight aeroelastic analysis. The optimized active twist rotor blade is shown to have improved passive and active vibratory loads characteristics relative to the baseline active twist blades.

  6. Multi-objective optimization of GPU3 Stirling engine using third order analysis

    International Nuclear Information System (INIS)

    Toghyani, Somayeh; Kasaeian, Alibakhsh; Hashemabadi, Seyyed Hasan; Salimi, Morteza

    2014-01-01

    Highlights: • A third-order analysis is carried out for optimization of a Stirling engine. • The triple-objective optimization is performed on a GPU3 Stirling engine. • A multi-objective optimization is carried out for a Stirling engine. • The results are compared with a previous experimental work to check the model improvement. • The methods of TOPSIS, Fuzzy, and LINMAP are compared with each other in terms of optimization. - Abstract: The Stirling engine is an external combustion engine that uses any external heat source to generate mechanical power and operates in closed cycles. These engines are good choices for use in power generation systems because they present a reasonable theoretical efficiency, closer to the Carnot efficiency than that of other reciprocating thermal engines. Hence, many studies have been conducted on Stirling engines, and third-order thermodynamic analysis is one of them. In this study, multi-objective optimization with four decision variables, including the temperature of the heat source, the stroke, the mean effective pressure, and the engine frequency, was applied in order to increase the efficiency and output power and reduce the pressure drop. Three decision-making procedures were applied to select optimal answers from the results. Finally, the applied methods were compared with results obtained from an experimental work, and good agreement was observed
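
    Of the decision-making procedures named in the highlights, TOPSIS is the simplest to sketch. The fragment below ranks hypothetical Pareto alternatives by closeness to an ideal point; the criteria, weights and numbers are invented for illustration and are not the paper's data.

        import numpy as np

        def topsis(alternatives, weights, benefit):
            """Rank alternatives (rows) over criteria (columns); benefit[j] is
            True when larger values of criterion j are better."""
            A = np.asarray(alternatives, dtype=float)
            R = A / np.linalg.norm(A, axis=0)      # vector-normalize each criterion
            V = R * np.asarray(weights, dtype=float)
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_plus = np.linalg.norm(V - ideal, axis=1)
            d_minus = np.linalg.norm(V - nadir, axis=1)
            return d_minus / (d_plus + d_minus)    # closeness to the ideal point

        # Columns: efficiency (max), output power in kW (max), pressure drop in kPa (min)
        pareto = [[0.38, 4.1, 12.0], [0.35, 4.6, 10.0], [0.40, 3.8, 15.0]]
        score = topsis(pareto, weights=[0.4, 0.4, 0.2], benefit=[True, True, False])
        print(np.argmax(score), score)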

  7. A Game Analysis of Optimal Advertising Efforts and Direct Price Discount Strategy for the Two-level Supply Chain

    Institute of Scientific and Technical Information of China (English)

    何丽红; 廖茜; 刘蒙蒙; 苑春

    2017-01-01

    In a two-level supply chain where demand is jointly affected by advertising effort levels and the direct price discount offered by the manufacturer, models of the supply chain members' cooperative advertising effort levels and price discount strategies are established. By comparing the Nash equilibrium model, the manufacturer-led Stackelberg game model, the retailer-led Stackelberg game model and the cooperative game model, the optimal advertising effort levels of the manufacturer and the retailer, and the optimal price discount strategy offered by the manufacturer to consumers, are obtained under different scenarios. The results show that only when the price elasticity of the product reaches a certain level will the manufacturer offer consumers a price discount; the greater the price elasticity, the greater the direct price discount the manufacturer can offer, and consumers obtain a more favorable price under the cooperative game. When the manufacturer offers consumers the optimal direct price discount, the advertising enthusiasm of both the manufacturer and the retailer is positively correlated with price elasticity. In addition, the advertising costs of the manufacturer and the retailer are proportionally related, so either party can estimate the other's advertising cost from its own advertising investment. Finally, a Pareto improvement is applied to fully coordinate the maximum profit of the supply chain system under the cooperative game, achieving a "three-way win" for both supply chain participants and consumers. These conclusions provide guidance for choosing cooperation modes among supply chain participants and for setting optimal advertising effort levels and direct price discount strategies.

  8. Analysis and optimization of gyrokinetic toroidal simulations on homogeneous and heterogeneous platforms

    International Nuclear Information System (INIS)

    Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; Wang, Bei; Oliker, Leonid

    2013-01-01

    The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU- and GPU-based architectures. Our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA; it achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems and scales efficiently to tens of thousands of cores.

  9. Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades

    Science.gov (United States)

    Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang

    2017-12-01

    This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, a dichotomy (bisection) search, and finite element analysis. A lumped parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, thereby bringing in both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that the optimized arrangement can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework improves the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.

  10. Modelling of Rabies Transmission Dynamics Using Optimal Control Analysis

    Directory of Open Access Journals (Sweden)

    Joshua Kiddy K. Asamoah

    2017-01-01

    Full Text Available We examine an optimal way of eradicating rabies transmission from dogs into the human population, using preexposure prophylaxis (vaccination) and postexposure prophylaxis (treatment) due to public education. We obtain the disease-free equilibrium, the endemic equilibrium, the stability, and the sensitivity analysis of the optimal control model. Using Latin hypercube sampling (LHS), the forward-backward sweep scheme and the fourth-order Runge-Kutta numerical method, we predict that the Global Alliance for Rabies Control's aim of working to eliminate deaths from canine rabies by 2030 is attainable through mass vaccination of susceptible dogs and continuous use of pre- and postexposure prophylaxis in humans.
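
A minimal forward-backward sweep skeleton, assuming a generic one-state control problem rather than the multi-compartment rabies model of the paper: the state is integrated forward and the adjoint backward with fourth-order Runge-Kutta, and the control is updated from the optimality condition until the sweep converges.

```python
import numpy as np

# Minimize J = integral of (A*x + B*u^2) dt subject to
#   x' = r*x*(1 - x) - u*x,  x(0) = x0,  0 <= u <= 1.
T, N = 5.0, 1000
h = T / N
A, B, r, x0 = 1.0, 2.0, 0.5, 0.3
u = np.zeros(N + 1)
x = np.zeros(N + 1)
lam = np.zeros(N + 1)

f = lambda xv, uv: r * xv * (1 - xv) - uv * xv               # state dynamics
g = lambda lv, xv, uv: -(A + lv * (r * (1 - 2 * xv) - uv))   # adjoint: lam' = -dH/dx

for sweep in range(50):
    # Forward sweep: integrate the state with RK4.
    x[0] = x0
    for i in range(N):
        um = 0.5 * (u[i] + u[i + 1])
        k1 = f(x[i], u[i])
        k2 = f(x[i] + 0.5 * h * k1, um)
        k3 = f(x[i] + 0.5 * h * k2, um)
        k4 = f(x[i] + h * k3, u[i + 1])
        x[i + 1] = x[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    # Backward sweep: integrate the adjoint with RK4 from lam(T) = 0.
    lam[N] = 0.0
    for i in range(N, 0, -1):
        um, xm = 0.5 * (u[i] + u[i - 1]), 0.5 * (x[i] + x[i - 1])
        k1 = g(lam[i], x[i], u[i])
        k2 = g(lam[i] - 0.5 * h * k1, xm, um)
        k3 = g(lam[i] - 0.5 * h * k2, xm, um)
        k4 = g(lam[i] - h * k3, x[i - 1], u[i - 1])
        lam[i - 1] = lam[i] - h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    # Optimality condition dH/du = 2*B*u - lam*x = 0, clipped to the bounds.
    u_new = np.clip(lam * x / (2 * B), 0.0, 1.0)
    if np.max(np.abs(u_new - u)) < 1e-6:
        u = u_new
        break
    u = 0.5 * (u + u_new)   # relaxation stabilizes the sweep

print("sweeps used:", sweep + 1, "| mean control:", float(u.mean()))
```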

  11. Learning from Experiments in Optimization

    DEFF Research Database (Denmark)

    Winthereik, Brit Ross; Jensen, Casper Bruun

    2017-01-01

    This article examines attempts by professionals in the Danish branch of the environmental NGO NatureAid to optimize their practice by developing a local standard. Describing these efforts as an experiment in optimization, we outline a post-critical alternative to critiques that centre on the redu… of management as 'broken up'; as a distributed, ambient activity, variably performed by different actors using different standards.

  12. Efek Keadilan Remunerasi, Kompetensi Atasan dan Kohesivitas Kelompok terhadap Withholding Effort

    Directory of Open Access Journals (Sweden)

    Ida Ayu Kartika Maharani

    2016-12-01

    Full Text Available Withholding effort is the tendency of an employee to reduce their work contribution, that is, the possibility of an individual giving less than maximum effort on tasks associated with the job. The purpose of this study is to analyze the influence of remuneration fairness, supervisor competencies and group cohesiveness on withholding effort. The population in this study was all administrative employees with the status of civil servants and probationary civil servants who were actively working at the Institute Hindu Dharma Negeri Denpasar. The number of respondents was 80. The research data were primary data obtained from questionnaires. This study used confirmatory factor analysis and multiple linear regression analysis as the analytic techniques. The results show that fairness of remuneration has a negative and significant effect on withholding effort, supervisor competencies have a negative and significant effect on withholding effort, and group cohesiveness has a negative and significant effect on withholding effort.

  13. Integration of numerical analysis tools for automated numerical optimization of a transportation package design

    International Nuclear Information System (INIS)

    Witkowski, W.R.; Eldred, M.S.; Harding, D.C.

    1994-01-01

    The use of state-of-the-art numerical analysis tools to determine the optimal design of a radioactive material (RAM) transportation container is investigated. The design of a RAM package's components involves a complex coupling of structural, thermal, and radioactive shielding analyses. The final design must adhere to very strict design constraints. The current technique used by cask designers is uncoupled and involves designing each component separately with respect to its driving constraint. With the use of numerical optimization schemes, the complex couplings can be considered directly, and the performance of the integrated package can be maximized with respect to the analysis conditions. This can lead to more efficient package designs. Thermal and structural accident conditions are analyzed in the shape optimization of a simplified cask design. In this paper, details of the integration of numerical analysis tools, development of a process model, nonsmoothness difficulties with the optimization of the cask, and preliminary results are discussed.

  14. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides

    Directory of Open Access Journals (Sweden)

    Mark D Zarella

    2015-01-01

    Full Text Available Hematoxylin and eosin (H&E staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance of quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma. By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image

  15. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the 'failure probability function (FPF)'. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
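
The reuse of weighted samples that underlies a failure probability function can be sketched with a one-dimensional toy limit state: samples are drawn once from an importance density and re-weighted, after which the failure probability can be evaluated for any design value without new simulations. The densities and limit state below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Limit state g(x, d) = d - x: failure when x exceeds the design margin d.
n = 20_000
h = stats.norm(loc=3.0, scale=1.0)   # importance density near the failure region
f = stats.norm(loc=0.0, scale=1.0)   # true density of X
x = h.rvs(n, random_state=rng)
w = f.pdf(x) / h.pdf(x)              # importance weights, computed once

def failure_probability(d):
    # Same samples and weights reused for every design value d.
    return np.mean(w * (x > d))

for d in (2.0, 2.5, 3.0):
    print(f"d={d}: Pf ~ {failure_probability(d):.2e}  exact = {f.sf(d):.2e}")
```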

  16. Global-Local Analysis and Optimization of a Composite Civil Tilt-Rotor Wing

    Science.gov (United States)

    Rais-Rohani, Masoud

    1999-01-01

    This report gives highlights of an investigation on the design and optimization of a thin composite wing box structure for a civil tilt-rotor aircraft. Two different concepts are considered for the cantilever wing: (a) a thin monolithic skin design, and (b) a thick sandwich skin design. Each concept is examined with three different skin ply patterns based on various combinations of 0, ±45, and 90 degree plies. The global-local technique is used in the analysis and optimization of the six design models. The global analysis is based on a finite element model of the wing-pylon configuration while the local analysis uses a uniformly supported plate representing a wing panel. Design allowables include those on vibration frequencies, panel buckling, and material strength. The design optimization problem is formulated as one of minimizing the structural weight subject to strength, stiffness, and dynamic constraints. Six different loading conditions based on three different flight modes are considered in the design optimization. The results of this investigation reveal that of all the loading conditions the one corresponding to the rolling pull-out in the airplane mode is the most stringent. Also, the frequency constraints are found to drive the skin thickness limits, rendering the buckling constraints inactive. The optimum skin ply pattern for the monolithic skin concept is found to be ((0/±45/90/(0/90)_2)_s)_s, while for the sandwich skin concept the optimal ply pattern is found to be ((0/±45/90)_2s)_s.

  17. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    Science.gov (United States)

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
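
A toy version of the GA formulation might look as follows, assuming a small binarized leak-sensitivity matrix: an individual is a set of sensor nodes, and its fitness is the number of leak pairs whose signatures the chosen sensors cannot distinguish. All sizes and the random sensitivity matrix are illustrative, not network data from the paper.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(42)

# Binarized leak-sensitivity matrix: rows = candidate sensor nodes,
# columns = leak scenarios. Entry 1 means the sensor reacts to that leak.
n_nodes, n_leaks, n_sensors = 20, 30, 3
S = (rng.random((n_nodes, n_leaks)) > 0.5).astype(int)

def non_isolable(sensor_idx):
    """Count leak pairs with identical signatures on the chosen sensors."""
    sig = S[sensor_idx, :]
    return sum(np.array_equal(sig[:, i], sig[:, j])
               for i, j in combinations(range(n_leaks), 2))

def random_individual():
    return rng.choice(n_nodes, size=n_sensors, replace=False)

pop = [random_individual() for _ in range(40)]
for gen in range(60):
    fit = np.array([non_isolable(ind) for ind in pop])

    def pick():  # tournament selection; lower fitness is better
        a, b = rng.integers(len(pop), size=2)
        return pop[a] if fit[a] <= fit[b] else pop[b]

    children = []
    while len(children) < len(pop):
        p1, p2 = pick(), pick()
        genes = np.unique(np.concatenate([p1, p2]))   # crossover: union of parents
        child = rng.choice(genes, size=n_sensors, replace=False)
        if rng.random() < 0.2:                        # mutation: swap one sensor
            child[rng.integers(n_sensors)] = rng.integers(n_nodes)
        child = np.unique(child)
        children.append(child if len(child) == n_sensors else random_individual())
    pop = children

best = min(pop, key=non_isolable)
print("sensors:", sorted(best.tolist()), "non-isolable pairs:", non_isolable(best))
```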

  18. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Luis E. Garza-Castañón

    2013-11-01

    Full Text Available This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.

  19. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    Science.gov (United States)

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  20. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
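
The random-versus-midpoint comparison can be reproduced in miniature: the sketch below builds both kinds of initial Latin hypercube designs and scores them with a simple maximin-distance space-filling criterion (the OLHS optimization step that would follow is omitted).

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(7)

def lhs(n, d, midpoint=False):
    """Basic Latin hypercube: one sample per stratum in every dimension."""
    u = 0.5 if midpoint else rng.random((n, d))        # position inside each stratum
    strata = np.array([rng.permutation(n) for _ in range(d)]).T
    return (strata + u) / n

def maximin(design):
    return pdist(design).min()   # larger minimum pairwise distance = more space-filling

n, d = 50, 4
print("random LHS   maximin:", maximin(lhs(n, d)))
print("midpoint LHS maximin:", maximin(lhs(n, d, midpoint=True)))
```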

  1. No Cost – Low Cost Compressed Air System Optimization in Industry

    Science.gov (United States)

    Dharma, A.; Budiarsa, N.; Watiniasih, N.; Antara, N. G.

    2018-04-01

    Energy conservation is a systematic, integrated effort to preserve energy sources and improve the efficiency of energy utilization, that is, to use energy efficiently without cutting essential energy use. Energy conservation efforts are applied at all stages of utilization, from the exploitation of energy resources to final use, by using efficient technology and cultivating an energy-efficient lifestyle. The most common way is to promote end-use energy efficiency in industry and to overcome barriers to achieving such efficiency by using system energy optimization programs. The facts show that energy saving efforts in practice usually focus only on replacing equipment and not on an overall system improvement effort. In this research, a framework for sustainable energy reduction in companies that have or have not implemented an energy management system (EnMS) is developed: a systematic technical approach to accurately evaluating a compressed-air system and its optimization potential through observation, measurement and verification of environmental conditions and processes. The physical quantities of the system, such as air flow, pressure and electrical power, measured at given times, are then processed using comparative analysis methods. In this industry, such a system approach provides greater potential energy savings than a component approach, at no cost to the lowest cost (no cost - low cost). The process of evaluating energy utilization and energy saving opportunities provides recommendations for increasing efficiency in the industry, reducing CO2 emissions and improving environmental quality.

  2. An Integrated Method for Airfoil Optimization

    Science.gov (United States)

    Okrent, Joshua B.

    Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However, this method can prove to be overwhelmingly time-consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed is different from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and a single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, the CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families, allowing all reasonable configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts, since by focusing on only one airfoil family they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global rather than a local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs, as well as to including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal

  3. Site suitability analysis and route optimization for solid waste ...

    African Journals Online (AJOL)

    Solid waste management is a tedious task facing both developing and developed countries. Site suitability analysis and route optimization for solid waste disposal can make waste management cheaper and can be used for sustainable development. However, if the disposal site(s) is/are not sited and handle ...

  4. Permanent Magnet Flux-Switching Machine, Optimal Design and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Liviu Emilian Somesan

    2013-01-01

    Full Text Available In this paper an analytical sizing-design procedure for a typical permanent magnet flux-switching machine (PMFSM) with 12 stator and 10 rotor poles is presented. An optimal design, based on the Hooke-Jeeves method with the objective function of maximum torque density, is performed. The results were validated via two-dimensional finite element analysis (2D-FEA) applied to the optimized structure. The influence of the permanent magnet (PM) dimensions and type, and of the rotor pole shape, on the machine performance was also studied via 2D-FEA.

  5. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision-based techniques and spectral signatures is described. The vision instruments for food analysis as well as datasets of the food items… used in this thesis are described. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis, and linear versus non-linear approaches. One supervised feature selection algorithm… (SSPCA) and DCT-based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods, together with some other state-of-the-art statistical and mathematical analysis techniques, are applied on datasets of different food items: meat, dairy products, fruits…

  6. CT Dose Optimization in Pediatric Radiology: A Multiyear Effort to Preserve the Benefits of Imaging While Reducing the Risks.

    Science.gov (United States)

    Greenwood, Taylor J; Lopez-Costa, Rodrigo I; Rhoades, Patrick D; Ramírez-Giraldo, Juan C; Starr, Matthew; Street, Mandie; Duncan, James; McKinstry, Robert C

    2015-01-01

    The marked increase in radiation exposure from medical imaging, especially in children, has caused considerable alarm and spurred efforts to preserve the benefits but reduce the risks of imaging. Applying the principles of the Image Gently campaign, data-driven process and quality improvement techniques such as process mapping and flowcharting, cause-and-effect diagrams, Pareto analysis, statistical process control (control charts), failure mode and effects analysis, "lean" or Six Sigma methodology, and closed feedback loops led to a multiyear program that has reduced overall computed tomographic (CT) examination volume by more than fourfold and concurrently decreased radiation exposure per CT study without compromising diagnostic utility. This systematic approach involving education, streamlining access to magnetic resonance imaging and ultrasonography, auditing with comparison with benchmarks, applying modern CT technology, and revising CT protocols has led to a more than twofold reduction in CT radiation exposure between 2005 and 2012 for patients at the authors' institution while maintaining diagnostic utility. (©)RSNA, 2015.

  7. Bankruptcy Prevention: New Effort to Reflect on Legal and Social Changes.

    Science.gov (United States)

    Kliestik, Tomas; Misankova, Maria; Valaskova, Katarina; Svabova, Lucia

    2018-04-01

    Every corporation has an economic and moral responsibility to its stockholders to perform well financially. However, the number of bankruptcies in Slovakia has been growing for several years without an apparent macroeconomic cause. To prevent a rapid deterioration and an outflow of foreign capital, various efforts are being zealously implemented. Robust analysis using conventional bankruptcy prediction tools revealed that the existing models are adaptable to local conditions, particularly local legislation. Furthermore, it was confirmed that most of these outdated tools have sufficient capability to warn of impending financial problems several years in advance. A novel bankruptcy prediction tool that outperforms the conventional models was developed. However, it is increasingly challenging to predict bankruptcy risk as corporations have become more global and more complex and as they have developed sophisticated schemes to hide their actual situations under the guise of "optimization" for tax authorities. Nevertheless, scepticism remains because economic engineers have established bankruptcy as a strategy to limit the liability resulting from court-imposed penalties.

  8. Effort-Based Decision Making: A Novel Approach for Assessing Motivation in Schizophrenia.

    Science.gov (United States)

    Green, Michael F; Horan, William P; Barch, Deanna M; Gold, James M

    2015-09-01

    Because negative symptoms, including motivational deficits, are a critical unmet need in schizophrenia, there are many ongoing efforts to develop new pharmacological and psychosocial interventions for these impairments. A common challenge of these studies involves how to evaluate and select optimal endpoints. Currently, all studies of negative symptoms in schizophrenia depend on ratings from clinician-conducted interviews. Effort-based decision-making tasks may provide a more objective, and perhaps more sensitive, endpoint for trials of motivational negative symptoms. These tasks assess how much effort a person is willing to exert for a given level of reward. This area has been well-studied with animal models of effort and motivation, and effort-based decision-making tasks have been adapted for use in humans. Very recently, several studies have examined physical and cognitive types of effort-based decision-making tasks in cross-sectional studies of schizophrenia, providing evidence for effort-related impairment in this illness. This article covers the theoretical background on effort-based decision-making tasks to provide a context for the subsequent articles in this theme section. In addition, we review the existing literature of studies using these tasks in schizophrenia, consider some practical challenges in adapting them for use in clinical trials in schizophrenia, and discuss interpretive challenges that are central to these types of tasks. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  9. Preventive maintenance: optimization of time - based discard decisions at the bruce nuclear generating station

    International Nuclear Information System (INIS)

    Doyle, E.K.; Jardine, A.K.S.

    2001-01-01

    The use of various maintenance optimization techniques at Bruce has led to cost-effective preventive maintenance applications for complex systems. As previously reported at ICONE 6 in New Orleans, 1996, several innovative practices reduced Reliability Centered Maintenance costs while maintaining the accuracy of the analysis. The optimization strategy has undergone further evolution, and at present an Integrated Maintenance Program (IMP) is in place in which an Expert Panel consisting of all players/experts proceeds through each system in a disciplined fashion and reaches agreement on all items under a rigorous time frame. It is well known that there are essentially three maintenance-based actions that can flow from a maintenance optimization analysis: condition-based maintenance, time-based maintenance and time-based discard. The present effort deals with time-based discard decisions. Maintenance data from the Remote On-Power Fuel Changing System were used. (author)

  10. 3rd International Conference on Numerical Analysis and Optimization : Theory, Methods, Applications and Technology Transfer

    CERN Document Server

    Grandinetti, Lucio; Purnama, Anton

    2015-01-01

    Presenting the latest findings in the field of numerical analysis and optimization, this volume balances pure research with practical applications of the subject. Accompanied by detailed tables, figures, and examinations of useful software tools, this volume will equip the reader to perform detailed and layered analysis of complex datasets. Many real-world complex problems can be formulated as optimization tasks. Such problems can be characterized as large scale, unconstrained, constrained, non-convex, non-differentiable, and discontinuous, and therefore require adequate computational methods, algorithms, and software tools. These same tools are often employed by researchers working in current IT hot topics such as big data, optimization and other complex numerical algorithms on the cloud, devising special techniques for supercomputing systems. The list of topics covered includes, but is not limited to: numerical analysis, numerical optimization, numerical linear algebra, numerical differential equations, opt...

  11. AITSO: A Tool for Spatial Optimization Based on Artificial Immune Systems

    Science.gov (United States)

    Zhao, Xiang; Liu, Yaolin; Liu, Dianfeng; Ma, Xiaoya

    2015-01-01

    A great challenge facing geocomputation and spatial analysis is spatial optimization, given that it involves various high-dimensional, nonlinear, and complicated relationships. Many efforts have been made with regard to this specific issue, and the strong ability of artificial immune system algorithms has been proven in previous studies. However, user-friendly professional software is still unavailable, which is a great impediment to the popularity of artificial immune systems. This paper describes a free, universal tool, named AITSO, which is capable of solving various optimization problems. It provides a series of standard application programming interfaces (APIs) which can (1) assist researchers in the development of their own problem-specific application plugins to solve practical problems and (2) allow the implementation of some advanced immune operators into the platform to improve the performance of an algorithm. As an integrated, flexible, and convenient tool, AITSO contributes to knowledge sharing and practical problem solving. It is therefore believed that it will advance the development and popularity of spatial optimization in geocomputation and spatial analysis. PMID:25678911
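
A minimal clonal-selection loop, the kind of immune algorithm a tool like AITSO hosts, might look as follows; the toy affinity function and the population and cloning parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def affinity(x):
    # Toy objective: higher affinity nearer the origin.
    return -np.sum(x**2, axis=1)

pop_size, dim, clones_per, iters = 20, 3, 5, 100
pop = rng.uniform(-4, 4, (pop_size, dim))
for _ in range(iters):
    pop = pop[np.argsort(affinity(pop))[::-1]]   # best antibodies first
    new_pop = [pop[0]]                            # elitism: keep the best
    for rank, antibody in enumerate(pop[: pop_size // 2]):
        # Hypermutation: better-ranked antibodies mutate with smaller steps.
        scale = 0.5 * (rank + 1) / pop_size
        clones = antibody + rng.normal(0, scale, (clones_per, dim))
        new_pop.append(clones[np.argmax(affinity(clones))])
    # Receptor editing: refill with random newcomers for diversity.
    while len(new_pop) < pop_size:
        new_pop.append(rng.uniform(-4, 4, dim))
    pop = np.array(new_pop)

print("best affinity:", affinity(pop).max())
```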

  12. AITSO: A Tool for Spatial Optimization Based on Artificial Immune Systems

    Directory of Open Access Journals (Sweden)

    Xiang Zhao

    2015-01-01

    Full Text Available A great challenge facing geocomputation and spatial analysis is spatial optimization, given that it involves various high-dimensional, nonlinear, and complicated relationships. Many efforts have been made with regard to this specific issue, and the strong ability of artificial immune system algorithms has been proven in previous studies. However, user-friendly professional software is still unavailable, which is a great impediment to the popularity of artificial immune systems. This paper describes a free, universal tool, named AITSO, which is capable of solving various optimization problems. It provides a series of standard application programming interfaces (APIs) which can (1) assist researchers in the development of their own problem-specific application plugins to solve practical problems and (2) allow the implementation of some advanced immune operators into the platform to improve the performance of an algorithm. As an integrated, flexible, and convenient tool, AITSO contributes to knowledge sharing and practical problem solving. It is therefore believed that it will advance the development and popularity of spatial optimization in geocomputation and spatial analysis.

  13. Optimization of Interior Permanent Magnet Motor by Quality Engineering and Multivariate Analysis

    Science.gov (United States)

    Okada, Yukihiro; Kawase, Yoshihiro

    This paper describes an optimization method based on the finite element method. Quality engineering and multivariate analysis are used as the optimization techniques. The optimizing method consists of two steps. In Step 1, the influence of the parameters on the output is obtained quantitatively; in Step 2, the number of FEM calculations is cut down. That is, the optimal combination of the design parameters, which satisfies the required characteristic, can be searched for efficiently. In addition, this method is applied to the design of an IPM motor to reduce the torque ripple. The final shape maintains the average torque and cuts down the torque ripple by 65%. Furthermore, the amount of permanent magnets can be reduced.

  14. Generalized canonical analysis based on optimizing matrix correlations and a relation with IDIOSCAL

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Cléroux, R.; Ten Berge, Jos M.F.

    1994-01-01

    Carroll's method for generalized canonical analysis of two or more sets of variables is shown to optimize the sum of squared inner-product matrix correlations between a consensus matrix and matrices with canonical variates for each set of variables. In addition, the method that analogously optimizes

  15. Analysis of Optimal Operation of an Energy Integrated Distillation Plant

    DEFF Research Database (Denmark)

    Li, Hong Wen; Hansen, C.A.; Gani, Rafiqul

    2003-01-01

    The efficiency of manufacturing systems can be significantly increased through the diligent application of control based on mathematical models, thereby enabling tighter integration of decision making with systems operation. In the present paper, an analysis of optimal operation of an energy integrated

  16. A Composite Contract for Coordinating a Supply Chain with Price and Effort Dependent Stochastic Demand

    Directory of Open Access Journals (Sweden)

    Yu-Shuang Liu

    2016-01-01

    Full Text Available As demand is more sensitive to price and sales effort, this paper investigates the issue of channel coordination for a supply chain with one manufacturer and one retailer facing price- and effort-dependent stochastic demand. A composite contract based on quantity-restricted returns and a target sales rebate can achieve coordination in this setting. Two main problems are addressed: (1) how to coordinate the decentralized supply chain; (2) how to determine the optimal sales effort level, pricing, and inventory decisions under the additive demand case. Numerical examples are presented to verify the effectiveness of the combined contract in supply chain coordination and highlight the model's sensitivities to parametric changes.

  17. Nonlinear Thermodynamic Analysis and Optimization of a Carnot Engine Cycle

    Directory of Open Access Journals (Sweden)

    Michel Feidt

    2016-06-01

    Full Text Available As part of the efforts to unify the various branches of Irreversible Thermodynamics, the proposed work reconsiders the approach of the Carnot engine, taking into account the finite physical dimensions (heat transfer conductances) and the finite speed of the piston. The models introduce the irreversibility of the engine by two methods involving different constraints. The first method introduces the irreversibility by a so-called irreversibility ratio in the entropy balance applied to the cycle, while in the second method it is emphasized by the entropy generation rate. Various forms of heat transfer laws are analyzed, but most of the results are given for the case of the linear law. Also, individual cases are studied and reported in order to provide a simple analytical form of the results. The engine model developed allowed a formal optimization using the calculus of variations.
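
For the special case of the linear heat transfer law with finite conductances and no internal irreversibility, the setting the paper generalizes reduces to the classical endoreversible (Curzon-Ahlborn) result. A compact statement of that special case, with $T_{Hs}, T_{Cs}$ the source and sink temperatures, $T_h, T_c$ the working-fluid temperatures, and $K_h, K_c$ the conductances:

```latex
% Endoreversible Carnot engine, linear heat-transfer law,
% finite conductances, internal irreversibilities neglected.
\begin{align}
  \dot{Q}_h &= K_h\,(T_{Hs} - T_h), &
  \dot{Q}_c &= K_c\,(T_c - T_{Cs}), \\
  \frac{\dot{Q}_h}{T_h} &= \frac{\dot{Q}_c}{T_c}
    \quad \text{(endoreversibility)}, &
  \dot{W} &= \dot{Q}_h - \dot{Q}_c, \\
  \eta &= 1 - \frac{T_c}{T_h}, &
  \eta_{\max \dot{W}} &= 1 - \sqrt{\frac{T_{Cs}}{T_{Hs}}}.
\end{align}
```

Maximizing the power output $\dot{W}$ over the intermediate temperatures $T_h$ and $T_c$ yields the square-root efficiency at maximum power in the last line.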

  18. Sensitivity analysis in dynamic optimization

    NARCIS (Netherlands)

    Evers, A.H.

    1980-01-01

    To find the optimal control of chemical processes, Pontryagin's minimum principle can be used. In practice, however, one is not only interested in the optimal solution, which satisfies the restrictions on the control, the initial and terminal conditions, and the process parameters. It is also

  19. Optimization and Data Analysis in Biomedical Informatics

    CERN Document Server

    Pardalos, Panos M; Xanthopoulos, Petros

    2012-01-01

    This volume covers some of the topics that are related to the rapidly growing field of biomedical informatics. On June 11-12, 2010, a workshop entitled 'Optimization and Data Analysis in Biomedical Informatics' was organized at The Fields Institute. Following this event, invited contributions were gathered based on the talks presented at the workshop, and additional invited chapters were contributed by the world's leading experts. In this publication, the authors share their expertise in the form of state-of-the-art research and review chapters, bringing together researchers from different disciplines.

  20. Thermal resistance analysis and optimization of photovoltaic-thermoelectric hybrid system

    International Nuclear Information System (INIS)

    Yin, Ershuai; Li, Qiang; Xuan, Yimin

    2017-01-01

    Highlights: • A detailed thermal resistance analysis of the PV-TE hybrid system is proposed. • c-Si and p-Si PV cells are shown to be inapplicable for the PV-TE hybrid system. • Some criteria for selecting coupling devices and the optimal design are obtained. • A detailed process for designing the practical PV-TE hybrid system is provided. - Abstract: Thermal resistance theory is introduced into the theoretical model of the photovoltaic-thermoelectric (PV-TE) hybrid system. A detailed thermal resistance analysis is proposed to optimize the design of the coupled system in terms of the optimal total conversion efficiency. Systems using four types of photovoltaic cells are investigated, including the monocrystalline silicon photovoltaic cell, the polycrystalline silicon photovoltaic cell, the amorphous silicon photovoltaic cell and the polymer photovoltaic cell. Three cooling methods, including natural cooling, forced air cooling and water cooling, are compared, which demonstrates a significant superiority of water cooling for the concentrating photovoltaic-thermoelectric hybrid system. The influences of the optical concentration ratio and the velocity of water are studied together and the optimal values are revealed. The impacts of the thermal resistances of the contact surface and the TE generator, and of the upper heat-loss thermal resistance, on the performance of the coupled system are investigated, respectively. The results indicate that the amorphous silicon PV cell and the polymer PV cell are more appropriate for the concentrating hybrid system. Enlarging the thermal resistance of the thermoelectric generator can significantly increase the performance of the coupled system using an amorphous silicon PV cell or a polymer PV cell.
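
The flavor of such a resistance analysis can be shown with a one-dimensional series network; the resistance values and heat flux below are illustrative assumptions, not the paper's data.

```python
# One-dimensional series thermal-resistance chain for a PV-TE stack:
# heat not converted by the PV cell -> contact layer -> TE generator
# -> water-cooled heat sink. All values are illustrative.
q = 2000.0          # heat flow through the stack, W
T_water = 300.0     # cooling-water temperature, K

R_contact = 0.002   # contact-surface resistance, K/W
R_te = 0.010        # thermoelectric-module resistance, K/W
R_sink = 0.004      # convective resistance of the water channel, K/W

# Temperatures propagate up the series chain from the heat sink.
T_cold = T_water + q * R_sink    # TE cold-side temperature
T_hot = T_cold + q * R_te        # TE hot-side temperature
T_cell = T_hot + q * R_contact   # PV cell temperature

print(f"TE dT = {T_hot - T_cold:.1f} K, PV cell T = {T_cell:.1f} K")
# Enlarging R_te raises the TE temperature difference (more TE power) but
# also raises the PV cell temperature, lowering PV efficiency: this is the
# trade-off the resistance analysis is used to optimize.
```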

  1. Analysis and optimization of bellows with general shape

    International Nuclear Information System (INIS)

    Koh, B.K.; Park, G.J.

    1998-01-01

    Bellows are commonly used in piping systems to absorb expansion and contraction in order to reduce stress. They have widespread applications, which include industrial and chemical plants, fossil and nuclear power systems, heating and cooling systems, and vehicle exhaust systems. A bellows is a component in piping systems which absorbs mechanical deformation with flexibility. Its geometry is an axially symmetric shell which consists of two toroidal shells and one annular plate or conical shell. In order to analyze the bellows, this study presents a finite element analysis using a conical frustum shell element. A finite element analysis program is developed to analyze various bellows. The formula for calculating the natural frequency of bellows is derived from simple beam theory. The formula for fatigue life is derived from experiments. A shape optimal design problem is formulated using multiple objective optimization. The multiple objective functions are transformed into a scalar function with weighting factors. The stiffness, strength, and specified stiffness are considered as the multiple objectives. The formulation has inequality constraints imposed on the natural frequencies, the fatigue limit, and the manufacturing conditions. Geometric parameters of the bellows are the design variables. The recursive quadratic programming algorithm is utilized to solve the problem.

  2. Efficient Solutions and Cost-Optimal Analysis for Existing School Buildings

    Directory of Open Access Journals (Sweden)

    Paolo Maria Congedo

    2016-10-01

    Full Text Available The recast of the Energy Performance of Buildings Directive (EPBD) describes a comparative methodological framework to promote energy efficiency and establish minimum energy performance requirements in buildings at the lowest costs. The aim of the cost-optimal methodology is to foster the achievement of nearly zero energy buildings (nZEBs), the new target for all new buildings by 2020, characterized by a high performance with a low energy requirement almost covered by renewable sources. The paper presents the results of the application of the cost-optimal methodology to two existing buildings located in the Mediterranean area. These buildings are a kindergarten and a nursery school that differ in construction period, materials and systems. Several combinations of measures have been applied to derive cost-effective efficient solutions for retrofitting. The cost-optimal level has been identified for each building and the best performing solutions have been selected considering both a financial and a macroeconomic analysis. The results illustrate the suitability of the methodology to assess cost-optimality and energy efficiency in school building refurbishment. The research shows the variants providing the most cost-effective balance between costs and energy saving. The cost-optimal solution reduces primary energy consumption by 85% and gas emissions by 82%–83% in each reference building.

  3. Accuracy Analysis and Parameters Optimization in Urban Flood Simulation by PEST Model

    Science.gov (United States)

    Keum, H.; Han, K.; Kim, H.; Ha, C.

    2017-12-01

    The risk of urban flooding has been increasing due to heavy rainfall, flash flooding and rapid urbanization. Rainwater pumping stations and underground reservoirs are used to actively take measures against flooding; however, flood damage in lowlands continues to occur. Inundation in urban areas results in the overflow of sewers. Therefore, for accurate two-dimensional flood analysis, it is important to implement a network system that reflects the intricately entangled network within a city, similar to the actual physical situation, along with accurate terrain, because of the effects of buildings and roads. The purpose of this study is to propose an optimal scenario construction procedure for watershed partitioning and parameterization for urban runoff analysis and pipe network analysis, and to increase the accuracy of flooded-area prediction through a coupled model. The optimal scenario procedure was verified by applying it to an actual drainage area in Seoul. In this study, optimization was performed using four parameters: Manning's roughness coefficient for conduits, watershed width, Manning's roughness coefficient for impervious areas, and Manning's roughness coefficient for pervious areas. The calibration range of the parameters was determined using the SWMM manual and the ranges used in previous studies, and the parameters were estimated using the automatic calibration method PEST. The correlation coefficient was high for the scenarios using PEST. The RPE and RMSE also showed high accuracy for the scenarios using PEST. In the case of RPE, the error was in the range of 13.9-28.9% in the no-parameter-estimation scenarios, but in the scenarios using PEST, the error range was reduced to 6.8-25.7%. Based on the results of this study, it can be concluded that more accurate flood analysis is possible when the optimum scenario is selected by determining the appropriate reference conduit for future urban flooding analysis and if the results are applied to various

  4. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

    Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and then a questionnaire survey was employed to collect professionals' views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified, including "Management Index", "Construction Dissipation Index", "Productivity Index", "Design Efficiency Index", "Transport Dissipation Index", "Material Increment Index" and "Depreciation Amortization Index". With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardization design to reduce capital cost. Hence, collaborative management is necessary for the cost management of prefabrication. Innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimize the capital cost and improve the cost performance by providing an evaluation and optimization model, which helps managers to
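
A compact sketch of the exploratory-factor-analysis step, assuming a random stand-in response matrix in place of the study's questionnaire data (scikit-learn's FactorAnalysis with varimax rotation plays the role of the EFA tool):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(11)

# Stand-in survey matrix: 120 questionnaire responses on 14 observed cost
# variables (Likert-style scores). A real study would load its survey data here.
X = rng.normal(size=(120, 14))

fa = FactorAnalysis(n_components=7, rotation="varimax", random_state=0)
fa.fit(X)

# Loadings: how strongly each observed cost variable maps onto each latent
# factor (the analogues of a "Management Index", "Productivity Index", etc.).
loadings = fa.components_.T                    # shape (14 variables, 7 factors)
for j in range(7):
    top = np.argsort(-np.abs(loadings[:, j]))[:3]
    print(f"factor {j}: strongest variables {top.tolist()}")
```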

  5. Effects of fishing effort allocation scenarios on energy efficiency and profitability: an individual-based model applied to Danish fisheries

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Andersen, Bo Sølgaard

    2010-01-01

    to the harbour, and (C) allocating effort towards optimising the expected area-specific profit per trip. The model is informed by data from each Danish fishing vessel >15 m after coupling its high resolution spatial and temporal effort data (VMS) with data from logbook landing declarations, sales slips, vessel...... effort allocation has actually been sub-optimal because increased profits from decreased fuel consumption and larger landings could have been obtained by applying a different spatial effort allocation. Based on recent advances in VMS and logbooks data analyses, this paper contributes to improve...

  6. Optimal Taxation under Income Uncertainty

    OpenAIRE

    Xianhua Dai

    2011-01-01

    Optimal taxation under income uncertainty has been extensively developed in expected utility theory, but it is still open for utility functions that are inseparable between income and effort. As an alternative model of decision-making under uncertainty, prospect theory (Kahneman and Tversky (1979), Tversky and Kahneman (1992)) has obtained empirical support, for example, in Kahneman and Tversky (1979) and Camerer and Lowenstein (2003). This work begins to explore optimal taxation in the context of prospect...

  7. Sensitivity analysis and optimization algorithms for 3D forging process design

    International Nuclear Information System (INIS)

    Do, T.T.; Fourment, L.; Laroussi, M.

    2004-01-01

    This paper presents several approaches for preform shape optimization in 3D forging. The process simulation is carried out using the FORGE3® finite element software, and the optimization problem concerns the shape of initial axisymmetric preforms. Several objective functions are considered, such as the forging energy, the forging force or a surface defect criterion. Both deterministic and stochastic optimization algorithms are tested for 3D applications. The deterministic approach uses the sensitivity analysis that provides the gradient of the objective function. It is obtained by the adjoint-state method and semi-analytical differentiation. The study of stochastic approaches aims at comparing genetic algorithms and evolution strategies. Numerical results show the feasibility of such approaches, i.e., achieving satisfactory solutions within a limited number of 3D simulations, fewer than fifty. For a more industrial problem, the forging of a gear, encouraging optimization results are obtained.

  8. Multidisciplinary Inverse Reliability Analysis Based on Collaborative Optimization with Combination of Linear Approximations

    Directory of Open Access Journals (Sweden)

    Xin-Jia Meng

    2015-01-01

    Full Text Available Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of most probable failure point (MPP) search of inverse reliability, and then the process of searching for the MPP of multidisciplinary inverse reliability is performed based on the framework of CLA-CO. This method improves the MPP searching process through two elements. One is treating the discipline analyses as the equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses as the replacement of the consistency equality constraint in system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline, and it also has a higher computational efficiency. Additionally, there are no difficulties in applying the proposed method to problems with nonnormal distribution variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.
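
The inner MPP search can be illustrated with the standard HL-RF iteration on a single toy limit state in standard normal space; the CLA-CO machinery that distributes this search across disciplines is omitted, and the limit state below is an assumption chosen for a closed-form check.

```python
import numpy as np

# HL-RF iteration for the most probable failure point (MPP) in u-space.
def g(u):                        # toy linear limit state
    return u[0] + 2 * u[1] - 3

def grad_g(u):
    return np.array([1.0, 2.0])

u = np.zeros(2)
for _ in range(50):
    gv, gr = g(u), grad_g(u)
    # Project onto the linearized limit-state surface through u.
    u_new = (gr @ u - gv) / (gr @ gr) * gr
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

print("MPP:", u, "| reliability index beta:", np.linalg.norm(u))
```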

  9. Multi-objective optimization and grey relational analysis on configurations of organic Rankine cycle

    International Nuclear Information System (INIS)

    Wang, Y.Z.; Zhao, J.; Wang, Y.; An, Q.S.

    2017-01-01

    Highlights: • The Pareto frontier is an effective way to make a comprehensive comparison of ORCs. • The comprehensive energy and economic performance of the basic ORC is the best. • R141b shows the best comprehensive performance in terms of energy and economics. - Abstract: Concerning the comprehensive performance of the organic Rankine cycle (ORC), comparisons and optimizations of three different configurations of ORC (basic, regenerative and extractive ORCs) are investigated in this paper. Medium-temperature geothermal water is used for comparing the influence of configurations, working fluids and operating parameters on different evaluation criteria. Different evaluation and optimization methods are adopted in the evaluation of ORCs to obtain the one with the best comprehensive performance, such as exergoeconomic analysis, bi-objective optimization and grey relational analysis. The results reveal that the basic ORC performs the best among these three ORCs in terms of comprehensive thermodynamic and economic performance when using R245fa and driven by geothermal water at 150 °C. Furthermore, R141b shows the best comprehensive performance among 14 working fluids based on the Pareto frontier solutions, without considering safety factors. Meanwhile, R141b is the best among all 14 working fluids with the optimal comprehensive performance when regarding all the evaluation criteria as equal by using grey relational analysis.
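
The grey-relational step used to treat all criteria as equal can be sketched as follows; the alternatives matrix and criterion directions are illustrative stand-ins for the working-fluid scores, not values from the paper.

```python
import numpy as np

# Rows = candidate working fluids, columns = evaluation criteria
# (e.g. thermal efficiency, net power, a cost index). Illustrative values.
X = np.array([
    [0.12, 3400.0, 8.2],
    [0.10, 3900.0, 9.0],
    [0.14, 3100.0, 7.5],
])
benefit = np.array([True, True, False])   # cost index: smaller is better

# 1. Normalize each criterion to [0, 1] in its preferred direction.
lo, hi = X.min(axis=0), X.max(axis=0)
norm = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))

# 2. Grey relational coefficients against the ideal sequence (all ones).
delta = np.abs(1.0 - norm)
rho = 0.5                                  # distinguishing coefficient
coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

# 3. Equal-weight grey relational grades, as when all criteria count equally.
grades = coef.mean(axis=1)
print("best alternative:", int(np.argmax(grades)), grades.round(3))
```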

  10. Sharing Information among Various Organizations in Relief Efforts

    National Research Council Canada - National Science Library

    Costur, Gurkan

    2005-01-01

    .... An analysis is presented of the December 2004 Indian Ocean tsunami relief effort; specifically, how different organizations such as the military, United Nations, and non-governmental organizations...

  11. Experimental analysis of electro-pneumatic optimization of hot stamping machine control systems with on-delay timer

    OpenAIRE

    Bankole I. Oladapo; Vincent A. Balogun; Adeyinka O.M. Adeoye; Ige E. Olubunmi; Samuel O. Afolabi

    2017-01-01

    The sustainability criterion in the manufacturing industries is imperative, especially in the automobile industry. Currently, efforts are being made by the industries to mitigate CO2 emissions through total vehicle weight optimization, machine utilization and resource efficiency. In light of this, it is important to study the manufacturing machines adopted in the automobile industries. One such machine is the hot stamping machine, which is used for about 35% of the manufacturing operatio...

  12. Implementation and Optimization of GPU-Based Static State Security Analysis in Power Systems

    Directory of Open Access Journals (Sweden)

    Yong Chen

    2017-01-01

    Full Text Available Static state security analysis (SSSA) is one of the most important computations to check whether a power system is in a normal and secure operating state. It is a challenge to satisfy real-time requirements with CPU-based concurrent methods due to the intensive computations. A sensitivity analysis-based method with a graphics processing unit (GPU) is proposed for power systems, which can reduce calculation time by 40% compared to execution on a 4-core CPU. The proposed method involves load flow analysis and sensitivity analysis. In load flow analysis, a multifrontal method for sparse LU factorization is explored on the GPU through dynamic frontal task scheduling between the CPU and GPU. The varying matrix operations during sensitivity analysis on the GPU are highly optimized in this study. The results of the performance evaluations show that the proposed GPU-based SSSA with optimized matrix operations can achieve a significant reduction in computation time.

  13. Analysis and optimal design of an underactuated finger mechanism for LARM hand

    Science.gov (United States)

    Yao, Shuangji; Ceccarelli, Marco; Carbone, Giuseppe; Zhan, Qiang; Lu, Zhen

    2011-09-01

    This paper aims to present general design considerations and optimality criteria for underactuated mechanisms in finger designs. Design issues related to grasping task of robotic fingers are discussed. Performance characteristics are outlined as referring to several aspects of finger mechanisms. Optimality criteria of the finger performances are formulated after careful analysis. A general design algorithm is summarized and formulated as a suitable multi-objective optimization problem. A numerical case of an underactuated robot finger design for Laboratory of Robotics and Mechatronics (LARM) hand is illustrated with the aim to show the practical feasibility of the proposed concepts and computations.

  14. ICRF array module development and optimization for high power density

    International Nuclear Information System (INIS)

    Ryan, P.M.; Swain, D.W.

    1997-02-01

    This report describes the analysis and optimization of the proposed International Thermonuclear Experimental Reactor (ITER) antenna array for the ion cyclotron range of frequencies (ICRF). The objectives of this effort were to: (1) minimize the applied radio-frequency (rf) voltages occurring in vacuum by proper layout and shaping of components, and limit the component surfaces/volumes where the rf voltage is high; (2) study the effects of magnetic insulation, as applied to the current design; (3) provide electrical characteristics of the antenna for the development and analysis of tuning, arc detection/suppression, and systems for discriminating between arcs and edge-localized modes (ELMs); (4) maintain a close interface with the mechanical design.

  15. Analysis Of Factors Affecting Demand Red Chili Pepper Capsicum Annum L In Solok And Effort Fulfillment

    Directory of Open Access Journals (Sweden)

    Zulfitriyana

    2015-08-01

    Full Text Available Research on the analysis of the factors that influence the demand for red chili (Capsicum annuum L.) in Solok and the efforts to meet it was carried out from March to April 2016. The purposes of this study were to: (1) analyze the factors affecting the demand for red chili in Solok; (2) analyze the elasticity of demand for red chili in Solok; (3) identify the efforts that can be made to meet the demand for red chili in Solok. To achieve the first and second objectives, secondary data covering 15 (fifteen) years were used, and to achieve the third objective, primary data were used. The method used is the descriptive analytical method, a method used to describe existing phenomena, whether taking place in the present or the past. The variables observed in this study were X1 (the price of red chili), X2 (the price of green chili), X3 (the onion price), X4 (the population), X5 (income) and Y (the quantity of red chili demanded), which were then analyzed by multiple linear regression, elasticity of demand and SWOT analysis. The results show that the factors influencing the demand for red chili in Solok are the price of red chili itself, the price of green chili as a substitute good, the population and income, while onion prices also affect the quantity of red chili demanded in Solok. Taken simultaneously, the variables X1 (red chili price), X2 (green chili price), X3 (onion price), X4 (population) and X5 (income) strongly influence the demand for red chili in Solok, where the F-test results show that F-computed > F-table (212.262 > 3.600) at a significance level of 0.000 < 0.010, and the most influential variable is X4 (population), with the largest beta coefficient of 1.100. Based on the analysis of the elasticity of demand, red chili is a normal good that is price-inelastic, with a price elasticity coefficient εp of −0.120. Green chili is a substitute good and shallots are complements of red chili, with cross-elasticity coefficients εpx1 and εpx2 of 0.293 and −0.635, respectively. While the
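
Elasticity estimates of this kind are commonly read directly off a log-log regression, where each slope is the corresponding elasticity; the sketch below shows the mechanics on synthetic data (the series and coefficients are invented, chosen only to echo an inelastic own-price response, not the Solok data).

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 15-year series standing in for the study's secondary data.
n = 15
price = rng.uniform(10, 40, n)                  # red chili price
income = rng.uniform(1.0, 3.0, n)               # per-capita income
demand = 50 * price**-0.12 * income**0.8 * rng.lognormal(0, 0.05, n)

# ln(Q) = b0 + b1*ln(P) + b2*ln(I): b1 is the own-price elasticity.
X = np.column_stack([np.ones(n), np.log(price), np.log(income)])
beta, *_ = np.linalg.lstsq(X, np.log(demand), rcond=None)
print(f"own-price elasticity ~ {beta[1]:.3f}")   # |b1| < 1: inelastic demand
print(f"income elasticity    ~ {beta[2]:.3f}")   # > 0: a normal good
```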

  16. Optimal Cost Avoidance Investment and Pricing Strategies for Performance-Based Post-Production Service Contracts

    Science.gov (United States)

    2011-04-30

    a BS degree in Mathematics and an MS degree in Statistics and Financial and Actuarial Mathematics from Kiev National Taras Shevchenko University...degrees from Rutgers University in Industrial Engineering (PhD and MS) and Statistics (MS) and from Universidad Nacional Autonoma de Mexico in Actuarial ...Science. His research efforts focus on developing mathematical models for the analysis, computation, and optimization of system performance with

  17. "Real-time" disintegration analysis and D-optimal experimental design for the optimization of diclofenac sodium fast-dissolving films.

    Science.gov (United States)

    El-Malah, Yasser; Nazzal, Sami

    2013-01-01

    The objective of this work was to study the dissolution and mechanical properties of fast-dissolving films prepared from a ternary mixture of pullulan, polyvinylpyrrolidone, and hypromellose. Disintegration studies were performed in real time by probe spectroscopy to detect the onset of film disintegration. Tensile strength and elastic modulus of the films were measured by texture analysis. Disintegration time of the films ranged from 21 to 105 seconds, whereas their mechanical properties ranged from approximately 2 to 49 MPa for tensile strength and 1 to 21 MPa% for Young's modulus. After generating polynomial models correlating the variables using a D-optimal mixture design, an optimal formulation with the desired responses was proposed by the statistical package. For validation, a new film formulation loaded with diclofenac sodium, based on the optimized composition, was prepared and tested for dissolution and tensile strength. Dissolution of the optimized film was found to commence almost immediately, with 50% of the drug released within one minute. Tensile strength and Young's modulus of the film were 11.21 MPa and 6.78 MPa%, respectively. Real-time spectroscopy in conjunction with statistical design was shown to be very efficient for the optimization and development of non-conventional intraoral delivery systems such as fast-dissolving films.
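    The optimization step described here, fitting polynomial models to the measured responses and letting the statistical package locate the best composition, can be approximated in a few lines. The following is a loose sketch of that idea (a generic quadratic response surface fitted to synthetic data and minimized on a grid), not a reproduction of the study's D-optimal mixture design:

```python
# Fit a quadratic response surface and locate the predicted optimum.
import numpy as np

rng = np.random.default_rng(1)
# Two independent mixture fractions x1, x2 (the third is 1 - x1 - x2).
x1 = rng.uniform(0.1, 0.8, 30)
x2 = rng.uniform(0.1, 0.8, 30)
keep = x1 + x2 < 0.9
x1, x2 = x1[keep], x2[keep]
# Synthetic "disintegration time" response to be minimized.
y = 20 + 80 * (x1 - 0.3)**2 + 60 * (x2 - 0.5)**2 + rng.normal(0, 2, x1.size)

# Quadratic model: y ~ 1 + x1 + x2 + x1*x2 + x1^2 + x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# Grid search over feasible compositions for the predicted minimum.
g1, g2 = np.meshgrid(np.linspace(0.1, 0.8, 200), np.linspace(0.1, 0.8, 200))
pred = c[0] + c[1]*g1 + c[2]*g2 + c[3]*g1*g2 + c[4]*g1**2 + c[5]*g2**2
pred[g1 + g2 >= 0.9] = np.inf  # enforce the mixture constraint
i = np.unravel_index(np.argmin(pred), pred.shape)
print(f"predicted optimum near x1={g1[i]:.2f}, x2={g2[i]:.2f}")
```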

  18. Low Carbon-Oriented Optimal Reliability Design with Interval Product Failure Analysis and Grey Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Yixiong Feng

    2017-03-01

    Full Text Available The problem of large amounts of carbon emissions causes wide concern across the world, and it has become a serious threat to the sustainable development of the manufacturing industry. The intensive research into technologies and methodologies for green product design has significant theoretical meaning and practical value in reducing the emissions of the manufacturing industry. Therefore, a low carbon-oriented product reliability optimal design model is proposed in this paper: (1) The related expert evaluation information was prepared in interval numbers; (2) An improved product failure analysis considering the uncertain carbon emissions of the subsystem was performed to obtain the subsystem weight taking the carbon emissions into consideration. The interval grey correlation analysis was conducted to obtain the subsystem weight taking the uncertain correlations inside the product into consideration. Using the above two kinds of subsystem weights and different caution indicators of the decision maker, a series of product reliability design schemes is available; (3) The interval-valued intuitionistic fuzzy sets (IVIFSs) were employed to select the optimal reliability and optimal design scheme based on three attributes, namely, low carbon, correlation and functions, and economic cost. The case study of a vertical CNC lathe proves the superiority and rationality of the proposed method.

  19. Optimization of analytical techniques to characterize antibiotics in aquatic systems

    International Nuclear Information System (INIS)

    Al Mokh, S.

    2013-01-01

    Antibiotics are considered pollutants when they are present in aquatic ecosystems, the ultimate receptacles of anthropogenic substances. These compounds are studied for their persistence in the environment and their effects on natural organisms. Numerous efforts have been made worldwide to assess the environmental quality of different water resources, both for the survival of aquatic species and for human consumption and the related health risks. Toward this goal, the optimization of analytical techniques for these compounds in aquatic systems remains a necessity. Our objective is to develop extraction and detection methods for 12 aminoglycoside molecules and colistin in sewage treatment plant and hospital waters. The lack of analytical methods for these compounds, and the scarcity of studies on their detection in water, is the reason for studying them. Solid Phase Extraction (SPE), in classic (offline) or online mode, followed by Liquid Chromatography coupled with tandem Mass Spectrometry (LC/MS/MS), is the method most commonly used for this type of analysis. The parameters were optimized and validated to ensure the best conditions for the environmental analysis. This technique was applied to real samples from wastewater treatment plants in Bordeaux and Lebanon. (author)

  20. The value of the Wechsler Intelligence Scale for Children-Fourth Edition Digit Span as an embedded measure of effort: an investigation into children with dual diagnoses.

    Science.gov (United States)

    Loughan, Ashlee R; Perna, Robert; Hertza, Jeremy

    2012-11-01

    The Test of Memory Malingering (TOMM) is a measure of test-taking effort that has traditionally been utilized with adults but has more recently demonstrated utility with children. The purpose of this study was to investigate whether the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) Digit Span, commonly used in neuropsychological evaluations, can also function as an embedded measure for detecting effort in children with dual diagnoses, a population yet to be investigated. Participants (n = 51) who completed neuropsychological evaluations including the TOMM, WISC-IV, Wisconsin Card Sorting Test, Children's Memory Scale, and Delis-Kaplan Executive Function System were divided into two groups, Optimal Effort and Suboptimal Effort, based on their TOMM Trial 2 scores. The findings suggest that a Digit Span scaled score of ≤4 provides an optimal cutoff, yielding a specificity of 91% and a sensitivity of 43%. This study supports previous research that the WISC-IV Digit Span has good utility in determining optimal effort, even in children with dual diagnoses or comorbidities.

  1. Replica analysis for the duality of the portfolio optimization problem.

    Science.gov (United States)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimization problems, we analyze a quenched disordered system involving both of them using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
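    Stripped of the replica machinery, the primal problem stated here is quadratic programming with two equality constraints, and its KKT conditions reduce to a single linear solve. A minimal numerical sketch with synthetic data and a unit-budget normalization (the paper's scaling conventions differ):

```python
# Minimum-risk portfolio under budget and expected-return constraints.
import numpy as np

rng = np.random.default_rng(2)
N = 5
G = rng.normal(size=(N, N))
Sigma = G @ G.T + N * np.eye(N)   # positive-definite covariance
mu = rng.uniform(0.02, 0.10, N)   # expected returns
R = 0.06                          # required portfolio return

# KKT system for min w'Σw s.t. 1'w = 1 and μ'w = R:
#   [2Σ  1  μ] [w ]   [0]
#   [1'  0  0] [λ1] = [1]
#   [μ'  0  0] [λ2]   [R]
ones = np.ones(N)
K = np.block([[2 * Sigma, ones[:, None], mu[:, None]],
              [ones[None, :], np.zeros((1, 2))],
              [mu[None, :], np.zeros((1, 2))]])
rhs = np.concatenate([np.zeros(N), [1.0, R]])
w = np.linalg.solve(K, rhs)[:N]
print("weights:", np.round(w, 3), " risk:", round(w @ Sigma @ w, 4))
```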

  3. Application of multi response optimization with grey relational analysis and fuzzy logic method

    Science.gov (United States)

    Winarni, Sri; Wahyu Indratno, Sapto

    2018-01-01

    Multi-response optimization is an optimization process that considers multiple responses simultaneously. The purpose of this research is to find the optimum point in a multi-response optimization process using grey relational analysis and the fuzzy logic method. The optimum point is determined from the Fuzzy-GRG (Grey Relational Grade) variable, which is a conversion of the Signal-to-Noise Ratios of the responses involved. The case study used in this research is the optimization of electrical process parameters in electrical discharge machining. It was found that the treatment combination yielding optimum MRR and SR was a gap voltage of 70 V, a peak current of 9 A, and a duty factor of 0.8.
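    As a concrete illustration of the grey relational step, the sketch below computes grey relational coefficients and the grey relational grade (GRG) for two responses, material removal rate (larger-the-better) and surface roughness (smaller-the-better). It is a common textbook variant that works on normalized responses directly, rather than on the S/N ratios and fuzzy conversion used in the record; all data are synthetic.

```python
# Grey relational analysis for a two-response optimization.
import numpy as np

mrr = np.array([12.0, 15.5, 9.8, 14.2])   # material removal rate
sr = np.array([2.1, 2.9, 1.8, 2.5])       # surface roughness

def normalize(x, larger_is_better):
    span = x.max() - x.min()
    return (x - x.min()) / span if larger_is_better else (x.max() - x) / span

def grey_coeff(z, zeta=0.5):
    delta = np.abs(1.0 - z)  # deviation from the ideal (normalized = 1)
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

Z = np.column_stack([normalize(mrr, True), normalize(sr, False)])
grg = np.mean([grey_coeff(Z[:, j]) for j in range(Z.shape[1])], axis=0)
print("grey relational grade per trial:", np.round(grg, 3))
print("best treatment combination: trial", int(np.argmax(grg)) + 1)
```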

  4. Development of a Multi-Event Trajectory Optimization Tool for Noise-Optimized Approach Route Design

    NARCIS (Netherlands)

    Braakenburg, M.L.; Hartjes, S.; Visser, H.G.; Hebly, S.J.

    2011-01-01

    This paper presents preliminary results from an ongoing research effort towards the development of a multi-event trajectory optimization methodology that makes it possible to synthesize RNAV approach routes that minimize a cumulative measure of noise, taking into account the total noise effect aggregated for

  5. Radiograph and passive data analysis using mixed variable optimization

    Science.gov (United States)

    Temple, Brian A.; Armstrong, Jerawan C.; Buescher, Kevin L.; Favorite, Jeffrey A.

    2015-06-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for performing radiography analysis. For example, certain embodiments perform radiographic analysis using mixed variable computation techniques. One exemplary system comprises a radiation source, a two-dimensional detector for detecting radiation transmitted through an object between the radiation source and detector, and a computer. In this embodiment, the computer is configured to input the radiographic image data from the two-dimensional detector and to determine one or more materials that form the object by using an iterative analysis technique that selects the one or more materials from hierarchically arranged solution spaces of discrete material possibilities and selects the layer interfaces from the optimization of the continuous interface data.

  6. Experimental design for optimizing MALDI-TOF-MS analysis of palladium complexes

    Directory of Open Access Journals (Sweden)

    Rakić-Kostić Tijana M.

    2017-01-01

    Full Text Available This paper presents the optimization of matrix-assisted laser desorption/ionization (MALDI) time-of-flight (TOF) mass spectrometer (MS) instrumental parameters for the analysis of the chloro(2,2′:6′,2″-terpyridine)palladium(II) chloride dihydrate complex, applying design of experiments (DoE) methodology. This complex is of interest for potential use in cancer therapy. DoE methodology has proved successful in the optimization of many complex analytical problems; however, it has rarely been used for MALDI-TOF-MS optimization up to now. Theoretical mathematical relationships are established which explain the influence of the important experimental factors (laser energy, grid voltage, and number of laser shots) on the selected responses (the signal-to-noise ratio, S/N, and the resolution, R, of the leading peak). The optimal instrumental settings providing maximal S/N and R are identified and experimentally verified. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 172052 and Grant no. 172011]

  7. Roofline Analysis in the Intel® Advisor to Deliver Optimized Performance for applications on Intel® Xeon Phi™ Processor

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Tuomas S.; Lobet, Mathieu; Deslippe, Jack; Matveev, Zakhar

    2017-05-23

    In this session we show, in two case studies, how the roofline feature of Intel Advisor has been utilized to optimize the performance of kernels of the XGC1 and PICSAR codes in preparation for Intel Knights Landing architecture. The impact of the implemented optimizations and the benefits of using the automatic roofline feature of Intel Advisor to study performance of large applications will be presented. This demonstrates an effective optimization strategy that has enabled these science applications to achieve up to 4.6 times speed-up and prepare for future exascale architectures. # Goal/Relevance of Session The roofline model [1,2] is a powerful tool for analyzing the performance of applications with respect to the theoretical peak achievable on a given computer architecture. It allows one to graphically represent the performance of an application in terms of operational intensity, i.e. the ratio of flops performed and bytes moved from memory in order to guide optimization efforts. Given the scale and complexity of modern science applications, it can often be a tedious task for the user to perform the analysis on the level of functions or loops to identify where performance gains can be made. With new Intel tools, it is now possible to automate this task, as well as base the estimates of peak performance on measurements rather than vendor specifications. The goal of this session is to demonstrate how the roofline feature of Intel Advisor can be used to balance memory vs. computation related optimization efforts and effectively identify performance bottlenecks. A series of typical optimization techniques: cache blocking, structure refactoring, data alignment, and vectorization illustrated by the kernel cases will be addressed. # Description of the codes ## XGC1 The XGC1 code [3] is a magnetic fusion Particle-In-Cell code that uses an unstructured mesh for its Poisson solver that allows it to accurately resolve the edge plasma of a magnetic fusion device. After
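    The roofline bound itself is a one-line formula: attainable performance is min(peak FLOP/s, operational intensity × memory bandwidth). The sketch below evaluates it for a few operational intensities; the machine numbers are placeholders, not measured Knights Landing figures.

```python
# Roofline model: which roof limits a kernel at a given intensity?
PEAK_GFLOPS = 2000.0  # placeholder compute roof (GFLOP/s)
BW_GBS = 400.0        # placeholder memory-bandwidth roof (GB/s)

def roofline_gflops(oi):
    """Attainable GFLOP/s at operational intensity oi (FLOP/byte)."""
    return min(PEAK_GFLOPS, oi * BW_GBS)

ridge = PEAK_GFLOPS / BW_GBS  # intensity where the two roofs meet
for oi in (0.5, 2.0, 5.0, 16.0):
    kind = "memory" if oi < ridge else "compute"
    print(f"OI={oi:5.1f} FLOP/byte -> {roofline_gflops(oi):7.1f} GFLOP/s "
          f"({kind}-bound)")
# Kernels left of the ridge point benefit most from memory optimizations
# (e.g., cache blocking); kernels to the right from vectorization.
```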

  8. Analysis of recruitment and industrial human resources management for optimal productivity in the presence of the HIV/AIDS epidemic.

    Science.gov (United States)

    Okosun, Kazeem O; Makinde, Oluwole D; Takaidza, Isaac

    2013-01-01

    The aim of this paper is to analyze the recruitment effects of susceptible and infected individuals in order to assess the productivity of an organizational labor force in the presence of HIV/AIDS, with preventive and HAART treatment measures for enhancing workforce output. We consider constant controls as well as time-dependent controls. In the constant control case, we calculate the basic reproduction number and investigate the existence and stability of equilibria. The model is found to exhibit backward and Hopf bifurcations, implying that for the disease to be eradicated, the basic reproduction number must be brought below a critical value that is itself less than one. We also investigate, by calculating sensitivity indices, the sensitivity of the basic reproduction number to the model's parameters. In the time-dependent control case, we use Pontryagin's maximum principle to derive necessary conditions for the optimal control of the disease. Finally, numerical simulations are performed to illustrate the analytical results. The cost-effectiveness analysis shows that optimal effort on recruitment (HIV screening of applicants, etc.) is not the most cost-effective strategy for enhancing productivity in the organizational labor force. Hence, to enhance employees' productivity, effective education programs and strict adherence to preventive measures should be promoted.

  9. Testing effort dependent software reliability model for imperfect debugging process considering both detection and correction

    International Nuclear Information System (INIS)

    Peng, R.; Li, Y.F.; Zhang, W.J.; Hu, Q.P.

    2014-01-01

    This paper studies the fault detection process (FDP) and fault correction process (FCP) with the incorporation of testing effort function and imperfect debugging. In order to ensure high reliability, it is essential for software to undergo a testing phase, during which faults can be detected and corrected by debuggers. The testing resource allocation during this phase, which is usually depicted by the testing effort function, considerably influences not only the fault detection rate but also the time to correct a detected fault. In addition, testing is usually far from perfect such that new faults may be introduced. In this paper, we first show how to incorporate testing effort function and fault introduction into FDP and then develop FCP as delayed FDP with a correction effort. Various specific paired FDP and FCP models are obtained based on different assumptions of fault introduction and correction effort. An illustrative example is presented. The optimal release policy under different criteria is also discussed
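    A representative form of such paired detection/correction models, written here as an assumed textbook formulation consistent with the abstract rather than the paper's exact equations, is:

```latex
\begin{align}
  m_d(t) &= a\left(1 - e^{-b\,W(t)}\right)
    && \text{expected faults detected by time } t, \\
  m_c(t) &= m_d(t - \Delta t)
    && \text{expected faults corrected (detection delayed by correction effort)},
\end{align}
% where a is the fault content (time-varying under imperfect debugging
% with fault introduction), b the detection rate per unit testing effort,
% and W(t) the cumulative testing effort function.
```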

  10. Introduction to optimization analysis in hydrosystem engineering

    CERN Document Server

    Goodarzi, Ehsan; Hosseinipour, Edward Zia

    2014-01-01

    This book presents the basics of linear and nonlinear optimization analysis for both single and multi-objective problems in hydrosystem engineering.  The book includes several examples with various levels of complexity in different fields of water resources engineering. All examples are solved step by step to assist the reader and to make it easier to understand the concepts. In addition, the latest tools and methods are presented to help students, researchers, engineers and water managers to properly conceptualize and formulate resource allocation problems, and to deal with the complexity of constraints in water demand and available supplies in an appropriate way.

  11. Scale up, optimization and stability analysis of Curcumin C3 complex-loaded nanoparticles for cancer therapy

    Science.gov (United States)

    2012-01-01

    Background Nanoparticle-based delivery of anticancer drugs has been widely investigated. However, a very important process for research and development in any pharmaceutical industry is scaling up nanoparticle formulation techniques so as to produce large batches for preclinical and clinical trials. This process is not only critical but also difficult, as it involves various formulation parameters to be modulated all in the same process. Methods In the present study, we formulated curcumin-loaded poly(lactic acid-co-glycolic acid) nanoparticles (PLGA-CURC). This improved the bioavailability of curcumin, a potent natural anticancer drug, making it suitable for cancer therapy. Post formulation, we optimized our process by Response Surface Methodology (RSM) using a Central Composite Design (CCD) and scaled up the formulation process in four stages, with the final scale-up process yielding 5 g of curcumin-loaded nanoparticles within the laboratory setup. The nanoparticles formed after the scale-up process were characterized for particle size, drug loading and encapsulation efficiency, surface morphology, in vitro release kinetics, and pharmacokinetics. Stability analysis and gamma sterilization were also carried out. Results The results revealed that the process scale-up is being mastered up to the 5 g level. The mean nanoparticle size of the scaled-up batch was found to be 158.5 ± 9.8 nm and the drug loading was determined to be 10.32 ± 1.4%. The in vitro release study illustrated a slow sustained release corresponding to 75% drug over a period of 10 days. The pharmacokinetic profile of PLGA-CURC in rats following i.v. administration showed a two-compartment model with the area under the curve (AUC0-∞) being 6.139 mg/L h. Gamma sterilization showed no significant change in the particle size or drug loading of the nanoparticles. Stability analysis revealed long-term physicochemical stability of the PLGA-CURC formulation. Conclusions A successful effort towards

  12. Optimization of deformation monitoring networks using finite element strain analysis

    Science.gov (United States)

    Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.

    2018-04-01

    An optimal design of a geodetic network can fulfill the requested precision and reliability of the network and decrease the expense of its execution by removing unnecessary observations. The role of optimal design is highlighted for deformation monitoring networks because these networks are measured repeatedly. The core design problem is how to define the precision and reliability criteria. This paper proposes a solution in which the precision criterion is defined based on the precision of the deformation parameters, i.e., the precision of strain and differential rotations. A strain analysis can be performed to obtain information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of the deformation parameters in each element, the precision criterion for displacement detection at each network point is then determined. The developed criterion is used to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddles the Tornquist zone, one of the most active faults in southern Sweden. The numerical results show that 17 out of the 21 possible GPS baseline observations are sufficient to detect a minimum displacement of 3 mm at each network point.
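    The strain-analysis step can be sketched compactly: triangulate the monitoring points, then recover a constant strain tensor in each element from the nodal displacements via the linear displacement gradient. The 2D toy below (synthetic coordinates and displacements, not the Skåne network; the study works with 3D elements) shows the idea:

```python
# Per-element strain from nodal displacements on a Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.2, 1.1]])   # metres
disp = np.array([[0.000, 0.000], [0.002, 0.000],
                 [0.000, 0.001], [0.003, 0.002]])                  # metres

tri = Delaunay(pts)
for simplex in tri.simplices:
    p, u = pts[simplex], disp[simplex]
    # Displacement gradient G solves dU = G dP for the two edge vectors.
    dP = np.column_stack([p[1] - p[0], p[2] - p[0]])
    dU = np.column_stack([u[1] - u[0], u[2] - u[0]])
    G = dU @ np.linalg.inv(dP)
    strain = 0.5 * (G + G.T)     # small-strain tensor
    rotation = 0.5 * (G - G.T)   # differential rotation
    print("element", simplex, "strain diag:", np.round(np.diag(strain), 6),
          "rotation:", round(rotation[0, 1], 6))
```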

  13. A method to generate fully multi-scale optimal interpolation by combining efficient single process analyses, illustrated by a DINEOF analysis spiced with a local optimal interpolation

    Directory of Open Access Journals (Sweden)

    J.-M. Beckers

    2014-10-01

    Full Text Available We present a method in which the optimal interpolation of multi-scale processes can be expanded into a succession of simpler interpolations. First, we prove how the optimal analysis of a superposition of two processes can be obtained by different mathematical formulations involving iterations and analysis focusing on a single process. From the different mathematically equivalent formulations, we then select the most efficient ones by analyzing the behavior of the different possibilities in a simple and well-controlled test case. The clear guidelines deduced from this experiment are then applied to a real situation in which we combine large-scale analysis of hourly Spinning Enhanced Visible and Infrared Imager (SEVIRI) satellite images using data interpolating empirical orthogonal functions (DINEOF) with a local optimal interpolation using a Gaussian covariance. It is shown that the optimal combination indeed provides the best reconstruction and can therefore be exploited to extract the maximum amount of useful information from the original data.

  14. Development and Application of a Tool for Optimizing Composite Matrix Viscoplastic Material Parameters

    Science.gov (United States)

    Murthy, Pappu L. N.; Naghipour Ghezeljeh, Paria; Bednarcyk, Brett A.

    2018-01-01

    This document describes a recently developed analysis tool that enhances the resident capabilities of the Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) and its application. MAC/GMC is a composite material and laminate analysis software package developed at NASA Glenn Research Center. The primary focus of the current effort is to provide a graphical user interface (GUI) capability that helps users optimize highly nonlinear viscoplastic constitutive law parameters by fitting experimentally observed/measured stress-strain responses under various thermo-mechanical conditions for braided composites. The tool has been developed utilizing the MATrix LABoratory (MATLAB) (The MathWorks, Inc., Natick, MA) programming language. The illustrative examples shown are for a specific braided composite system wherein the matrix viscoplastic behavior is represented by a constitutive law described by seven parameters. The tool is general enough to fit any number of experimentally observed stress-strain responses of the material. The number of parameters to be optimized, as well as the importance given to each stress-strain response, are user choices. Three different optimization algorithms are included: (1) optimization based on the gradient method, (2) genetic algorithm (GA) based optimization, and (3) particle swarm optimization (PSO). The user can mix and match the three algorithms; for example, one can start optimization with either 2 or 3 and then use the optimized solution to fine-tune with approach 1. The secondary focus of this paper is to demonstrate the application of this tool to optimize/calibrate parameters for a nonlinear viscoplastic matrix to predict stress-strain curves (at the constituent and composite levels) at different rates, temperatures, and/or loading conditions utilizing the Generalized Method of Cells. After preliminary validation of the tool through comparison with experimental results, a detailed virtual parametric study is
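    The core of such a calibration loop is a least-squares fit of constitutive parameters to measured stress-strain curves. The sketch below uses a two-parameter power-law stand-in for the seven-parameter viscoplastic law and synthetic "measurements"; the tool described above wraps exactly this kind of loop with a GUI and gradient/GA/PSO options.

```python
# Calibrate constitutive parameters by minimizing a stress-strain misfit.
import numpy as np
from scipy.optimize import minimize

strain = np.linspace(1e-4, 0.02, 50)
rng = np.random.default_rng(3)
true_params = (70e3, 0.65)  # hypothetical modulus-like and exponent terms
measured = true_params[0] * strain**true_params[1] + rng.normal(0, 5.0, 50)

def model(params, eps):
    k, n = params
    return k * eps**n

def misfit(params):
    # Weighted least squares; per-curve weights would let a user
    # emphasize particular experiments, as the tool allows.
    return np.sum((model(params, strain) - measured)**2)

res = minimize(misfit, x0=np.array([50e3, 0.5]), method="Nelder-Mead")
print("fitted parameters:", np.round(res.x, 3))
```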

  15. The system of dose limitation and its optimization requirement: Present status and future outlook

    International Nuclear Information System (INIS)

    Gonzalez, A.J.

    1984-01-01

    Optimization of radiation protection is a relevant and controversial requirement of the system of dose limitation currently recommended by the International Commission on Radiological Protection (ICRP). Since the first European Scientific Seminar on Experience and Methods on Optimization - held by the Commission of the European Communities in 1979 - and several related seminars and symposia organized by the IAEA, many international efforts have been made to promote the practical implementation of the requirement. Recently, the ICRP published a report of ICRP Committee 4 on cost-benefit analysis in the optimization of radiation protection (ICRP Publication 37); it provides guidance on the principles and methods of application of the requirement. Ultimately, this seminar demonstrates the continuous interest of the international community in the proper use of optimization. This paper is intended to contribute to the seminar's objective, discussing the current issues concerning the implementation of the requirement and exploring perspectives for future applications of the principles involved in optimization

  16. How long is enough to detect terrestrial animals? Estimating the minimum trapping effort on camera traps

    Directory of Open Access Journals (Sweden)

    Xingfeng Si

    2014-05-01

    Full Text Available Camera trapping is an important wildlife inventory tool for estimating species diversity at a site. Knowing what minimum trapping effort is needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and the survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24 ha) study plot in Gutianshan National Nature Reserve, eastern China, to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites versus running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis of the effect of adding camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period.

  17. Advanced methods for the analysis, design, and optimization of SMA-based aerostructures

    International Nuclear Information System (INIS)

    Hartl, D J; Lagoudas, D C; Calkins, F T

    2011-01-01

    Engineers continue to apply shape memory alloys to aerospace actuation applications due to their high energy density, robust solid-state actuation, and silent and shock-free operation. Past design and development of such actuators relied on experimental trial and error and empirically derived graphical methods. Over the last two decades, however, it has been repeatedly demonstrated that existing SMA constitutive models can capture stabilized SMA transformation behaviors with sufficient accuracy. This work builds upon past successes and suggests a general framework by which predictive tools can be used to assess the responses of many possible design configurations in an automated fashion. By applying methods of design optimization, it is shown that the integrated implementation of appropriate analysis tools can guide engineers and designers to the best design configurations. A general design optimization framework is proposed for the consideration of any SMA component or assembly of such components that applies when the set of design variables includes many members. This is accomplished by relying on commercially available software and utilizing tools already well established in the design optimization community. Such tools are combined with finite element analysis (FEA) packages that consider a multitude of structural effects. The foundation of this work is a three-dimensional thermomechanical constitutive model for SMAs applicable for arbitrarily shaped bodies. A reduced-order implementation also allows computationally efficient analysis of structural components such as wires, rods, beams and shells. The use of multiple optimization schemes, the consideration of assembled components, and the accuracy of the implemented constitutive model in full and reduced-order forms are all demonstrated

  18. Optimal patient education for cancer pain: a systematic review and theory-based meta-analysis.

    Science.gov (United States)

    Marie, N; Luckett, T; Davidson, P M; Lovell, M; Lal, S

    2013-12-01

    Previous systematic reviews have found patient education to be moderately efficacious in decreasing the intensity of cancer pain, but variation in results warrants analysis aimed at identifying which strategies are optimal. A systematic review and meta-analysis was undertaken using a theory-based approach to classifying and comparing educational interventions for cancer pain. The reference lists of previous reviews and MEDLINE, PsycINFO, and CENTRAL were searched in May 2012. Studies had to be published in a peer-reviewed English-language journal and compare the effect on cancer pain intensity of education with usual care. Meta-analyses used standardized effect sizes (ES) and a random-effects model. Subgroup analyses compared intervention components categorized using the Michie et al. (Implement Sci 6:42, 2011) capability, opportunity, and motivation behavior (COM-B) model. Fifteen randomized controlled trials met the criteria. As expected, meta-analysis identified a small-to-moderate ES favoring education versus usual care (ES, -0.27 [-0.47, -0.07]; P = 0.007) with substantial heterogeneity (I² = 71%). Subgroup analyses based on the taxonomy found that interventions using "enablement" were efficacious (ES, -0.35 [-0.63, -0.08]; P = 0.01), whereas those lacking this component were not (ES, -0.18 [-0.46, 0.10]; P = 0.20). However, the subgroup effect was nonsignificant (P = 0.39), and heterogeneity was not reduced. Factoring in the variable of individualized versus non-individualized influenced neither efficacy nor heterogeneity. The current meta-analysis follows a trend in using theory to understand the mechanisms of complex interventions. We suggest that future efforts focus on interventions that target patient self-efficacy. Authors are encouraged to report comprehensive details of interventions and methods to inform synthesis, replication, and refinement.
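    The pooling machinery referenced here (standardized effect sizes under a random-effects model) is the standard DerSimonian-Laird estimator. A self-contained sketch with made-up per-study effects and variances, not the fifteen included trials:

```python
# DerSimonian-Laird random-effects meta-analysis.
import numpy as np

y = np.array([-0.40, -0.10, -0.35, -0.22, -0.05])  # study effect sizes
v = np.array([0.04, 0.02, 0.05, 0.03, 0.06])       # within-study variances

w = 1.0 / v                                # fixed-effect weights
ybar = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - ybar)**2)              # Cochran's heterogeneity statistic
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)    # between-study variance

w_star = 1.0 / (v + tau2)                  # random-effects weights
pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
I2 = 100.0 * max(0.0, (Q - (len(y) - 1)) / Q)
print(f"pooled ES = {pooled:.2f} (95% CI {pooled-1.96*se:.2f} to "
      f"{pooled+1.96*se:.2f}), I^2 = {I2:.0f}%")
```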

  19. Robotic disaster recovery efforts with ad-hoc deployable cloud computing

    Science.gov (United States)

    Straub, Jeremy; Marsh, Ronald; Mohammad, Atif F.

    2013-06-01

    Autonomous operation of search and rescue (SaR) robots is an ill-posed problem, complicated further by the dynamic disaster recovery environment. In a typical SaR response scenario, responder robots will require different levels of processing capability during various parts of the response effort and will need to utilize multiple algorithms. Placing all of these capabilities onboard the robot precludes algorithm-specific performance optimization and results in mediocre performance. An architecture for an ad hoc, deployable cloud environment suitable for use in a disaster response scenario is presented. Under this model, each service provider is optimized for its task and maintains a database of situation-relevant information. This service-oriented architecture (SOA 3.0) compliant framework also serves as an example of the efficient use of SOA 3.0 in an actual cloud application.

  20. Limitations on the application of optimization methods in the design of radiation protection in large installations

    International Nuclear Information System (INIS)

    Hock, R.; Brauns, J.; Steinicke, P.

    1986-01-01

    In a society where prices of goods are not regulated, optimization is best achieved by competition and not by the decisions of an authority. In order to improve its competitive position, a company may attach increasing importance to cost-benefit analyses both internally and in its discussions with customers. Some limitations and problems of this methodology are analysed in the paper. It is concluded that an increase in design effort (analysis of more options) beyond a planned level, in order to reduce radiation exposure, can only be justified in rational terms if exposure limits are involved. An increase in design effort could also be justified if solutions with lower equipment and operating costs but higher radiation exposure were acceptable. Because of the high competitive value of radiation protection, however, it is difficult to gain acceptance for such optimization. The cost of the investigation itself requires optimal procedures for the optimization process and therefore limitation of the number of options to be analysed. This is demonstrated for the example of a shielding wall. Another problem is the probabilistic nature of many of the parameters involved. In most cases this probability distribution is only inaccurately known. Deterministic 'design basis assumptions' therefore have to be made. The choice of these assumptions may greatly influence the result of the optimization, as demonstrated in an example taken from practice. (author)
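    The shielding-wall example admits a compact worked version. With exponential attenuation, the total detriment C(x) = c_wall·x + α·D₀·e^(−μx) is minimized at x* = ln(αμD₀/c_wall)/μ whenever the argument of the logarithm exceeds one; a toy calculation (all numbers invented for illustration) is:

```python
# Closed-form cost-benefit optimum for a shielding-wall thickness.
import math

c_wall = 2000.0   # wall cost per metre of thickness (currency/m)
alpha = 50000.0   # monetary value assigned to unit collective dose
dose0 = 5.0       # collective dose with no shield (person-Sv)
mu = 20.0         # effective attenuation coefficient (1/m)

def cost(x):
    return c_wall * x + alpha * dose0 * math.exp(-mu * x)

# dC/dx = c_wall - alpha*mu*dose0*exp(-mu*x) = 0 at the optimum:
x_opt = math.log(alpha * mu * dose0 / c_wall) / mu
print(f"optimal thickness = {x_opt:.2f} m, cost = {cost(x_opt):.0f}")
print(f"0.1 m thinner/thicker: {cost(x_opt - 0.1):.0f} / {cost(x_opt + 0.1):.0f}")
```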

  1. Analysis and topology optimization design of high-speed driving spindle

    Science.gov (United States)

    Wang, Zhilin; Yang, Hai

    2018-04-01

    A three-dimensional model of a high-speed driving spindle is established using SOLIDWORKS and imported into ABAQUS, where a finite element analysis model is built using spring elements to simulate the bearing boundary conditions. A static analysis of the spindle yields stress, strain, and displacement contour plots, and on the basis of these results a topology optimization of the spindle is performed, completing the lightweight design of the high-speed driving spindle. The design scheme provides guidance for the design of shaft parts with similar structures.

  2. A review of inexact optimization modeling and its application to integrated water resources management

    Science.gov (United States)

    Wang, Ran; Li, Yin; Tan, Qian

    2015-03-01

    Water is crucial in supporting people's daily life and the continual quest for socio-economic development. It is also a fundamental resource for ecosystems. Due to the associated complexities and uncertainties, as well as intensive competition over limited water resources between human beings and ecosystems, decision makers are facing increased pressure to respond effectively to various water-related issues and conflicts from an integrated point of view. This quandary requires a focused effort to resolve a wide range of issues related to water resources, as well as the associated economic and environmental implications. Effective systems analysis approaches under uncertainty that successfully address interactions, complexities, uncertainties, and changing conditions associated with water resources, human activities, and ecological conditions are desired, which requires a systematic investigation of the previous studies in relevant areas. Systems analysis and optimization modeling for integrated water resources management under uncertainty is thus comprehensively reviewed in this paper. A number of related methodologies and applications related to stochastic, fuzzy, and interval mathematical optimization modeling are examined. Then, their applications to integrated water resources management are presented. Perspectives of effective management schemes are investigated, demonstrating many demanding areas for enhanced research efforts, which include issues of data availability and reliability, concerns over uncertainty, necessity of post-modeling analysis, and the usefulness of the development of simulation techniques.

  3. Layout Optimization of Structures with Finite-size Features using Multiresolution Analysis

    DEFF Research Database (Denmark)

    Chellappa, S.; Diaz, A. R.; Bendsøe, Martin P.

    2004-01-01

    A scheme for layout optimization in structures with multiple finite-sized heterogeneities is presented. Multiresolution analysis is used to compute reduced operators (stiffness matrices) representing the elastic behavior of material distributions with heterogeneities of sizes that are comparable...

  4. Optimal Dynamic Advertising Strategy Under Age-Specific Market Segmentation

    Science.gov (United States)

    Krastev, Vladimir

    2011-12-01

    We consider the model proposed by Faggian and Grosset for determining the advertising effort and long-run goodwill of a company under age segmentation of consumers. Reducing this model to optimal control subproblems, we find the optimal advertising strategy and goodwill.

  5. A Clustering Based Approach for Observability and Controllability Analysis for Optimal Placement of PMU

    Science.gov (United States)

    Murthy, Ch; MIEEE; Mohanta, D. K.; SMIEE; Meher, Mahendra

    2017-08-01

    Continuous monitoring and control of the power system is essential for its healthy operation. This can be achieved by making the system observable as well as controllable. Many efforts have been made by researchers to make the system observable by placing Phasor Measurement Units (PMUs) at optimal locations, but so far the idea of controllability with PMUs has not been considered. This paper shows how to check whether the system is controllable and, if it is not, how to make it controllable using a clustering approach. The IEEE 14-bus system is used to illustrate the concept of controllability.
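    For the observability half of the problem, PMU placement is commonly cast as a set-cover problem: a PMU at a bus observes that bus and its neighbors. The greedy sketch below uses an edge list transcribed from the standard IEEE 14-bus topology (treat the list, and the greedy heuristic itself, as illustrative; integer programming is needed for a provably minimal placement):

```python
# Greedy set-cover heuristic for PMU placement on the IEEE 14-bus system.
edges = [(1, 2), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (4, 5), (4, 7),
         (4, 9), (5, 6), (6, 11), (6, 12), (6, 13), (7, 8), (7, 9),
         (9, 10), (9, 14), (10, 11), (12, 13), (13, 14)]
buses = set(range(1, 15))
cover = {b: {b} for b in buses}        # a PMU at b observes b ...
for i, j in edges:                     # ... and every adjacent bus
    cover[i].add(j)
    cover[j].add(i)

placed, observed = [], set()
while observed != buses:
    # Add the PMU that observes the most still-unobserved buses.
    best = max(buses, key=lambda b: len(cover[b] - observed))
    placed.append(best)
    observed |= cover[best]
print("PMUs at buses:", sorted(placed))
```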

  6. Comparison of cardiovascular response to combined static-dynamic effort, postprandial dynamic effort and dynamic effort alone in patients with chronic ischemic heart disease

    International Nuclear Information System (INIS)

    Hung, J.; McKillip, J.; Savin, W.; Magder, S.; Kraus, R.; Houston, N.; Goris, M.; Haskell, W.; DeBusk, R.

    1982-01-01

    The cardiovascular responses to combined static-dynamic effort, postprandial dynamic effort and dynamic effort alone were evaluated by upright bicycle ergometry during equilibrium-gated blood pool scintigraphy in 24 men, mean age 59 +/- 8 years, with chronic ischemic heart disease. Combined static-dynamic effort and the postprandial state elicited a peak cardiovascular response similar to that of dynamic effort alone. Heart rate, intraarterial systolic and diastolic pressures, rate-pressure product and ejection fraction were similar for the three test conditions at the onset of ischemia and at peak effort. The prevalence and extent of exercise-induced ischemic left ventricular dysfunction, ST-segment depression, angina pectoris and ventricular ectopic activity were also similar during the three test conditions. Direct and indirect measurements of systolic and diastolic blood pressure were highly correlated. The onset of ischemic ST-segment depression and angina pectoris correlated as strongly with heart rate alone as with the rate-pressure product during all three test conditions. The cardiovascular response to combined static-dynamic effort and to postprandial dynamic effort becomes more similar to that of dynamic effort alone as dynamic effort reaches a symptom limit. If significant ischemic and arrhythmic abnormalities are absent during symptom-limited dynamic exercise testing, they are unlikely to appear during combined static-dynamic or postprandial dynamic effort

  7. Optimal Spatial Harvesting Strategy and Symmetry-Breaking

    International Nuclear Information System (INIS)

    Kurata, Kazuhiro; Shi Junping

    2008-01-01

    A reaction-diffusion model with logistic growth and constant effort harvesting is considered. By minimizing an intrinsic biological energy function, we obtain an optimal spatial harvesting strategy which will benefit the population the most. The symmetry properties of the optimal strategy are also discussed, and related symmetry preserving and symmetry breaking phenomena are shown with several typical examples of habitats
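    A standard form for this class of model, assumed here since the abstract does not spell out the equations, is logistic growth with diffusion and a spatially varying constant-effort harvest on a habitat Ω:

```latex
\begin{equation}
  u_t = \Delta u + u\,(1 - u) - E(x)\,u \quad \text{in } \Omega,
  \qquad \frac{\partial u}{\partial n} = 0 \ \text{on } \partial\Omega,
\end{equation}
% The optimal strategy chooses the harvesting effort E(x) \ge 0, subject
% to a budget on total effort, so as to minimize an intrinsic biological
% energy functional of the steady-state population u.
```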

  8. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    Science.gov (United States)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.

  9. Adaptive extremal optimization by detrended fluctuation analysis

    International Nuclear Information System (INIS)

    Hamacher, K.

    2007-01-01

    Global optimization is one of the key challenges in computational physics as several problems, e.g. protein structure prediction, the low-energy landscape of atomic clusters, detection of community structures in networks, or model-parameter fitting can be formulated as global optimization problems. Extremal optimization (EO) has become in recent years one particular, successful approach to the global optimization problem. As with almost all other global optimization approaches, EO is driven by an internal dynamics that depends crucially on one or more parameters. Recently, the existence of an optimal scheme for this internal parameter of EO was proven, so as to maximize the performance of the algorithm. However, this proof was not constructive, that is, one cannot use it to deduce the optimal parameter itself a priori. In this study we analyze the dynamics of EO for a test problem (spin glasses). Based on the results we propose an online measure of the performance of EO and a way to use this insight to reformulate the EO algorithm in order to construct optimal values of the internal parameter online without any input by the user. This approach will ultimately allow us to make EO parameter free and thus its application in general global optimization problems much more efficient
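    A concrete feel for the EO dynamics analyzed here can be had from a fixed-τ extremal optimization run on a small Ising spin glass: rank the spins by local fitness and flip a spin chosen with probability proportional to rank^(−τ). The sketch below holds τ constant; the record's contribution is precisely an online, detrended-fluctuation-based way to set such a parameter, which is not reproduced here.

```python
# Fixed-tau extremal optimization on a random Ising spin glass.
import numpy as np

rng = np.random.default_rng(4)
n = 64
J = rng.normal(size=(n, n))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)
s = rng.choice([-1, 1], size=n)

def energy(spins):
    return -0.5 * spins @ J @ spins

tau = 1.4
p = np.arange(1, n + 1, dtype=float) ** (-tau)
p /= p.sum()                               # rank-selection distribution

best_e, best_s = energy(s), s.copy()
for _ in range(20000):
    fitness = s * (J @ s)                  # low value = frustrated spin
    order = np.argsort(fitness)            # worst fitness ranked first
    k = order[rng.choice(n, p=p)]          # sample a rank, flip that spin
    s[k] = -s[k]                           # EO accepts unconditionally
    e = energy(s)
    if e < best_e:
        best_e, best_s = e, s.copy()
print("best energy per spin:", round(best_e / n, 4))
```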

  10. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  11. Optimal Aide Security Information Search (OASIS)

    National Research Council Canada - National Science Library

    Kapadia, Chetna

    2005-01-01

    The purpose of the Optimal AIDE Security Information Search (OASIS) effort was to investigate and prototype a tool that can assist the network security analyst in collecting useful information to defend the networks they manage...

  12. Purchasing and inventory management techniques for optimizing inventory investment

    International Nuclear Information System (INIS)

    McFarlane, I.; Gehshan, T.

    1993-01-01

    In an effort to reduce operations and maintenance costs among nuclear plants, many utilities are taking a closer look at their inventory investment. Various approaches for inventory reduction have been used and discussed, but these approaches are often limited to an inventory management perspective. Interaction with purchasing and planning personnel to reduce inventory investment is a necessity in utility efforts to become more cost competitive. This paper addresses the activities that purchasing and inventory management personnel should conduct in an effort to optimize inventory investment while maintaining service-level goals. Other functions within a materials management organization, such as the warehousing and investment recovery functions, can contribute to optimizing inventory investment. However, these are not addressed in this paper because their contributions often come after inventory management and purchasing decisions have been made

  13. The role of principal in optimizing school climate in primary schools

    Science.gov (United States)

    Murtedjo; Suharningsih

    2018-01-01

    This article is based on the transformation of an elementary school that was once disregarded because of its low quality into the school of choice of the surrounding community, with many national achievements to its name. The article draws on research data collected in primary schools and focuses on the role of the school principal in efforts to optimize the school climate. The principal's role is described using a qualitative approach with a multi-site study design. Informants were selected using the snowball technique. Data were collected through in-depth interviews, participant observation, and documentation. Data credibility was checked using triangulation, member checks, and peer discussion, and auditability was verified by an auditor. The collected data were analyzed by within-site and cross-site analysis. The results show that the principal optimizes a conducive school climate by creating pleasant physical and socio-emotional conditions at the school, so that teachers carry out the learning process with enthusiasm and students learn happily, which ultimately improves learning achievement and school quality.

  14. The optimized baseline project: Reinventing environmental restoration at Hanford

    International Nuclear Information System (INIS)

    Goodenough, J.D.; Janaskie, M.T.; Kleinen, P.J.

    1994-01-01

    The U.S. Department of Energy Richland Operations Office (DOE-RL) is using a strategic planning effort (termed the Optimized Baseline Project) to develop a new approach to the Hanford Environmental Restoration program. This effort seeks to achieve a quantum leap improvement in performance through results oriented prioritization of activities. This effort was conducted in parallel with the renegotiation of the Tri-Party Agreement and provided DOE with an opportunity to propose innovative initiatives to promote cost effectiveness, accelerate progress in the Hanford Environmental Restoration Program and involve stakeholders in the decision-making process. The Optimized Baseline project is an innovative approach to program planning and decision-making in several respects. First, the process is a top down, value driven effort that responds to values held by DOE, the regulatory community and the public. Second, planning is conducted in a way that reinforces the technical management process at Richland, involves the regulatory community in substantive decisions, and includes the public. Third, the Optimized Baseline Project is being conducted as part of a sitewide Hanford initiative to reinvent Government. The planning process used for the Optimized Baseline Project has many potential applications at other sites and in other programs where there is a need to build consensus among diverse, independent groups of stakeholders and decisionmakers. The project has successfully developed and demonstrated an innovative approach to program planning that accelerates the pace of cleanup, involves the regulators as partners with DOE in priority setting, and builds public understanding and support for the program through meaningful opportunities for involvement

  15. Production and efficiency of large wildland fire suppression effort: A stochastic frontier analysis.

    Science.gov (United States)

    Katuwal, Hari; Calkin, David E; Hand, Michael S

    2016-01-15

    This study examines the production and efficiency of wildland fire suppression effort. We estimate the effectiveness of suppression resource inputs to produce controlled fire lines that contain large wildland fires using stochastic frontier analysis. Determinants of inefficiency are identified and the effects of these determinants on the daily production of controlled fire line are examined. Results indicate that the use of bulldozers and fire engines increase the production of controlled fire line, while firefighter crews do not tend to contribute to controlled fire line production. Production of controlled fire line is more efficient if it occurs along natural or built breaks, such as rivers and roads, and within areas previously burned by wildfires. However, results also indicate that productivity and efficiency of the controlled fire line are sensitive to weather, landscape and fire characteristics. Copyright © 2015 Elsevier Ltd. All rights reserved.
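    The stochastic frontier specification underlying such an analysis, written in an assumed textbook form rather than the paper's exact notation, separates symmetric noise from one-sided inefficiency:

```latex
\begin{equation}
  \ln y_{ft} = \mathbf{x}_{ft}^{\prime}\,\boldsymbol{\beta} + v_{ft} - u_{ft},
  \qquad v_{ft} \sim N(0, \sigma_v^2), \quad u_{ft} \ge 0,
\end{equation}
% where y_{ft} is the controlled fire line produced by fire f on day t,
% x_{ft} collects suppression inputs (crews, engines, bulldozers) and
% conditions, v_{ft} is random noise, and the one-sided term u_{ft}
% captures inefficiency, whose determinants can then be modeled.
```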

  16. An enhanced unified uncertainty analysis approach based on first order reliability method with single-level optimization

    International Nuclear Information System (INIS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; Tooren, Michel van

    2013-01-01

    In engineering, there exist both aleatory uncertainties due to the inherent variation of the physical system and its operational environment, and epistemic uncertainties due to lack of knowledge, which can be reduced with the collection of more data. To analyze the uncertain distribution of the system performance under both aleatory and epistemic uncertainties, combined probability and evidence theory can be employed to quantify the compound effects of the mixed uncertainties. The existing First Order Reliability Method (FORM) based Unified Uncertainty Analysis (UUA) approach nests the optimization-based interval analysis in the improved Hasofer–Lind–Rackwitz–Fiessler (iHLRF) algorithm based Most Probable Point (MPP) searching procedure, which is computationally prohibitive for complex systems and may encounter convergence problems as well. Therefore, in this paper it is proposed to use general optimization solvers to search for the MPP in the outer loop and then reformulate the double-loop optimization problem into an equivalent single-level optimization (SLO) problem, so as to simplify the uncertainty analysis process, improve the robustness of the algorithm, and alleviate the computational complexity. The effectiveness and efficiency of the proposed method are demonstrated with two numerical examples and one practical satellite conceptual design problem. -- Highlights: ► Uncertainty analysis under mixed aleatory and epistemic uncertainties is studied. ► A unified uncertainty analysis method is proposed with combined probability and evidence theory. ► The traditional nested analysis method is converted to single-level optimization for efficiency. ► The effectiveness and efficiency of the proposed method are demonstrated with three examples
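    The MPP search that the paper restructures can be sketched in a few lines. Below is the classical HL-RF iteration on an arbitrary illustrative limit state in standard normal space; the reliability index β is the distance from the origin to the MPP:

```python
# HL-RF iteration for the FORM Most Probable Point (MPP).
import numpy as np
from scipy.stats import norm

def g(u):       # illustrative limit state, g(u) <= 0 means failure
    return 3.0 - u[0] - u[1] + 0.1 * u[0]**2

def grad_g(u):
    return np.array([-1.0 + 0.2 * u[0], -1.0])

u = np.zeros(2)
for _ in range(100):
    gu, dg = g(u), grad_g(u)
    u_new = (dg @ u - gu) / (dg @ dg) * dg   # HL-RF update
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)
print(f"MPP = {np.round(u, 4)}, beta = {beta:.3f}, "
      f"Pf ~ {norm.cdf(-beta):.2e}")
```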

  17. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides...... reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques....

  18. The Relationship between High Flow Nasal Cannula Flow Rate and Effort of Breathing in Children.

    Science.gov (United States)

    Weiler, Thomas; Kamerkar, Asavari; Hotz, Justin; Ross, Patrick A; Newth, Christopher J L; Khemani, Robinder G

    2017-10-01

    To use an objective metric of effort of breathing, the pressure-rate product (PRP), to determine optimal high flow nasal cannula (HFNC) flow rates in children studied at flow rates of 0.5, 1.0, 1.5, and 2.0 L/kg/minute. For a subgroup of patients, 2 different HFNC delivery systems (Fisher & Paykel [Auckland, New Zealand] and Vapotherm [Exeter, New Hampshire]) were compared. Twenty-one patients (49 titration episodes) were studied. The most common diagnoses were bronchiolitis and pneumonia. Overall, there was a significant difference in the percent change in PRP from baseline (of 0.5 L/kg/minute) with increasing flow rates for the entire cohort (P < .001), with a greater decrease in PRP for patients ≤8 kg as flow rates were increased (P = .001) than for patients >8 kg. The optimal HFNC flow rate to reduce effort of breathing in infants and young children is approximately 1.5-2.0 L/kg/minute, with more benefit seen in children ≤8 kg. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Structural Optimization based on the Concept of First Order Analysis

    International Nuclear Information System (INIS)

    Shinji, Nishiwaki; Hidekazu, Nishigaki; Yasuaki, Tsurumi; Yoshio, Kojima; Noboru, Kikuchi

    2002-01-01

    Computer Aided Engineering (CAE) has been successfully utilized in mechanical industries such as the automotive industry. It is, however, difficult for most mechanical design engineers to directly use CAE due to the sophisticated nature of the operations involved. In order to mitigate this problem, a new type of CAE, First Order Analysis (FOA) has been proposed. This paper presents the outcome of research concerning the development of a structural topology optimization methodology within FOA. This optimization method is constructed based on discrete and function-oriented elements such as beam and panel elements, and sequential convex programming. In addition, examples are provided to show the utility of the methodology presented here for mechanical design engineers

  20. Optimization modeling with spreadsheets

    CERN Document Server

    Baker, Kenneth R

    2015-01-01

    An accessible introduction to optimization analysis using spreadsheets Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that il

  1. Analyzing and Predicting Effort Associated with Finding and Fixing Software Faults

    Science.gov (United States)

    Hamill, Maggie; Goseva-Popstojanova, Katerina

    2016-01-01

    Context: Software developers spend a significant amount of time fixing faults. However, not many papers have addressed the actual effort needed to fix software faults. Objective: The objective of this paper is twofold: (1) analysis of the effort needed to fix software faults and how it was affected by several factors and (2) prediction of the level of fix implementation effort based on the information provided in software change requests. Method: The work is based on data related to 1200 failures, extracted from the change tracking system of a large NASA mission. The analysis includes descriptive and inferential statistics. Predictions are made using three supervised machine learning algorithms and three sampling techniques aimed at addressing the imbalanced data problem. Results: Our results show that (1) 83% of the total fix implementation effort was associated with only 20% of failures. (2) Both safety critical failures and post-release failures required three times more effort to fix compared to non-critical and pre-release counterparts, respectively. (3) Failures with fixes spread across multiple components or across multiple types of software artifacts required more effort. The spread across artifacts was more costly than spread across components. (4) Surprisingly, some types of faults associated with later life-cycle activities did not require significant effort. (5) The level of fix implementation effort was predicted with 73% overall accuracy using the original, imbalanced data. Using oversampling techniques improved the overall accuracy up to 77%. More importantly, oversampling significantly improved the prediction of the high level effort, from 31% to around 85%. Conclusions: This paper shows the importance of tying software failures to changes made to fix all associated faults, in one or more software components and/or in one or more software artifacts, and the benefit of studying how the spread of faults and other factors affect the fix implementation effort.
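
    As a sketch of the pipeline described above (supervised classification of effort level plus oversampling for the imbalanced classes), the following Python fragment uses synthetic stand-in features and a random forest; the paper's actual features, algorithms, and NASA data are not reproduced here.

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical features extracted from change requests (e.g., number of
# components touched, artifact types changed, severity flags).
rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 5))
y = rng.choice(["low", "medium", "high"], size=1200, p=[0.7, 0.2, 0.1])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority effort levels in the training split only.
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))
```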

  2. Estimation of inspection effort

    International Nuclear Information System (INIS)

    Mullen, M.F.; Wincek, M.A.

    1979-06-01

    An overview of IAEA inspection activities is presented, and the problem of evaluating the effectiveness of an inspection is discussed. Two models are described - an effort model and an effectiveness model. The effort model breaks the IAEA's inspection effort into components; the amount of effort required for each component is estimated; and the total effort is determined by summing the effort for each component. The effectiveness model quantifies the effectiveness of inspections in terms of probabilities of detection and quantities of material to be detected, if diverted over a specific period. The method is applied to a 200 metric ton per year low-enriched uranium fuel fabrication facility. A description of the model plant is presented, a safeguards approach is outlined, and sampling plans are calculated. The required inspection effort is estimated and the results are compared to IAEA estimates. Some other applications of the method are discussed briefly. Examples are presented which demonstrate how the method might be useful in formulating guidelines for inspection planning and in establishing technical criteria for safeguards implementation
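
    A minimal sketch of the two models, assuming illustrative component efforts and the standard attribute-sampling detection formula often used in safeguards contexts (the report's actual components and sampling plans differ):

```python
from math import comb

# Effort model: total inspection effort is the sum over components
# (component names and person-day values are hypothetical).
effort_components = {"records audit": 12.0, "item counting": 20.0,
                     "NDA measurements": 16.0, "sample analysis": 8.0}
total_effort = sum(effort_components.values())  # person-days

# Effectiveness model: probability of detecting at least one defective item
# when n of N items are verified and d items would be affected by a diversion.
def detection_probability(N, n, d):
    if d == 0 or n == 0:
        return 0.0
    return 1.0 - comb(N - d, n) / comb(N, n)

print(f"total effort: {total_effort} person-days")
print(f"P(detect) = {detection_probability(N=500, n=50, d=10):.3f}")
```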

  3. Strategy of arm movement control is determined by minimization of neural effort for joint coordination.

    Science.gov (United States)

    Dounskaia, Natalia; Shimansky, Yury

    2016-06-01

    Optimality criteria underlying organization of arm movements are often validated by testing their ability to adequately predict hand trajectories. However, kinematic redundancy of the arm allows production of the same hand trajectory through different joint coordination patterns. We therefore consider movement optimality at the level of joint coordination patterns. A review of studies of multi-joint movement control suggests that a 'trailing' pattern of joint control is consistently observed, during which a single ('leading') joint is rotated actively and interaction torque produced by this joint is the primary contributor to the motion of the other ('trailing') joints. A tendency to use the trailing pattern whenever the kinematic redundancy is sufficient and increased utilization of this pattern during skillful movements suggest optimality of the trailing pattern. The goal of this study is to determine the cost function whose minimization predicts the trailing pattern. We show that extensive experimental testing of many known cost functions cannot successfully explain optimality of the trailing pattern. We therefore propose a novel cost function that represents neural effort for joint coordination. That effort is quantified as the cost of neural information processing required for joint coordination. We show that a tendency to reduce this 'neurocomputational' cost predicts the trailing pattern and that the theoretically developed predictions fully agree with the experimental findings on control of multi-joint movements. Implications of the suggested interpretation of the trailing joint control pattern, and of the theory of joint coordination underlying it, for future research are discussed.

  4. Compensatory Analysis and Optimization for MADM for Heterogeneous Wireless Network Selection

    Directory of Open Access Journals (Sweden)

    Jian Zhou

    2016-01-01

    Full Text Available In the next-generation heterogeneous wireless networks, a mobile terminal with multiple interfaces may have network access from different service providers using various technologies. In spite of this heterogeneity, seamless intersystem mobility is a mandatory requirement. One of the major challenges for seamless mobility is the creation of a network selection scheme, which enables users to select an optimal network with the best overall performance among different types of networks. However, the optimal network may not be the most reasonable one due to the compensation inherent in MADM (Multiple Attribute Decision Making), and such a network is called a pseudo-optimal network. This paper conducts a performance evaluation of a number of widely used MADM-based methods for network selection that aim to keep mobile users always best connected anywhere and anytime, where both subjective weights and objective weights are considered. The performance analysis shows that the selection scheme based on MEW (weighted multiplicative method) and combination weights can better avoid accessing a pseudo-optimal network, balancing network load and reducing the ping-pong effect in comparison with three other MADM solutions.
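
    The MEW score the analysis favors is a weighted product of normalized attribute values. A small Python sketch with assumed attributes and weights and a simple 50/50 combination of subjective and objective weights (the paper's combination rule may differ):

```python
import numpy as np

# Candidate networks scored on benefit-type attributes (higher is better).
# Rows: networks; columns: e.g. bandwidth, 1/delay, 1/cost -- illustrative.
scores = np.array([[10.0, 0.02, 0.5],
                   [ 6.0, 0.05, 1.0],
                   [ 8.0, 0.04, 0.8]])

subjective_w = np.array([0.5, 0.3, 0.2])    # e.g. from AHP
objective_w = np.array([0.4, 0.4, 0.2])     # e.g. from entropy weighting
w = 0.5 * subjective_w + 0.5 * objective_w  # assumed combination weight

# MEW: weighted product of column-normalized attribute values.
norm = scores / scores.sum(axis=0)
mew = np.prod(norm ** w, axis=1)
print("MEW scores:", mew, "-> select network", int(np.argmax(mew)))
```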

  5. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jakeman, John Davis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stephens, John Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vigil, Dena M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wildey, Timothy Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bohnhoff, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hu, Kenneth T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dalbey, Keith R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bauman, Lara E [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hough, Patricia Diane [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  6. Shape Optimization of Impeller Blades for 15,000 HP Centrifugal Compressor Using Fluid Structural Interaction Analysis

    International Nuclear Information System (INIS)

    Kang, Hyun Su; Oh, Jeongsu; Han, Jeong Sam

    2014-01-01

    This paper discusses a one-way fluid structural interaction (FSI) analysis and shape optimization of the impeller blades for a 15,000 HP centrifugal compressor using the response surface method (RSM). Because both the aerodynamic performance and the structural safety of the impeller are affected by the shape of its blades, shape optimization is necessary using the FSI analysis, which includes a structural analysis for the induced fluid pressure and centrifugal force. The FSI analysis is performed in ANSYS Workbench: ANSYS CFX is used for the flow field and ANSYS Mechanical is used for the structural field. The response surfaces for the FSI results (efficiency, pressure ratio, maximum stress, etc.) generated based on the design of experiments (DOE) are used to find an optimal shape for the impeller blades, which provides the maximum aerodynamic performance subject to the structural safety constraints

  7. Shape Optimization of Impeller Blades for 15,000 HP Centrifugal Compressor Using Fluid Structural Interaction Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Su [Sungkyunkwan University, Suwon (Korea, Republic of); Oh, Jeongsu [Daejoo Machinery Co., Daegu (Korea, Republic of); Han, Jeong Sam [Andong National University, Andong (Korea, Republic of)

    2014-06-15

    This paper discusses a one-way fluid structural interaction (FSI) analysis and shape optimization of the impeller blades for a 15,000 HP centrifugal compressor using the response surface method (RSM). Because both the aerodynamic performance and the structural safety of the impeller are affected by the shape of its blades, shape optimization is necessary using the FSI analysis, which includes a structural analysis for the induced fluid pressure and centrifugal force. The FSI analysis is performed in ANSYS Workbench: ANSYS CFX is used for the flow field and ANSYS Mechanical is used for the structural field. The response surfaces for the FSI results (efficiency, pressure ratio, maximum stress, etc.) generated based on the design of experiments (DOE) are used to find an optimal shape for the impeller blades, which provides the maximum aerodynamic performance subject to the structural safety constraints.
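
    A compact Python sketch of the RSM workflow these two records outline: fit quadratic response surfaces to DOE samples, then optimize the predicted objective under a constraint. The samples, responses, and the 340 MPa stress limit below are synthetic placeholders, not the paper's FSI results.

```python
import numpy as np
from scipy.optimize import minimize

# DOE samples of two blade-shape parameters (normalized), with synthetic
# responses standing in for the FSI outputs (efficiency, maximum stress).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
eff = 0.85 - 0.05 * (X[:, 0] - 0.3) ** 2 - 0.04 * (X[:, 1] + 0.2) ** 2
stress = 300 + 80 * X[:, 0] + 40 * X[:, 1] ** 2

def quad_features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=-1)

# Least-squares fit of quadratic response surfaces for both outputs.
A = quad_features(X)
c_eff, *_ = np.linalg.lstsq(A, eff, rcond=None)
c_str, *_ = np.linalg.lstsq(A, stress, rcond=None)

# Maximize predicted efficiency subject to an assumed stress limit.
res = minimize(lambda x: -quad_features(x) @ c_eff, x0=np.zeros(2),
               bounds=[(-1, 1)] * 2,
               constraints=[{"type": "ineq",
                             "fun": lambda x: 340.0 - quad_features(x) @ c_str}])
print("optimal shape parameters:", res.x)
```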

  8. Design analysis for optimal calibration of diffusivity in reactive multilayers

    KAUST Repository

    Vohra, Manav

    2017-05-29

    Calibration of the uncertain Arrhenius diffusion parameters for quantifying mixing rates in Zr–Al nanolaminate foils has previously been performed in a Bayesian setting [M. Vohra, J. Winokur, K.R. Overdeep, P. Marcello, T.P. Weihs, and O.M. Knio, Development of a reduced model of formation reactions in Zr–Al nanolaminates, J. Appl. Phys. 116(23) (2014): Article No. 233501]. The parameters were inferred in a low-temperature, homogeneous ignition regime, and a high-temperature self-propagating reaction regime. In this work, we extend the analysis to determine optimal experimental designs that would provide the best data for inference. We employ a rigorous framework that quantifies the expected information gain in an experiment, and find the optimal design conditions using Monte Carlo techniques, sparse quadrature, and polynomial chaos surrogates. For the low-temperature regime, we find the optimal foil heating rate and pulse duration, and confirm through simulation that the optimal design indeed leads to sharp posterior distributions of the diffusion parameters. For the high-temperature regime, we demonstrate the potential for increasing the expected information gain concerning the posteriors by increasing the sample size and reducing the uncertainty in measurements. Moreover, posterior marginals are also obtained to verify favourable experimental scenarios.
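
    The expected-information-gain criterion at the heart of this design framework can be sketched with a standard nested Monte Carlo estimator. The forward model, prior, and noise level below are toy assumptions; the paper's sparse quadrature and polynomial chaos surrogates are omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: the observable at design condition d depends on an
# uncertain Arrhenius-like parameter theta (illustrative, not the paper's model).
def forward(theta, d):
    return np.exp(-theta / d)

def expected_information_gain(d, sigma=0.05, n_outer=500, n_inner=500):
    """Nested Monte Carlo estimator of the expected information gain (EIG)."""
    theta_out = rng.normal(1.0, 0.2, n_outer)                  # prior samples
    y = forward(theta_out, d) + rng.normal(0, sigma, n_outer)  # simulated data

    # Log-likelihood of each y under its own theta (constants cancel below).
    log_like = -0.5 * ((y - forward(theta_out, d)) / sigma) ** 2

    # Log marginal likelihood (evidence), estimated with inner prior samples.
    theta_in = rng.normal(1.0, 0.2, n_inner)
    diffs = y[:, None] - forward(theta_in[None, :], d)
    log_evid = np.log(np.mean(np.exp(-0.5 * (diffs / sigma) ** 2), axis=1))

    return np.mean(log_like - log_evid)

# Compare candidate designs (e.g., heating rates); pick the most informative.
designs = [0.5, 1.0, 2.0]
print({d: round(expected_information_gain(d), 3) for d in designs})
```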

  9. Coupled adjoint aerostructural wing optimization using quasi-three-dimensional aerodynamic analysis

    NARCIS (Netherlands)

    Elham, A.; van Tooren, M.J.L.

    2016-01-01

    This paper presents a method for wing aerostructural analysis and optimization, which requires much lower computational cost, while computing the wing drag and structural deformation with a level of accuracy comparable to higher-fidelity CFD and FEM tools. A quasi-three-dimensional aerodynamic

  10. Optimal organization of structural analysis and site inspection for the seismic requalification of the nuclear power plant of Paks, Hungary

    International Nuclear Information System (INIS)

    Fregonese, R.

    1995-01-01

    The analysis described in this report deals with a numerical procedure aimed at the assessment of a methodology for the optimal organization of data collection, in the context of seismic requalification of structures and components of existing nuclear power stations (NPPs). The activity has been carried out in the frame of the IAEA benchmark study for the seismic analysis of existing Nuclear Power Plants. This study starts from the assumption that seismic qualification of existing NPPs usually has to be carried out even in the absence of sufficient data on structural behaviour and site conditions. In this framework, the organization of the analysis possibly requires a special approach, based on reliability analysis, able to give the distributions of dependent structural variables. This result can in fact be used in iterative updating of the analysis, leading at last to a required uncertainty target level for the structural evaluation. Therefore, the global uncertainty can be reduced by reducing the uncertainties of the variables that most affect the structural behaviour: the proposed procedure is able to drive this process in an optimal way. The analysis manager can therefore organize additional experimental inspections (for example in geotechnics, geophysics, structural behaviour) and data collections with the confidence of a minimum effort required for the prescribed target in terms of seismic safety. The procedure presented in this report has quite general applicability; following the general description, the example test has been chosen for the Paks NPP in Hungary, where a seismic requalification is in progress. To this aim, in the following, specific reference will be made to the variables of interest for the ongoing job, namely: the probability distribution of some structural parameters, such as acceleration or shear force in critical points, giving a global overview on the reliability of structural calculations; the sensitivity coefficient

  11. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion...

  12. On the shape optimization of flapping wings and their performance analysis

    KAUST Repository

    Ghommem, Mehdi

    2014-01-01

    The present work is concerned with the shape optimization of flapping wings in forward flight. The analysis is performed by combining a gradient-based optimizer with the unsteady vortex lattice method (UVLM). We describe the UVLM simulation procedure and provide the first methodology to properly select the mesh and time-step sizes so as to achieve invariant UVLM simulation results under mesh refinement. Our objective is to identify a set of optimized shapes that maximize the propulsive efficiency, defined as the ratio of the propulsive power over the aerodynamic power, under lift, thrust, and area constraints. Several parameters affecting flight performance are investigated and their impact is described. These include the wing's aspect ratio, camber line, and curvature of the leading and trailing edges. This study provides guidance for shape design of engineered flying systems. © 2013 Elsevier Masson SAS.

  13. Development and validation of automatic tools for interactive recurrence analysis in radiation therapy: optimization of treatment algorithms for locally advanced pancreatic cancer.

    Science.gov (United States)

    Kessel, Kerstin A; Habermehl, Daniel; Jäger, Andreas; Floca, Ralf O; Zhang, Lanlan; Bendl, Rolf; Debus, Jürgen; Combs, Stephanie E

    2013-06-07

    In radiation oncology recurrence analysis is an important part in the evaluation process and clinical quality assurance of treatment concepts. With the example of 9 patients with locally advanced pancreatic cancer we developed and validated interactive analysis tools to support the evaluation workflow. After an automatic registration of the radiation planning CTs with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes the DVH (dose volume histogram) statistic is calculated, followed by the determination of the dose applied to the region of recurrence and the distance between the boost and recurrence volume. We calculated the percentage of the recurrence volume within the 80%-isodose volume and compared it to the location of the recurrence within the boost volume, boost + 1 cm, boost + 1.5 cm and boost + 2 cm volumes. Recurrence analysis of 9 patients demonstrated that all recurrences except one occurred within the defined GTV/boost volume; one recurrence developed beyond the field border/outfield. With the defined distance volumes in relation to the recurrences, we could show that 7 recurrent lesions were within the 2 cm radius of the primary tumor. Two large recurrences extended beyond the 2 cm, however, this might be due to very rapid growth and/or late detection of the tumor progression. The main goal of using automatic analysis tools is to reduce time and effort conducting clinical analyses. We showed a first approach and use of a semi-automated workflow for recurrence analysis, which will be continuously optimized. In conclusion, despite the limitations of the automatic calculations we contributed to in-house optimization of subsequent study concepts based on an improved and validated target volume definition.

  14. Design, Analysis and Optimization of a Solar Dish/Stirling System

    Directory of Open Access Journals (Sweden)

    Seyyed Danial Nazemi

    2016-02-01

    scenario of the optimization, while the system variables changed slightly. Article History: Received Sept 28, 2015; Received in revised form January 08, 2016; Accepted February 15, 2016; Available online How to Cite This Article: Nazemi, S. D. and Boroushaki, M. (2016) Design, Analysis and Optimization of a Solar Dish/Stirling System. Int. Journal of Renewable Energy Development, 5(1), 33-42. http://dx.doi.org/10.14710/ijred.5.1.33-42

  15. Linking effort and fishing mortality in a mixed fisheries model

    DEFF Research Database (Denmark)

    Thøgersen, Thomas Talund; Hoff, Ayoe; Frost, Hans Staby

    2012-01-01

    in fish stocks has led to overcapacity in many fisheries, leading to incentives for overfishing. Recent research has shown that the allocation of effort among fleets can play an important role in mitigating overfishing when the targeting covers a range of species (multi-species—i.e., so-called mixed...... fisheries), while simultaneously optimising the overall economic performance of the fleets. The so-called FcubEcon model, in particular, has elucidated both the biologically and economically optimal method for allocating catches—and thus effort—between fishing fleets, while ensuring that the quotas...
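
    The effort-allocation idea can be illustrated as a small linear program: choose fleet effort levels that maximize profit while keeping the catch of each stock within its quota. The numbers below are invented for illustration and do not come from the FcubEcon model.

```python
from scipy.optimize import linprog

# Two fleets, two stocks: profit per unit effort and catch per unit effort
# (illustrative values only).
profit = [12.0, 9.0]                  # profit per effort unit, fleets A and B
catch_per_effort = [[0.8, 0.3],       # stock 1 caught per effort unit of A, B
                    [0.2, 0.6]]       # stock 2
quota = [400.0, 300.0]                # TAC per stock
max_effort = [600.0, 600.0]           # capacity limit per fleet

# Maximize total profit (linprog minimizes, hence the sign flip) subject to
# quota constraints: total catch of each stock must not exceed its TAC.
res = linprog(c=[-p for p in profit],
              A_ub=catch_per_effort, b_ub=quota,
              bounds=list(zip([0, 0], max_effort)))
print("optimal effort per fleet:", res.x, "profit:", -res.fun)
```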

  16. A community effort to protect genomic data sharing, collaboration and outsourcing.

    Science.gov (United States)

    Wang, Shuang; Jiang, Xiaoqian; Tang, Haixu; Wang, Xiaofeng; Bu, Diyue; Carey, Knox; Dyke, Stephanie Om; Fox, Dov; Jiang, Chao; Lauter, Kristin; Malin, Bradley; Sofia, Heidi; Telenti, Amalio; Wang, Lei; Wang, Wenhao; Ohno-Machado, Lucila

    2017-01-01

    The human genome can reveal sensitive information and is potentially re-identifiable, which raises privacy and security concerns about sharing such data on wide scales. In 2016, we organized the third Critical Assessment of Data Privacy and Protection competition as a community effort to bring together biomedical informaticists, computer privacy and security researchers, and scholars in ethical, legal, and social implications (ELSI) to assess the latest advances on privacy-preserving techniques for protecting human genomic data. Teams were asked to develop novel protection methods for emerging genome privacy challenges in three scenarios: Track (1) data sharing through the Beacon service of the Global Alliance for Genomics and Health; Track (2) collaborative discovery of similar genomes between two institutions; and Track (3) data outsourcing to public cloud services. The latter two tracks represent continuing themes from our 2015 competition, while the former was new and a response to a recently established vulnerability. The winning strategy for Track 1 mitigated the privacy risk by hiding approximately 11% of the variation in the database while permitting around 160,000 queries, a significant improvement over the baseline. The winning strategies in Tracks 2 and 3 showed significant progress over the previous competition by achieving multiple orders of magnitude performance improvement in terms of computational runtime and memory requirements. The outcomes suggest that applying highly optimized privacy-preserving and secure computation techniques to safeguard genomic data sharing and analysis is useful. However, the results also indicate that further efforts are needed to refine these techniques into practical solutions.

  17. On using priced timed automata to achieve optimal scheduling

    DEFF Research Database (Denmark)

    Rasmussen, Jacob Illum; Larsen, Kim Guldstrand; Subramani, K.

    2006-01-01

    This contribution reports on the considerable effort made recently towards extending and applying well-established timed automata technology to optimal scheduling and planning problems. The effort of the authors in this direction has to a large extent been carried out as part of the European proj...... of so-called priced timed automata....

  18. The economics of project analysis: Optimal investment criteria and methods of study

    Science.gov (United States)

    Scriven, M. C.

    1979-01-01

    Insight is provided toward the development of an optimal program for investment analysis of project proposals offering commercial potential and its components. This involves a critique of economic investment criteria viewed in relation to requirements of engineering economy analysis. An outline for a systems approach to project analysis is given. Application of the Leontief input-output methodology to the analysis of projects involving multiple processes and products is investigated. Effective application of elements of neoclassical economic theory to investment analysis of project components is demonstrated. Patterns of both static and dynamic activity levels are incorporated.
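
    The Leontief step mentioned above solves x = (I − A)⁻¹d for the gross output x needed to satisfy final demand d, given the inter-process technology matrix A. A minimal worked example with assumed coefficients:

```python
import numpy as np

# Leontief input-output: x = (I - A)^(-1) d, where A is the technology matrix
# (inter-process input requirements) and d the final demand (illustrative).
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
d = np.array([100.0, 50.0])

x = np.linalg.solve(np.eye(2) - A, d)
print("required gross output per process:", x)
```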

  19. Polynomial optimization : Error analysis and applications

    NARCIS (Netherlands)

    Sun, Zhao

    2015-01-01

    Polynomial optimization is the problem of minimizing a polynomial function subject to polynomial inequality constraints. In this thesis we investigate several hierarchies of relaxations for polynomial optimization problems. Our main interest lies in understanding their performance, in particular how

  20. Effortful echolalia.

    Science.gov (United States)

    Hadano, K; Nakamura, H; Hamanaka, T

    1998-02-01

    We report three cases of effortful echolalia in patients with cerebral infarction. The clinical picture of speech disturbance is associated with Type 1 Transcortical Motor Aphasia (TCMA; Goldstein, 1915). The patients always spoke nonfluently with loss of speech initiative, dysarthria, dysprosody, agrammatism, and increased effort, and were unable to repeat sentences longer than those containing four or six words. In conversation, they first repeated a few words spoken to them, and then produced self-initiated speech. The initial repetition as well as the subsequent self-initiated speech, which were realized equally laboriously, can be regarded as mitigated echolalia (Pick, 1924). They were always aware of their own echolalia and tried to control it without effect. These cases demonstrate that neither the ability to repeat nor fluent speech is always necessary for echolalia. The possibility that a lesion in the left medial frontal lobe, including the supplementary motor area, plays an important role in effortful echolalia is discussed.

  1. Off-road vehicle dynamics analysis, modelling and optimization

    CERN Document Server

    Taghavifar, Hamid

    2017-01-01

    This book deals with the analysis of off-road vehicle dynamics from kinetics and kinematics perspectives and with the performance of vehicles traversing rough and irregular terrain. The authors consider wheel performance, soil-tire interactions and their interface, tractive performance of the vehicle, ride comfort, stability during maneuvering, transient and steady-state conditions of vehicle traverse, the modeling of the aforementioned aspects, and optimization from energetic and vehicle-mobility perspectives. The book presents novel results for transient dynamics and original wheel-terrain dynamics under on-the-go conditions.

  2. Sensitivity analysis and design optimization through automatic differentiation

    International Nuclear Information System (INIS)

    Hovland, Paul D; Norris, Boyana; Strout, Michelle Mills; Bhowmick, Sanjukta; Utke, Jean

    2005-01-01

    Automatic differentiation is a technique for transforming a program or subprogram that computes a function, including arbitrarily complex simulation codes, into one that computes the derivatives of that function. We describe the implementation and application of automatic differentiation tools. We highlight recent advances in the combinatorial algorithms and compiler technology that underlie successful implementation of automatic differentiation tools. We discuss applications of automatic differentiation in design optimization and sensitivity analysis. We also describe ongoing research in the design of language-independent source transformation infrastructures for automatic differentiation algorithms
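
    The core idea of forward-mode automatic differentiation can be shown in a few lines with dual numbers, where arithmetic on (value, derivative) pairs propagates exact derivatives alongside values; this is a pedagogical sketch, not the source-transformation tooling the record describes.

```python
from dataclasses import dataclass

# Forward-mode automatic differentiation via dual numbers: each value carries
# its derivative, and arithmetic propagates both exactly (no finite differences).
@dataclass
class Dual:
    val: float
    dot: float  # derivative with respect to the chosen input

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o, 0.0)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o, 0.0)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3.0 * x * x + 2.0 * x + 1.0

# Seed dot = 1 to differentiate with respect to x; f'(2) = 6*2 + 2 = 14.
print(f(Dual(2.0, 1.0)))  # Dual(val=17.0, dot=14.0)
```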

  3. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    Science.gov (United States)

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, different from previous modeling efforts: the previous ones focused on addressing uncertainty in physical parameters (e.g., soil porosity) while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (where only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. © 2009 Elsevier B.V. All rights reserved.

  4. QR Codes in the Library: Are They Worth the Effort? Analysis of a QR Code Pilot Project

    OpenAIRE

    Wilson, Andrew M.

    2012-01-01

    The literature is filled with potential uses for Quick Response (QR) codes in the library setting, but few library QR code projects have publicized usage statistics. A pilot project carried out in the Eda Kuhn Loeb Music Library of the Harvard College Library sought to determine whether library patrons actually understand and use QR codes. Results and analysis of the pilot project are provided, attempting to answer the question as to whether QR codes are worth the effort for libraries.

  5. Comprehensive Feature-Based Landscape Analysis of Continuous and Constrained Optimization Problems Using the R-Package flacco

    OpenAIRE

    Kerschke, Pascal

    2017-01-01

    Choosing the best-performing optimizer(s) out of a portfolio of optimization algorithms is usually a difficult and complex task. It gets even worse, if the underlying functions are unknown, i.e., so-called Black-Box problems, and function evaluations are considered to be expensive. In the case of continuous single-objective optimization problems, Exploratory Landscape Analysis (ELA) - a sophisticated and effective approach for characterizing the landscapes of such problems by means of numeric...

  6. Exergy Analysis and Optimization of an Alpha Type Stirling Engine Using the Implicit Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    James A. Wills

    2017-12-01

    Full Text Available This paper presents the exergy analysis and optimization of the Stirling engine, which has enormous potential for use in the renewable energy industry as it is quiet, efficient, and can operate with a variety of different heat sources and, therefore, has multi-fuel capabilities. This work aims to present a method that can be used by a Stirling engine designer to quickly and efficiently find near-optimal or optimal Stirling engine geometry and operating conditions. The model applies the exergy analysis methodology to the ideal-adiabatic Stirling engine model. In the past, this analysis technique has only been applied to highly idealized Stirling cycle models, and this study shows its use in the realm of Stirling cycle optimization when applied to a more complex model. The implicit filtering optimization algorithm is used to optimize the engine as it quickly and efficiently computes the optimal geometry and operating frequency that give maximum net work output at a fixed energy input. A numerical example of a 1,000 cm³ engine is presented, where the geometry and operating frequency of the engine are optimized for four different regenerator mesh types, varying heater inlet temperature, and a fixed energy input of 15 kW. The WN200 mesh is seen to perform best of the four mesh types analyzed, giving the greatest net work output and efficiency. The optimal values of several different engine parameters are presented in the work. It is shown that the net work output and efficiency increase with increasing heater inlet temperature. The optimal dead-volume ratio, swept volume ratio, operating frequency, and phase angle are all shown to decrease with increasing heater inlet temperature. In terms of the heat exchanger geometry, the heater and cooler tubes are seen to decrease in size and the cooler and heater effectiveness is seen to decrease with increasing heater temperature, whereas the regenerator is seen to increase in size and effectiveness. In

  7. Systems analysis as a tool for optimal process strategy

    International Nuclear Information System (INIS)

    Ditterich, K.; Schneider, J.

    1975-09-01

    For the description and optimal treatment of complex processes, the methods of Systems Analysis have in recent times been used as the most promising approach. In general, every process should be optimised with respect to reliability, safety, economy and environmental pollution. In this paper the complex relations between these general optimisation postulates are established in qualitative form. These general trend relations have to be quantified for every particular system studied in practice

  8. GPU based Monte Carlo for PET image reconstruction: parameter optimization

    International Nuclear Information System (INIS)

    Cserkaszky, Á; Légrády, D.; Wirth, A.; Bükki, T.; Patay, G.

    2011-01-01

    This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required in the iterative reconstruction algorithm, so to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows us to achieve the best image quality with the least possible computational effort. Based on this research, recommendations can be given for effective partitioning of computational effort into the iterations in limited-time reconstructions. (author)

  9. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    Science.gov (United States)

    Stults, Ian C.

    uncertainty present in analyses with 4 or fewer input variables could be effectively quantified using a strategic distribution creation method; if more than 4 input variables exist, a Frontier Finding Particle Swarm Optimization should instead be used. Once model uncertainty in contributing analysis code choices has been quantified, a selection method is required to determine which of these choices should be used in simulations. Because much of the selection done for engineering problems is driven by the physics of the problem, these are poor candidate problems for testing the true fitness of a candidate selection method. Specifically, moderate- and high-dimensional problems' variability can often be reduced to only a few dimensions, and scalability often cannot be easily addressed. For these reasons a simple academic function was created for the uncertainty quantification, and a canonical form of the Fidelity Selection Problem (FSP) was defined. Fifteen best- and worst-case scenarios were identified in an effort to challenge the candidate selection methods both with respect to the characteristics of the tradeoff between time cost and model uncertainty and with respect to the stringency of the constraints and problem dimensionality. The results from this experiment show that a Genetic Algorithm (GA) was able to consistently find the correct answer, but under certain circumstances, a discrete form of Particle Swarm Optimization (PSO) was able to find the correct answer more quickly. To better illustrate how the uncertainty quantification and discrete optimization might be conducted for a "real world" problem, an illustrative example was conducted using gas turbine engines.

  10. A Priori Implementation Effort Estimation for HW Design Based on Independent-Path Analysis

    DEFF Research Database (Denmark)

    Abildgren, Rasmus; Diguet, Jean-Philippe; Bomel, Pierre

    2008-01-01

    This paper presents a metric-based approach for estimating the hardware implementation effort (in terms of time) for an application in relation to the number of linear-independent paths of its algorithms. We exploit the relation between the number of edges and linear-independent paths in an algorithm and the corresponding implementation effort. We propose an adaptation of the concept of cyclomatic complexity, complemented with a correction function to take designers' learning curve and experience into account. Our experimental results, composed of a training and a validation phase, show that with the proposed approach it is possible to estimate the hardware implementation effort. This approach, part of our light design space exploration concept, is implemented in our framework "Design-Trotter" and offers a new type of tool that can help designers and managers to reduce the time-to-market factor.
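
    A sketch of the estimation idea, assuming a hypothetical per-path base effort and learning-curve factor (the actual calibration inside Design-Trotter is not given in this abstract):

```python
# Cyclomatic complexity of an algorithm's flow graph: V(G) = E - N + 2P,
# i.e., the number of linear-independent paths. The effort model below --
# a per-path base effort scaled by an experience correction -- is a
# hypothetical stand-in for the metric calibrated in Design-Trotter.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

def estimated_effort_hours(edges, nodes, base_hours_per_path=6.0,
                           experience_factor=1.0):
    paths = cyclomatic_complexity(edges, nodes)
    return paths * base_hours_per_path * experience_factor

# Example: a control-flow graph with 14 edges and 11 nodes -> V(G) = 5;
# a designer early on the learning curve gets a factor > 1.
print(estimated_effort_hours(14, 11, experience_factor=1.3))
```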

  11. Exergy analysis, parametric analysis and optimization for a novel combined power and ejector refrigeration cycle

    International Nuclear Information System (INIS)

    Dai Yiping; Wang Jiangfeng; Gao Lin

    2009-01-01

    A new combined power and refrigeration cycle is proposed, which combines the Rankine cycle and the ejector refrigeration cycle. This combined cycle produces both power output and refrigeration output simultaneously. It can be driven by the flue gas of a gas turbine or engine, solar energy, geothermal energy and industrial waste heat. An exergy analysis is performed to guide the thermodynamic improvement of this cycle, and a parametric analysis is conducted to evaluate the effects of the key thermodynamic parameters on the performance of the combined cycle. In addition, a parameter optimization is achieved by means of a genetic algorithm to reach the maximum exergy efficiency. The results show that the largest exergy loss due to irreversibility occurs in the heat addition processes, and the ejector causes the next largest exergy loss. It is also shown that the turbine inlet pressure, the turbine back pressure, the condenser temperature and the evaporator temperature have significant effects on the turbine power output, refrigeration output and exergy efficiency of the combined cycle. The optimized exergy efficiency is 27.10% under the given conditions.

  12. Optimization of rainfall networks using information entropy and temporal variability analysis

    Science.gov (United States)

    Wang, Wenqi; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin

    2018-04-01

    Rainfall networks are the most direct sources of precipitation data, and their optimization and evaluation are essential and important. Information entropy can not only represent the uncertainty of rainfall distribution but can also reflect the correlation and information transmission between rainfall stations. Using entropy, this study performs optimization of rainfall networks of similar size located in two big cities in China, Shanghai (in the Yangtze River basin) and Xi'an (in the Yellow River basin), with respect to temporal variability analysis. Through an easy-to-implement greedy ranking algorithm based on the criterion called Maximum Information Minimum Redundancy (MIMR), stations of the networks in the two areas (each area is further divided into two subareas) are ranked over sliding inter-annual series and under different meteorological conditions. It is found that observation series with different starting days affect the ranking, pointing to temporal variability during network evaluation. We propose a dynamic network evaluation framework for considering temporal variability, which ranks stations under different starting days with a fixed time window (1-year, 2-year, and 5-year). Therefore, we can identify rainfall stations which are temporarily of importance or redundancy and provide some useful suggestions for decision makers. The proposed framework can serve as a supplement to the primary MIMR optimization approach. In addition, during different periods (wet season or dry season) the optimal network from MIMR exhibits differences in entropy values, and the optimal network from the wet season tended to produce higher entropy values. Differences in the spatial distribution of the optimal networks suggest that optimizing the rainfall network for changing meteorological conditions is recommended.
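
    The greedy ranking can be sketched as follows, using histogram-based entropy and mutual information. This is a simplified information-minus-redundancy score in the spirit of MIMR, not the published criterion's exact objective, and the rainfall data are synthetic.

```python
import numpy as np

def entropy(x, bins=10):
    p = np.histogram(x, bins=bins)[0] / len(x)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def joint_entropy(x, y, bins=10):
    p = np.histogram2d(x, y, bins=bins)[0].ravel() / len(x)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(x, y):
    return entropy(x) + entropy(y) - joint_entropy(x, y)

def greedy_ranking(data, lam=0.8):
    """Rank stations (columns) by marginal information minus redundancy."""
    n = data.shape[1]
    selected, remaining = [], list(range(n))
    while remaining:
        def score(j):
            info = entropy(data[:, j])
            redund = sum(mutual_info(data[:, j], data[:, k]) for k in selected)
            return lam * info - (1 - lam) * redund
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rain = np.random.default_rng(3).gamma(2.0, 5.0, size=(365, 6))  # synthetic daily rainfall
print("station ranking:", greedy_ranking(rain))
```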

  13. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment

    Directory of Open Access Journals (Sweden)

    Mahmoud Shakouri

    2016-03-01

    Full Text Available The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be constructed which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in the Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems. Keywords: Community solar, Photovoltaic system, Portfolio theory, Energy optimization, Electricity volatility
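
    Although the article's PACPIM model and supplementary Matlab code are richer, the underlying mean-variance idea can be sketched in Python with the closed-form minimum-variance weights on synthetic generation data:

```python
import numpy as np

# Minimum-variance portfolio over candidate rooftop PV systems, from the
# covariance of their (synthetic) hourly generation. Closed-form Markowitz
# weights: w = C^(-1) 1 / (1^T C^(-1) 1); long-only is not enforced here.
rng = np.random.default_rng(4)
gen = rng.normal(loc=[3.0, 2.5, 2.8], scale=[0.9, 0.4, 0.6], size=(8760, 3))

C = np.cov(gen, rowvar=False)   # covariance of generation across systems
ones = np.ones(C.shape[0])
w = np.linalg.solve(C, ones)
w /= ones @ w

print("share of community investment per rooftop:", np.round(w, 3))
print("portfolio std vs. equal weights:",
      np.sqrt(w @ C @ w), np.sqrt(np.full(3, 1/3) @ C @ np.full(3, 1/3)))
```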

  14. Co-Optimization of Fuels and Engines (Co-Optima) -- Introduction

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, John T [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wagner, Robert [Oak Ridge National Laboratory; Holladay, John [Pacific Northwest National Laboratory

    2017-08-11

    The Co-Optimization of Fuels and Engines (Co-Optima) initiative is a U.S. Department of Energy (DOE) effort funded by both the Vehicle and Bioenergy Technology Offices. The overall goal of the effort is to identify the combinations of fuel properties and engine characteristics that maximize efficiency, independent of production pathway or fuel composition, and accelerate commercialization of these technologies. Multiple research efforts are underway focused on both spark-ignition and compression-ignition strategies applicable across the entire light, medium, and heavy-duty fleet. A key objective of Co-Optima's research is to identify new blendstocks that enhance current petroleum blending components, increase blendstock diversity, and provide refiners with increased flexibility to blend fuels with the key properties required to optimize advanced internal combustion engines. In addition to fuels and engines R&D, the initiative is guided by analyses assessing the near-term commercial feasibility of new blendstocks based on economics, environmental performance, compatibility, and large-scale production viability. This talk will provide an overview of the Co-Optima effort.

  15. Optimizing Endoscope Reprocessing Resources Via Process Flow Queuing Analysis.

    Science.gov (United States)

    Seelen, Mark T; Friend, Tynan H; Levine, Wilton C

    2018-05-04

    The Massachusetts General Hospital (MGH) is merging its older endoscope processing facilities into a single new facility that will enable high-level disinfection of endoscopes for both the ORs and Endoscopy Suite, leveraging economies of scale for improved patient care and optimal use of resources. Finalized resource planning was necessary for the merging of facilities to optimize staffing and make final equipment selections to support the nearly 33,000 annual endoscopy cases. To accomplish this, we employed operations management methodologies, analyzing the physical process flow of scopes throughout the existing Endoscopy Suite and ORs and mapping the future state capacity of the new reprocessing facility. Further, our analysis required the incorporation of historical case and reprocessing volumes in a multi-server queuing model to identify any potential wait times as a result of the new reprocessing cycle. We also performed sensitivity analysis to understand the impact of future case volume growth. We found that our future-state reprocessing facility, given planned capital expenditures for automated endoscope reprocessors (AERs) and pre-processing sinks, could easily accommodate current scope volume well within the necessary pre-cleaning-to-sink reprocessing time limit recommended by manufacturers. Further, in its current planned state, our model suggested that the future endoscope reprocessing suite at MGH could support an increase in volume of at least 90% over the next several years. Our work suggests that with simple mathematical analysis of historic case data, significant changes to a complex perioperative environment can be made with ease while keeping patient safety as the top priority.
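
    The multi-server queuing analysis can be approximated with the Erlang C formula for an M/M/c system; the arrival rate, cycle time, and AER counts below are back-of-the-envelope assumptions, not MGH's actual figures.

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean wait in queue (hours) for an M/M/c system: arrival rate lam,
    service rate mu per server (AER), c servers."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c
    assert rho < 1, "queue is unstable"
    p0_inv = sum(a**k / factorial(k) for k in range(c)) + \
             a**c / (factorial(c) * (1 - rho))
    p_wait = a**c / (factorial(c) * (1 - rho)) / p0_inv  # Erlang C formula
    return p_wait / (c * mu - lam)

# Illustrative numbers: ~33,000 scopes/year over ~3,000 operating hours
# -> ~11/hr; 30-minute AER cycles (mu = 2/hr); size the number of AERs.
for c in (6, 7, 8):
    print(c, "AERs -> mean wait", round(60 * erlang_c_wait(11.0, 2.0, c), 1), "min")
```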

  16. A Bayesian Approach to Integrate Real-Time Data into Probabilistic Risk Analysis of Remediation Efforts in NAPL Sites

    Science.gov (United States)

    Fernandez-Garcia, D.; Sanchez-Vila, X.; Bolster, D.; Tartakovsky, D. M.

    2010-12-01

    The release of non-aqueous phase liquids (NAPLs) such as petroleum hydrocarbons and chlorinated solvents in the subsurface is a severe source of groundwater and vapor contamination. Because these liquids are essentially immiscible due to low solubility, these contaminants get slowly dissolved in groundwater and/or volatilized in the vadoze zone threatening the environment and public health over a long period. Many remediation technologies and strategies have been developed in the last decades for restoring the water quality properties of these contaminated sites. The failure of an on-site treatment technology application is often due to the unnoticed presence of dissolved NAPL entrapped in low permeability areas (heterogeneity) and/or the remaining of substantial amounts of pure phase after remediation efforts. Full understanding of the impact of remediation efforts is complicated due to the role of many interlink physical and biochemical processes taking place through several potential pathways of exposure to multiple receptors in a highly unknown heterogeneous environment. Due to these difficulties, the design of remediation strategies and definition of remediation endpoints have been traditionally determined without quantifying the risk associated with the failure of such efforts. We conduct a probabilistic risk analysis (PRA) of the likelihood of success of an on-site NAPL treatment technology that easily integrates all aspects of the problem (causes, pathways, and receptors) without doing extensive modeling. Importantly, the method is further capable to incorporate the inherent uncertainty that often exist in the exact location where the dissolved NAPL plume leaves the source zone. This is achieved by describing the failure of the system as a function of this source zone exit location, parameterized in terms of a vector of parameters. Using a Bayesian interpretation of the system and by means of the posterior multivariate distribution, the failure of the

  17. Optimizing transformations for automated, high throughput analysis of flow cytometry data.

    Science.gov (United States)

    Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael

    2010-11-04

    In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce

  18. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Weng Andrew

    2010-11-01

    Full Text Available Abstract Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized
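
    The parameter-optimization idea common to both versions of this record can be sketched for the hyperbolic arcsine transform: choose the cofactor that maximizes a normal log-likelihood of the transformed data, including the transform's Jacobian. This is a simplified stand-in for the flowCore-based implementation, run on synthetic skewed data.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Optimize the cofactor b of an arcsinh transform y = arcsinh(x / b) by
# maximizing a normal log-likelihood of the transformed data -- the same
# idea as the paper's MLE criteria, though not its exact implementation.
def neg_log_like(log_b, x):
    b = np.exp(log_b)
    y = np.arcsinh(x / b)
    # Jacobian of the transform: |dy/dx| = 1 / sqrt(x^2 + b^2)
    log_jac = -0.5 * np.log(x**2 + b**2)
    return -(norm.logpdf(y, y.mean(), y.std()).sum() + log_jac.sum())

rng = np.random.default_rng(5)
x = np.exp(rng.normal(4, 1, 5000)) + rng.normal(0, 30, 5000)  # skewed "FI" data

res = minimize_scalar(lambda lb: neg_log_like(lb, x), bounds=(-2, 10),
                      method="bounded")
print("optimized cofactor b =", round(float(np.exp(res.x)), 2))
```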

  19. Thermal performance analysis of optimized hexagonal finned heat sinks in impinging air jet

    Energy Technology Data Exchange (ETDEWEB)

    Yakut, Kenan, E-mail: kyakut@atauni.edu.tr [Department of Mechanical Engineering, Faculty of Engineering, Atatürk University, 25100, Erzurum (Turkey); Yeşildal, Faruk, E-mail: fayesildal@agri.edu.tr [Department of Mechanical Engineering, Faculty of Patnos Sultan Alparslan Natural Sciences and Engineering, Ağrı İbrahim Çeçen University, 04100, Ağrı (Turkey); Karabey, Altuğ, E-mail: akarabey@yyu.edu.tr [Department of Machinery and Metal Technology, Erciş Vocational High School, Yüzüncü Yıl University, 65400, Van (Turkey); Yakut, Rıdvan, E-mail: ryakut@kafkas.edu.tr [Department of Mechanical Engineering, Faculty of Engineering and Architecture, Kafkas University, 36100, Kars (Turkey)

    2016-04-18

    In this study, a thermal performance analysis of hexagonal finned heat sinks optimized according to the Taguchi experimental design and optimization method was carried out. Experiments of air jet impingement on heated hexagonal finned heat sinks were conducted adhering to the L18(2^1 × 3^6) orthogonal array test plan. Optimum geometries were determined and named OH-1 and OH-2. Enhancement efficiency based on the first law of thermodynamics was analyzed for the optimized heat sinks with 100, 150, and 200 mm hexagonal fin heights. Nusselt correlations were derived, and the variation of enhancement efficiency with Reynolds number is presented in η–Re plots.

  20. Analysis and optimization of fault-tolerant embedded systems with hardened processors

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Polian, Ilia; Pop, Paul

    2009-01-01

    In this paper we propose an approach to the design optimization of fault-tolerant hard real-time embedded systems, which combines hardware and software fault tolerance techniques. We trade off between selective hardening in hardware and process reexecution in software to provide the required levels of fault tolerance against transient faults with the lowest possible system costs. We propose a system failure probability (SFP) analysis that connects the hardening level with the maximum number of reexecutions in software. We present design optimization heuristics to select the fault-tolerant architecture and decide process mapping such that the system cost is minimized, deadlines are satisfied, and the reliability requirements are fulfilled.

  1. The impact of effort-reward imbalance and learning motivation on teachers' sickness absence.

    Science.gov (United States)

    Derycke, Hanne; Vlerick, Peter; Van de Ven, Bart; Rots, Isabel; Clays, Els

    2013-02-01

    The aim of this study was to analyse the impact of effort-reward imbalance and learning motivation on sickness absence duration and sickness absence frequency among beginning teachers in Flanders (Belgium). A total of 603 recently graduated teachers participated in this study. Effort-reward imbalance and learning motivation were assessed by means of self-administered questionnaires. Prospective data on registered sickness absence during a 12-month follow-up were collected. Multivariate logistic regression analyses were performed. An imbalance between high efforts and low rewards (extrinsic hypothesis) was associated with longer sickness absence duration and more frequent absences. A low level of learning motivation (intrinsic hypothesis) was not associated with longer sickness absence duration but was significantly positively associated with sickness absence frequency. No significant results were obtained for the interaction hypothesis between imbalance and learning motivation. Further research is needed to deepen our understanding of the impact of psychosocial work conditions and personal resources on both sickness absence duration and frequency. Specifically, attention could be given to optimizing or reducing efforts spent at work, increasing rewards and stimulating learning motivation to influence sickness absence. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Software integration for automated stability analysis and design optimization of a bearingless rotor blade

    Science.gov (United States)

    Gunduz, Mustafa Emre

    Many government agencies and corporations around the world have found the unique capabilities of rotorcraft indispensable. Incorporating such capabilities into rotorcraft design poses extra challenges because it is a complicated multidisciplinary process. The concept of applying several disciplines to the design and optimization processes may not be new, but it does not currently seem to be widely accepted in industry. The reason for this might be the lack of well-known tools for realizing a complete multidisciplinary design and analysis of a product. This study proposes a method that enables engineers in some design disciplines to perform a fairly detailed analysis and optimization of a design using commercially available software as well as codes developed at Georgia Tech. The ultimate goal is that, once the system is set up properly, the CAD model of the design, including all subsystems, will be automatically updated as soon as a new part or assembly is added to the design, or when an analysis and/or an optimization is performed and the geometry needs to be modified. Designers and engineers will be involved only in checking the latest design for errors or adding/removing features. Such a design process will take dramatically less time to complete; therefore, it should reduce development time and costs. The optimization method is demonstrated on an existing helicopter rotor originally designed in the 1960s. The rotor is already an effective design with novel features. However, application of the optimization principles together with high-speed computing resulted in an even better design. The objective function to be minimized is related to the vibrations of the rotor system under gusty wind conditions. The design parameters are all continuous variables. Optimization is performed in a number of steps. First, the most crucial design variables of the objective function are identified. With these variables, the Latin Hypercube Sampling method is used
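
    Latin Hypercube Sampling spreads a small number of expensive analysis runs evenly over the design space. A minimal sketch using SciPy's quasi-Monte Carlo module is shown below; the three design variables and their bounds are invented placeholders, not the study's actual rotor parameters.

        import numpy as np
        from scipy.stats import qmc

        # 20 space-filling samples over three hypothetical blade design variables
        sampler = qmc.LatinHypercube(d=3, seed=42)
        unit_samples = sampler.random(n=20)            # points in [0, 1)^3

        lower = [0.10, 1.0, -5.0]   # e.g. chord [m], stiffness scale, twist [deg]
        upper = [0.30, 2.0,  5.0]
        designs = qmc.scale(unit_samples, lower, upper)

        for row in designs[:5]:                        # each row is one candidate design
            print(np.round(row, 3))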

  3. A new approach to nuclear reactor design optimization using genetic algorithms and regression analysis

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.

    2015-01-01

    Highlights: • This paper presents a new method useful for the optimization of complex dynamic systems. • The method uses the strengths of genetic algorithms (GA) and regression splines. • The method is applied to the design of a gas cooled fast breeder reactor. • Tools like Java and R, and codes like MCNP and Matlab, are used in this research. - Abstract: A module-based optimization method using genetic algorithms (GA) and multivariate regression analysis has been developed to optimize a set of parameters in the design of a nuclear reactor. GA simulates natural evolution to perform optimization, and is widely used in recent times by the scientific community. The GA evolves a population of random solutions toward the optimal solution of a specific problem. In this work, we have developed a genetic algorithm to determine the values of a set of nuclear reactor parameters for the design of a gas cooled fast breeder reactor core, including a basic thermal–hydraulics analysis and energy transfer. Multivariate regression is implemented using regression splines (RS). Reactor designs are usually complex, and a simulation needs a significantly large amount of time to execute, so the direct use of GA or any other global optimization technique is not feasible; therefore, we present a new method of using RS in conjunction with GA. Owing to the RS, we do not need to run the neutronics simulation for all the inputs generated by the GA module; rather, we run the simulations for a predefined set of inputs, build a multivariate regression fit to the input and output parameters, and then use this fit to predict the output parameters for the inputs generated by GA. The reactor parameters are the radius of a fuel pin cell, the isotopic enrichment of the fissile material in the fuel, the mass flow rate of the coolant, and the temperature of the coolant at the core inlet. The optimization objectives for the reactor core are high breeding of U-233 and Pu-239 in
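
    The surrogate-assisted loop the abstract describes can be sketched in a few lines. Below, a cubic polynomial stands in for the paper's multivariate regression splines, a one-variable toy function stands in for the expensive neutronics run, and the GA operators and settings are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def expensive_simulation(x):
            # Stand-in for a costly neutronics/thermal-hydraulics run
            return -(x - 0.6) ** 2 + 0.05 * np.sin(25 * x)

        # 1) Run the expensive code only on a predefined set of inputs
        X_train = np.linspace(0, 1, 15)
        y_train = expensive_simulation(X_train)

        # 2) Fit a cheap regression surrogate to the input/output pairs
        coeffs = np.polyfit(X_train, y_train, deg=3)
        surrogate = lambda x: np.polyval(coeffs, x)

        # 3) The GA evaluates the surrogate instead of the simulator
        pop = rng.random(40)
        for _ in range(60):
            fitness = surrogate(pop)
            parents = pop[np.argsort(fitness)[-20:]]                 # selection
            children = (rng.choice(parents, 20) + rng.choice(parents, 20)) / 2
            children += rng.normal(0, 0.02, 20)                      # mutation
            pop = np.clip(np.concatenate([parents, children]), 0, 1)

        best = pop[np.argmax(surrogate(pop))]
        print(f"surrogate optimum ~ {best:.3f}; verify with a real run: "
              f"{expensive_simulation(best):.4f}")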

  4. The contribution of particle swarm optimization to three-dimensional slope stability analysis.

    Science.gov (United States)

    Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.
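
    For readers unfamiliar with the optimizer itself, a bare-bones PSO sketch is given below. In the paper each particle encodes a rotating ellipsoidal slip surface and fitness is the factor of safety; here a two-variable toy objective and standard inertia/acceleration coefficients are used as assumptions.

        import numpy as np

        rng = np.random.default_rng(7)

        def objective(x):
            # Toy stand-in for a factor-of-safety evaluation
            return np.sum((x - np.array([1.0, -2.0])) ** 2, axis=1)

        n, dim, iters = 30, 2, 100
        w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive and social weights

        pos = rng.uniform(-5, 5, (n, dim))
        vel = np.zeros((n, dim))
        pbest, pbest_val = pos.copy(), objective(pos)
        gbest = pbest[np.argmin(pbest_val)]

        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            val = objective(pos)
            better = val < pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmin(pbest_val)]

        print("minimum found at:", gbest.round(3))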

  5. The Contribution of Particle Swarm Optimization to Three-Dimensional Slope Stability Analysis

    Science.gov (United States)

    A Rashid, Ahmad Safuan; Ali, Nazri

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes. PMID:24991652

  6. The Contribution of Particle Swarm Optimization to Three-Dimensional Slope Stability Analysis

    Directory of Open Access Journals (Sweden)

    Roohollah Kalatehjari

    2014-01-01

    Full Text Available Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.

  7. Optimal patch code design via device characterization

    Science.gov (United States)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.

  8. Maximum effort in the minimum-effort game

    Czech Academy of Sciences Publication Activity Database

    Engelmann, Dirk; Normann, H.-T.

    2010-01-01

    Roč. 13, č. 3 (2010), s. 249-259 ISSN 1386-4157 Institutional research plan: CEZ:AV0Z70850503 Keywords : minimum-effort game * coordination game * experiments * social capital Subject RIV: AH - Economics Impact factor: 1.868, year: 2010

  9. Design and analysis of stochastic DSS query optimizers in a distributed database system

    Directory of Open Access Journals (Sweden)

    Manik Sharma

    2016-07-01

    Full Text Available Query optimization is a stimulating task of any database system. A number of heuristics have been applied in recent times, which proposed new algorithms for substantially improving the performance of a query. The hunt for a better solution still continues. The imperishable developments in the field of Decision Support System (DSS) databases are presenting data at an exceptional rate. The massive volume of DSS data is consequential only when it can be accessed and analyzed by distinctive researchers. Here, an innovative stochastic framework of a DSS query optimizer is proposed to further optimize the design of existing genetic query optimization approaches. The results of the Entropy Based Restricted Stochastic Query Optimizer (ERSQO) are compared with the results of the Exhaustive Enumeration Query Optimizer (EAQO), the Simple Genetic Query Optimizer (SGQO), the Novel Genetic Query Optimizer (NGQO) and the Restricted Stochastic Query Optimizer (RSQO). In terms of Total Costs, EAQO outperforms SGQO, NGQO, RSQO and ERSQO. However, the stochastic approaches dominate in terms of runtime. The Total Costs produced by ERSQO are better than those of SGQO, NGQO and RSQO by 12%, 8% and 5% respectively. Moreover, the effect of replicating data on the Total Costs of DSS queries is also examined. In addition, the statistical analysis revealed a 2-tailed significant correlation between the number of join operations and the Total Costs of a distributed DSS query. Finally, in regard to the consistency of the stochastic query optimizers, the results of SGQO, NGQO, RSQO and ERSQO are 96.2%, 97.2%, 97.45% and 97.8% consistent respectively.

  10. Optimizing Mars Sphere of Influence Maneuvers for NASA's Evolvable Mars Campaign

    Science.gov (United States)

    Merrill, Raymond G.; Komar, D. R.; Chai, Patrick; Qu, Min

    2016-01-01

    NASA's Human Spaceflight Architecture Team is refining human exploration architectures that will extend human presence to the Martian surface. For both Mars orbital and surface missions, NASA's Evolvable Mars Campaign assumes that cargo and crew can be delivered repeatedly to the same destination. Up to this point, interplanetary trajectories have been optimized to minimize the total propulsive requirements of the in-space transportation systems, while the pre-deployed assets and surface systems are optimized to minimize their respective propulsive requirements separately from the in-space transportation system. There is a need to investigate the coupled problem of optimizing the interplanetary trajectory and optimizing the maneuvers within Mars's sphere of influence. This paper provides a description of the ongoing method development, analysis and initial results of the effort to resolve the discontinuity between the interplanetary trajectory and the Mars sphere of influence trajectories. Assessment of Phobos and Deimos orbital missions shows the in-space transportation and crew taxi allocations are adequate for missions in the 2030s. Because the surface site has yet to be selected, the transportation elements must be sized to provide surface access to all landing sites under consideration. Analysis shows that access to sites from elliptical parking orbits with a lander designed for a sub-periapsis landing location is either infeasible or requires expensive orbital maneuvers for many latitude ranges. In this case the locus of potential arrival perigee vectors identifies the maximum north or south latitudes accessible. Higher arrival velocities can decrease reorientation costs and increase landing site availability. Utilizing hyperbolic arrival and departure vectors in the optimization scheme will increase transportation site accessibility and provide more optimal solutions.

  11. Optimization Studies for the H $\\rightarrow$ WW Boosted Decision Tree Analysis

    CERN Document Server

    Strickland, Jessica

    2014-01-01

    The aim of this project was to follow the ATLAS $H \rightarrow WW$ BDT analysis and try to optimize training variables, pre-selection cuts, and training parameters such as the depth \cite{orig, spin}. Machine learning was done with Monte Carlo samples of the $H \rightarrow W^+W^- \rightarrow e\mu\nu\nu$ channel.

  12. The optimization of the analysis of chlorine-36 in urine

    International Nuclear Information System (INIS)

    Joseph, S.; Kramer, G.H.

    1982-02-01

    A method has been developed and optimized for the analysis of chlorine-36 in urine. Problems such as sample size, photodecomposition of silver chloride and anion interferences have been solved and are discussed in detail. The analysis is performed by first removing interfering phosphates and sulphates from an untreated urine sample and isolating the chlorine-36 as silver chloride. The precipitate is counted in a planchet counter. Recoveries are estimated to be 90 ± 5%, with a detection limit of 3 pCi (0.1 Bq) for a routine sample (counting time 10 minutes, counting efficiency 10%, sample size 100 mL)

  13. Portfolio optimization and performance evaluation

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn; Christensen, Michael

    2013-01-01

    Based on an exclusive business-to-business database comprising nearly 1,000 customers, the applicability of portfolio analysis is documented, and it is examined how such an optimization analysis can be used to explore the growth potential of a company. As opposed to previous analyses, optimal customer portfolios are determined, and it is shown how marketing decision-makers can use this information in their marketing strategies to optimize the revenue growth of the company. Finally, our analysis is the first to apply portfolio-based methods to measure customer performance, and it is shown how these performance measures complement the optimization analysis.

  14. Optimization of a truck-drone in tandem delivery network using k-means and genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ferrandez, S. M.; Harbison, T.; Weber, T.; Sturges, R.; Rich, R.

    2016-07-01

    The purpose of this paper is to investigate the effectiveness of implementing unmanned aerial delivery vehicles in delivery networks. We investigate the notion of reduced overall delivery time, energy, and costs for a truck-drone network by comparing the in-tandem system with a stand-alone delivery effort. The objectives are (1) to investigate the time, energy, and costs associated with a truck-drone delivery network compared to a standalone truck or drone, (2) to propose an optimization algorithm that determines the optimal number of launch sites and their locations given delivery requirements and drones per truck, and (3) to develop closed-form mathematical estimations for the optimal number of launch locations, the optimal total time, and the associated cost of the system. The algorithm computes the minimal delivery time using K-means clustering to find launch locations and a genetic algorithm to solve the truck route as a traveling salesman problem (TSP). The optimal solution is determined by finding the minimum cost associated with the parabolic convex cost function, with K-means choosing the launch locations and the genetic algorithm routing the truck between them. Results show improvements for in-tandem delivery efforts as opposed to standalone systems. Further, multiple drones per truck are more optimal and contribute to savings in both energy and time. For this, we sampled various initialization variables to derive closed-form mathematical solutions for the problem. Ultimately, this provides the necessary analysis of an integrated truck-drone delivery system which could be implemented by a company in order to maximize deliveries while minimizing time and energy. The closed-form mathematical solutions can be used as close estimators of final costs and time. (Author)
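
    The two-stage structure (cluster, then route) is easy to prototype. The sketch below uses scikit-learn's KMeans for launch sites and a small order-crossover GA for the truck tour; the customer coordinates, cluster count and GA settings are all assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        deliveries = rng.uniform(0, 10, (60, 2))     # hypothetical drop-off points

        # Stage 1: K-means picks k launch sites; drones serve each cluster locally
        k = 6
        launch_sites = KMeans(n_clusters=k, n_init=10,
                              random_state=0).fit(deliveries).cluster_centers_

        # Stage 2: a tiny GA orders the launch sites as a closed truck tour (TSP)
        def tour_length(order):
            pts = launch_sites[order]
            return np.sum(np.linalg.norm(np.diff(pts, axis=0, append=pts[:1]), axis=1))

        def crossover(a, b):
            cut = rng.integers(1, k)
            head = list(a[:cut])
            return np.array(head + [g for g in b if g not in head])

        pop = [rng.permutation(k) for _ in range(40)]
        for _ in range(200):
            pop.sort(key=tour_length)
            elite, children = pop[:10], []
            for _ in range(30):
                child = crossover(elite[rng.integers(10)], elite[rng.integers(10)])
                if rng.random() < 0.3:               # swap mutation
                    i, j = rng.integers(k, size=2)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            pop = elite + children

        print("best truck route:", pop[0], "length:", round(tour_length(pop[0]), 2))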

  15. Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis

    Science.gov (United States)

    2014-09-01

    (ER-200717) Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis

  16. Parametric analysis of energy quality management for district in China using multi-objective optimization approach

    International Nuclear Information System (INIS)

    Lu, Hai; Yu, Zitao; Alanne, Kari; Xu, Xu; Fan, Liwu; Yu, Han; Zhang, Liang; Martinac, Ivo

    2014-01-01

    Highlights: • A time-effective multi-objective design optimization scheme is proposed. • The scheme aims at exploring a suitable 3E energy system for the specific case. • A realistic case located in China is used for the analysis. • A parametric study is performed to test the effects of different parameters. - Abstract: Due to increasing energy demands and global warming, energy quality management (EQM) for districts has been gaining importance over the last few decades. The evaluation of the optimum energy systems for specific districts is an essential part of EQM. This paper presents an in-depth analysis of the optimum energy systems for a district located in China. A multi-objective optimization approach based on a Genetic Algorithm (GA) is proposed for the analysis. The optimization process aims to search for suitable 3E (minimum economic cost and environmental burden as well as maximum efficiency) energy systems. Here, life cycle CO₂ equivalent (LCCO₂), life cycle cost (LCC) and exergy efficiency (EE) are set as the optimization objectives. The optimum energy systems for the Chinese case are then presented. The final step is to investigate the effects of different energy parameters. The results show that the optimum energy systems might vary significantly depending on some parameters

  17. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    Science.gov (United States)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

    Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
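
    The level-1 parameter search is small enough to brute-force numerically. The sketch below simulates a p = 1 QAOA state vector exactly for MaxCut on an 8-node ring of disagrees and grid-searches (γ, β); it is a numerical illustration only, not the paper's fermionic analysis, and the grid resolution is an arbitrary choice.

        import numpy as np
        from itertools import product
        from functools import reduce

        n = 8
        edges = [(i, (i + 1) % n) for i in range(n)]     # ring of disagrees

        # Cut value C(z) for every computational basis state
        bits = np.array(list(product([0, 1], repeat=n)))
        spins = 1 - 2 * bits
        cut = sum((1 - spins[:, i] * spins[:, j]) / 2 for i, j in edges)

        plus = np.full(2 ** n, 2 ** (-n / 2))            # uniform |+>^n state

        def expectation(gamma, beta):
            psi = np.exp(-1j * gamma * cut) * plus       # e^{-i gamma C} is diagonal
            rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                           [-1j * np.sin(beta), np.cos(beta)]])
            psi = reduce(np.kron, [rx] * n) @ psi        # mixer e^{-i beta X} per qubit
            return float(np.abs(psi) ** 2 @ cut)

        grid = np.linspace(0, np.pi, 30)
        best = max((expectation(g, b), g, b) for g in grid for b in grid / 2)
        print(f"level-1 optimum: <C> = {best[0]:.3f} of {n} edges "
              f"at gamma = {best[1]:.2f}, beta = {best[2]:.2f}")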

  18. Optimal siting of capacitors in radial distribution network using Whale Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    D.B. Prakash

    2017-12-01

    Full Text Available In present days, continuous effort is being made to bring down the line losses of electrical distribution networks. Proper allocation of capacitors is therefore of utmost importance, because it helps in reducing line losses and maintaining bus voltage. This in turn results in improving the stability and reliability of the system. In this paper the Whale Optimization Algorithm (WOA) is used to find the optimal sizing and placement of capacitors for a typical radial distribution system. Multiple objectives such as operating cost reduction and power loss minimization, with inequality constraints on voltage limits, are considered, and the proposed algorithm is validated by applying it to standard radial systems: the IEEE-34 bus and IEEE-85 bus radial distribution test systems. The results obtained are compared with those of existing algorithms. The results show that the proposed algorithm is more effective in bringing down operating costs and in maintaining a better voltage profile. Keywords: Whale Optimization Algorithm (WOA), Optimal allocation and sizing of capacitors, Power loss reduction and voltage stability improvement, Radial distribution system, Operating cost minimization

  19. Electromagnetic Optimization Exploiting Aggressive Space Mapping

    DEFF Research Database (Denmark)

    Bandler, J. W.; Biernacki, R.; Chen, S.

    1995-01-01

    emerges after only six EM simulations with sparse frequency sweeps. Furthermore, less CPU effort is required to optimize the filter than is required by one single detailed frequency sweep. We also extend the SM concept to the parameter extraction phase, overcoming severely misaligned responses induced...

  20. Graph Analysis and Modularity of Brain Functional Connectivity Networks: Searching for the Optimal Threshold

    Directory of Open Access Journals (Sweden)

    Cécile Bordier

    2017-08-01

    Full Text Available Neuroimaging data can be represented as networks of nodes and edges that capture the topological organization of the brain connectivity. Graph theory provides a general and powerful framework to study these networks and their structure at various scales. By way of example, community detection methods have been widely applied to investigate the modular structure of many natural networks, including brain functional connectivity networks. Sparsification procedures are often applied to remove the weakest edges, which are the most affected by experimental noise, and to reduce the density of the graph, thus making it theoretically and computationally more tractable. However, weak links may also contain significant structural information, and procedures to identify the optimal tradeoff are the subject of active research. Here, we explore the use of percolation analysis, a method grounded in statistical physics, to identify the optimal sparsification threshold for community detection in brain connectivity networks. By using synthetic networks endowed with a ground-truth modular structure and realistic topological features typical of human brain functional connectivity networks, we show that percolation analysis can be applied to identify the optimal sparsification threshold that maximizes information on the networks' community structure. We validate this approach using three different community detection methods widely applied to the analysis of brain connectivity networks: Newman's modularity, InfoMap and Asymptotical Surprise. Importantly, we test the effects of noise and data variability, which are critical factors to determine the optimal threshold. This data-driven method should prove particularly useful in the analysis of the community structure of brain networks in populations characterized by different connectivity strengths, such as patients and controls.
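
    A percolation-style threshold sweep can be prototyped in a few lines with NetworkX: binarize the connectivity matrix at increasing thresholds and track the giant connected component. The toy matrix and the 90%-of-nodes criterion below are assumptions; the paper's criterion comes from percolation analysis proper.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(5)
        n = 60
        W = rng.random((n, n))
        W = (W + W.T) / 2                    # symmetric toy connectivity matrix
        np.fill_diagonal(W, 0)

        def giant_component_size(W, thr):
            # Keep only edges at or above the threshold, then measure the
            # largest connected component
            G = nx.from_numpy_array(np.where(W >= thr, W, 0.0))
            return max(len(c) for c in nx.connected_components(G))

        thresholds = np.linspace(0, 1, 101)
        sizes = [giant_component_size(W, t) for t in thresholds]

        # Last threshold at which the giant component still spans most nodes
        # (a simple proxy for the percolation point)
        idx = max(i for i, s in enumerate(sizes) if s >= 0.9 * n)
        print("percolation threshold ~", round(float(thresholds[idx]), 2))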

  1. Optimal climate policy is a utopia. From quantitative to qualitative cost-benefit analysis

    International Nuclear Information System (INIS)

    Van den Bergh, Jeroen C.J.M.

    2004-01-01

    The dominance of quantitative cost-benefit analysis (CBA) and optimality concepts in the economic analysis of climate policy is criticised. Among other things, it is argued to be based on a misplaced interpretation of policy for a complex climate-economy system as being analogous to individual inter-temporal welfare optimisation. The transfer of quantitative CBA and optimality concepts reflects an overly ambitious approach that does more harm than good. An alternative approach is to focus attention on extreme events, structural change and complexity. It is argued that a qualitative rather than a quantitative CBA that takes account of these aspects can support the adoption of a minimax-regret approach or precautionary principle in climate policy. This means: implement stringent GHG reduction policies as soon as possible

  2. Runtime analysis of the 1-ANT ant colony optimizer

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Neumann, Frank; Sudholt, Dirk

    2011-01-01

    The runtime analysis of randomized search heuristics is a growing field where, in the last two decades, many rigorous results have been obtained. First runtime analyses of ant colony optimization (ACO) have been conducted only recently. In these studies, simple ACO algorithms such as the 1-ANT were analyzed. We supplement these theoretical results with experiments that give us a more detailed impression of the 1-ANT's performance. Furthermore, the experiments also deal with the question whether using many ant solutions in one iteration can decrease the total runtime.

  3. Human Hand Motion Analysis and Synthesis of Optimal Power Grasps for a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Francesca Cordella

    2014-03-01

    Full Text Available Biologically inspired robotic systems can find important applications in biomedical robotics, since studying and replicating human behaviour can provide new insights into motor recovery, functional substitution and human-robot interaction. The analysis of human hand motion is essential for collecting information about human hand movements useful for generalizing reaching and grasping actions on a robotic system. This paper focuses on the definition and extraction of quantitative indicators for describing optimal hand grasping postures and replicating them on an anthropomorphic robotic hand. A motion analysis has been carried out on six healthy human subjects performing a transverse volar grasp. The extracted indicators point to invariant grasping behaviours between the involved subjects, thus providing some constraints for identifying the optimal grasping configuration. Hence, an optimization algorithm based on the Nelder-Mead simplex method has been developed for determining the optimal grasp configuration of a robotic hand, grounded on the aforementioned constraints. It is characterized by a reduced computational cost. The grasp stability has been tested by introducing a quality index that satisfies the form-closure property. The grasping strategy has been validated by means of simulation tests and experimental trials on an arm-hand robotic system. The obtained results have shown the effectiveness of the extracted indicators in reducing the complexity of the non-linear optimization problem, leading to the synthesis of a grasping posture able to replicate human behaviour while ensuring grasp stability. The experimental results have also highlighted the limitations of the adopted robotic platform (mainly due to its mechanical structure) in achieving the optimal grasp configuration.
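
    The Nelder-Mead step is straightforward to reproduce with SciPy. In the sketch below, a made-up forward map from three joint angles to three grasp indicators stands in for the paper's hand model, and the target indicator values are placeholders.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical invariant indicators extracted from the human trials
        target = np.array([0.35, 0.80, 0.55])

        def grasp_cost(q):
            # Invented forward map from 3 joint angles to the 3 indicators
            f = np.array([np.sin(q[0]) * np.cos(q[1]),
                          np.cos(q[0] - q[2]),
                          0.5 * (np.sin(q[1]) + np.sin(q[2]))])
            return np.sum((f - target) ** 2)

        res = minimize(grasp_cost, x0=np.array([0.2, 0.2, 0.2]),
                       method="Nelder-Mead",
                       options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 2000})
        print("optimal joint angles:", res.x.round(3), "cost:", f"{res.fun:.2e}")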

  4. Studies on multivariate autoregressive analysis using synthesized reactor noise-like data for optimal modelling

    Energy Technology Data Exchange (ETDEWEB)

    Ciftcioglu, O.; Hoogenboom, J.E.; Dam, H. van

    1988-01-01

    Studies on multivariate autoregressive (MAR) analysis are carried out to choose the parameters for optimally modelling the data obtained from various sensors. Accordingly, the roles of the parameters in the analysis results are identified and the related ambiguities are reduced. Experimental investigations are carried out by means of synthesized reactor noise-like data obtained from a digital simulator providing simulated stochastic signals of an operating nuclear reactor, so that the simulator constitutes a favourable tool for the present studies. As the system is well defined, with a known structure, a precise comparison of the MAR analysis results with the true values is performed. With the help of the information gained through these studies, the conditions to be observed for optimal signal processing in MAR modelling are determined. Although the parameters involved are interrelated and have to be given values suitable for the particular application at hand, some criteria, namely memory time and sample record length, play an essential role in AR modelling, and they are found to be applicable to each individual case for the establishment of optimality.
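
    In modern terms this is vector-autoregressive (VAR/MAR) order and parameter selection, which the sketch below reproduces with statsmodels on synthesized noise-like signals; the toy two-sensor process and the AIC-based order choice are assumptions standing in for the paper's criteria.

        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(11)
        T = 2000
        noise = rng.normal(size=(T, 2))
        data = np.zeros((T, 2))
        for t in range(2, T):                  # toy two-sensor AR(2) process
            data[t] = (0.5 * data[t - 1]
                       + np.array([0.2, -0.1]) * data[t - 2, ::-1]
                       + noise[t])

        model = VAR(data)
        order = model.select_order(maxlags=10) # information-criterion table
        results = model.fit(order.aic)         # fit at the AIC-optimal lag
        print("selected lag order:", results.k_ar)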

  5. Studies on multivariate autoregressive analysis using synthesized reactor noise-like data for optimal modelling

    International Nuclear Information System (INIS)

    Ciftcioglu, O.

    1988-01-01

    Studies on multivariate autoregressive (MAR) analysis are carried out to choose the parameters for optimally modelling the data obtained from various sensors. Accordingly, the roles of the parameters in the analysis results are identified and the related ambiguities are reduced. Experimental investigations are carried out by means of synthesized reactor noise-like data obtained from a digital simulator providing simulated stochastic signals of an operating nuclear reactor, so that the simulator constitutes a favourable tool for the present studies. As the system is well defined, with a known structure, a precise comparison of the MAR analysis results with the true values is performed. With the help of the information gained through these studies, the conditions to be observed for optimal signal processing in MAR modelling are determined. Although the parameters involved are interrelated and have to be given values suitable for the particular application at hand, some criteria, namely memory time and sample record length, play an essential role in AR modelling, and they are found to be applicable to each individual case for the establishment of optimality. (author)

  6. An Optimized DNA Analysis Workflow for the Sampling, Extraction, and Concentration of DNA obtained from Archived Latent Fingerprints.

    Science.gov (United States)

    Solomon, April D; Hytinen, Madison E; McClain, Aryn M; Miller, Marilyn T; Dawson Cruz, Tracey

    2018-01-01

    DNA profiles have been obtained from fingerprints, but there is limited knowledge regarding DNA analysis from archived latent fingerprints: touch DNA "sandwiched" between adhesive and paper. Thus, this study sought to comparatively analyze a variety of collection and analytical methods in an effort to identify an optimized workflow for this specific sample type. Untreated and treated archived latent fingerprints were utilized to compare different biological sampling techniques, swab diluents, DNA extraction systems, DNA concentration practices, and post-amplification purification methods. Archived latent fingerprints disassembled and sampled via direct cutting, followed by DNA extraction using the QIAamp® DNA Investigator Kit and concentration with Centri-Sep™ columns, increased the odds of obtaining an STR profile. Using the recommended DNA workflow, 9 of the 10 samples provided STR profiles, which included 7-100% of the expected STR alleles and two full profiles. Thus, with carefully selected procedures, archived latent fingerprints can be a viable DNA source for criminal investigations, including cold/postconviction cases. © 2017 American Academy of Forensic Sciences.

  7. Optimization analysis of propulsion motor control efficiency

    Directory of Open Access Journals (Sweden)

    CAI Qingnan

    2017-12-01

    Full Text Available [Objectives] This paper aims to strengthen the control effect of propulsion motors and decrease the energy used during actual control procedures. [Methods] Based on the traditional propulsion motor equivalent circuit, we add the iron-loss current component, introduce the definition of the power matching ratio, calculate the highest efficiency of a motor at a given speed, and discuss the flux corresponding to the power matching ratio with the highest efficiency. An efficiency optimization control module is added to the original motor vector control scheme so as to achieve motor efficiency optimization and energy conservation. [Results] MATLAB/Simulink simulation data show that the efficiency optimization control method is suitable for most conditions. The operating efficiency of the improved motor model is significantly higher than that of the original motor model, and its dynamic performance is good. [Conclusions] Our motor efficiency optimization control method can be applied in engineering to achieve energy conservation.

  8. Information entropies in antikaon-nucleon scattering and optimal state analysis

    International Nuclear Information System (INIS)

    Ion, D.B.; Ion, M.L.; Petrascu, C.

    1998-01-01

    It is known that Jaynes interpreted the entropy as the expected self-information of a class of mutually exclusive and exhaustive events, while the probability is considered to be the rational degree of belief we assign to events based on available experimental evidence. The axiomatic derivation of Jaynes' principle of maximum entropy, as well as of the Kullback principle of minimum cross-entropy, has been reported. Moreover, the optimal states in the Hilbert space of the scattering amplitude, which are analogous to the coherent states from the Hilbert space of the wave functions, were introduced and developed. The possibility that each optimal state possesses a specific minimum entropic uncertainty relation similar to that of the coherent states was recently conjectured. In fact, the (angle and angular momentum) information entropies, as well as the entropic angle-angular momentum uncertainty relations, in hadron-hadron scattering were introduced. The experimental information entropies for pion-nucleon scattering were calculated by using the available phase shift analyses. These results were compared with the information entropies of the optimal states. The optimal state dominance in pion-nucleon scattering was then systematically observed for all P_LAB = 0.02–10 GeV/c. Also, it was shown that the angle-angular momentum entropic uncertainty relations are satisfied with high accuracy by all the experimental information entropies. In this paper the (angle and angular momentum) information entropies of hadron-hadron scattering are experimentally investigated by using the antikaon-nucleon phase shift analysis. It is shown that the experimental entropies are in agreement with the information entropies of the optimal states. The results obtained in this paper can be explained not only by the presence of an optimal background which accompanies the production of the elementary resonances but also by the presence of the optimal resonances. On the other hand

  9. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization.

    Directory of Open Access Journals (Sweden)

    Devaraj Jayachandran

    Full Text Available 6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite, 6-thioguanine nucleotide (6-TGN), through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP's widespread use and observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarcity of data in clinical settings, a global sensitivity analysis based model reduction approach is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on D-optimality criteria was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug instead of the traditional standard-dose-for-all approach.

  10. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization

    Science.gov (United States)

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, auto immune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite 6-thioguanine nucleotide (6-TGN) through enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP’s widespread use and observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted on pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarcity of data in clinical settings, a global sensitivity analysis based model reduction approach is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on D-optimality criteria was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of biological chain-of response (i.e. gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient’s ability to metabolize the drug instead of the traditional standard-dose-for-all approach. PMID:26226448
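
    To make the control idea concrete, here is a toy receding-horizon dosing loop: a one-compartment metabolite balance stands in for the paper's patient-specific 6-MP model, and the rates, dose grid and therapeutic window are invented placeholders.

        import numpy as np

        k_in, k_out = 0.2, 0.05      # assumed formation/elimination rates (1/day)
        window = (200.0, 400.0)      # assumed therapeutic 6-TGN window

        def simulate(c0, doses):
            # Predict daily concentrations under a candidate dose sequence
            c, out = c0, []
            for d in doses:
                c = c + k_in * d - k_out * c   # one Euler step per day
                out.append(c)
            return np.array(out)

        def choose_dose(c0, horizon=5, candidates=np.arange(0, 101, 25)):
            # Constant dose over the horizon with the fewest window violations
            def violations(d):
                c = simulate(c0, [d] * horizon)
                return np.sum((c < window[0]) | (c > window[1]))
            return min(candidates, key=violations)

        concentration = 150.0        # start below the window: controller doses up
        for day in range(10):        # apply only the first dose, then re-plan
            dose = choose_dose(concentration)
            concentration = simulate(concentration, [dose])[0]
            print(f"day {day}: dose = {dose:3.0f} mg, "
                  f"predicted 6-TGN = {concentration:6.1f}")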

  11. A Bioeconomic Analysis of Traditional Fisheries in the Red Sea

    KAUST Repository

    Jin, Di

    2012-06-15

    We undertake a bioeconomic analysis of the aggregate traditional fisheries in the Northern and Central areas of the Red Sea off the coast of the Kingdom of Saudi Arabia (KSA). Results of our analysis, using a Fox model and the Clarke-Yoshimoto-Pooley (CY&P) estimation procedure, suggest that the aggregate traditional fisheries have been overfished since the early 1990s. The estimated stock size in recent years is as low as 6,400 MT, while the estimated stock size associated with the maximum economic yield (MEY) is 19,300 MT. The socially optimal level of fishing effort is about 139,000 days. Thus, the current effort level of 300,000 to 350,000 days constitutes a problem of overfishing. The estimated current total gross revenue from the traditional fisheries is Saudi Rials (SR) 147 million with zero net benefit. If total fishing effort were reduced to the socially optimal level, then we estimate gross revenue would be SR 167 million and the potential net benefit from the KSA Red Sea traditional fisheries could be as large as SR 111 million. Copyright © 2012 MRE Foundation, Inc.
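
    Under the Fox surplus-production model the equilibrium biomass at effort E is B(E) = K·exp(-qE/r), so sustainable yield is Y(E) = qEK·exp(-qE/r) and resource rent is pY(E) - cE. The sketch below locates the MEY effort and the open-access (zero-rent) effort numerically; all parameter values are round illustrative numbers, not the study's estimates.

        import numpy as np

        r, K, q = 0.4, 50_000.0, 2.0e-6   # assumed growth rate, carrying capacity (MT), catchability
        price, cost = 25_000.0, 400.0     # assumed SR per MT landed, SR per day of effort

        E = np.linspace(0, 500_000, 5001)             # fishing effort in days
        sustainable_yield = q * E * K * np.exp(-q * E / r)
        rent = price * sustainable_yield - cost * E   # equilibrium net benefit

        E_mey = E[np.argmax(rent)]                    # maximum economic yield effort
        E_open = E[np.where(rent[1:] <= 0)[0][0] + 1] # open access: rent dissipated
        print(f"MEY effort ~ {E_mey:,.0f} days, max rent ~ SR {rent.max():,.0f}")
        print(f"open-access effort ~ {E_open:,.0f} days (rent ~ 0)")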

  12. A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework

    Science.gov (United States)

    Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo

    An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc., in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, the Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. The identification and hierarchy of the framework requirements, and the corresponding solutions for the two reference MDO frameworks (a general one and an aircraft-oriented one), were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for the improvement of the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human interference.
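
    The AHP half of such an evaluation reduces to an eigenvector computation. The sketch below derives criterion weights from a pairwise comparison matrix and checks Saaty's consistency ratio; the criteria and the judgments in the matrix are made up for illustration.

        import numpy as np

        criteria = ["integration", "usability", "extensibility"]
        A = np.array([[1.0, 3.0, 5.0],     # pairwise judgments (1-9 scale)
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = int(np.argmax(eigvals.real))
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()            # principal eigenvector = priorities

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
        cr = ci / 0.58                         # random index RI = 0.58 for n = 3
        for name, w in zip(criteria, weights):
            print(f"{name}: {w:.3f}")
        print(f"consistency ratio = {cr:.3f} (acceptable if < 0.1)")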

  13. Optimal allocation of conservation effort among subpopulations of a threatened species: how important is patch quality?

    Science.gov (United States)

    Chauvenet, Aliénor L M; Baxter, Peter W J; McDonald-Madden, Eve; Possingham, Hugh P

    2010-04-01

    Money is often a limiting factor in conservation, and attempting to conserve endangered species can be costly. Consequently, a framework for optimizing fiscally constrained conservation decisions for a single species is needed. In this paper we find the optimal budget allocation among isolated subpopulations of a threatened species to minimize local extinction probability. We solve the problem using stochastic dynamic programming, derive a useful and simple alternative guideline for allocating funds, and test its performance using forward simulation. The model considers subpopulations that persist in habitat patches of differing quality, which in our model is reflected in different relationships between money invested and extinction risk. We discover that, in most cases, subpopulations that are less efficient to manage should receive more money than those that are more efficient to manage, due to higher investment needed to reduce extinction risk. Our simple investment guideline performs almost as well as the exact optimal strategy. We illustrate our approach with a case study of the management of the Sumatran tiger, Panthera tigris sumatrae, in Kerinci Seblat National Park (KSNP), Indonesia. We find that different budgets should be allocated to the separate tiger subpopulations in KSNP. The subpopulation that is not at risk of extinction does not require any management investment. Based on the combination of risks of extinction and habitat quality, the optimal allocation for these particular tiger subpopulations is an unusual case: subpopulations that occur in higher-quality habitat (more efficient to manage) should receive more funds than the remaining subpopulation that is in lower-quality habitat. Because the yearly budget allocated to the KSNP for tiger conservation is small, to guarantee the persistence of all the subpopulations that are currently under threat we need to prioritize those that are easier to save. When allocating resources among subpopulations
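
    A brute-force miniature of the allocation problem conveys the idea without the stochastic dynamic programming machinery: split a discrete budget among three patches so as to minimize the probability that any subpopulation goes extinct. The risk curves, quality values and budget are all invented for illustration.

        import numpy as np
        from itertools import product

        budget_units = 10                      # discrete units of annual funding

        def extinction_risk(quality, money):
            # Assumed diminishing-returns curve: higher-quality patches convert
            # money into risk reduction more efficiently
            return 0.5 * np.exp(-quality * money)

        qualities = [0.15, 0.35, 0.60]         # low- to high-quality habitat patches

        best = None
        for alloc in product(range(budget_units + 1), repeat=3):
            if sum(alloc) != budget_units:
                continue
            risks = [extinction_risk(q, m) for q, m in zip(qualities, alloc)]
            p_any = 1 - np.prod([1 - p for p in risks])   # independence assumed
            if best is None or p_any < best[0]:
                best = (p_any, alloc)

        print(f"optimal split {best[1]} gives P(any extinction) = {best[0]:.3f}")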

  14. Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, Douglas B.; Anderson, Dave M.; Belzer, David B.; Cort, Katherine A.; Dirks, James A.; Hostick, Donna J.

    2004-06-18

    The requirements of the Government Performance and Results Act (GPRA) of 1993 mandate the reporting of outcomes expected to result from programs of the Federal government. The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official metrics for its 11 major programs using its Office of Planning, Budget Formulation, and Analysis (OPBFA). OPBFA conducts an annual integrated modeling analysis to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. Two of EERE’s major programs include the Building Technologies Program (BT) and Office of Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports the OPBFA effort by developing the program characterizations and other market information affecting these programs that is necessary to provide input to the EERE integrated modeling analysis. Throughout the report we refer to these programs as “buildings-related” programs, because the approach is not limited in application to BT or WIP. To adequately support OPBFA in the development of official GPRA metrics, PNNL communicates with the various activities and projects in BT and WIP to determine how best to characterize their activities planned for the upcoming budget request. PNNL then analyzes these projects to determine what the results of the characterizations would imply for energy markets, technology markets, and consumer behavior. This is accomplished by developing nonintegrated estimates of energy, environmental, and financial benefits (i.e., outcomes) of the technologies and practices expected to result from the budget request. These characterizations and nonintegrated modeling results are provided to OPBFA as inputs to the official benefits estimates developed for the Federal Budget. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits

  15. On the complexity of determining tolerances for ε-optimal solutions to min-max combinatorial optimization problems

    NARCIS (Netherlands)

    Ghosh, D.; Sierksma, G.

    2000-01-01

    Sensitivity analysis of ε-optimal solutions is the problem of calculating the range within which a problem parameter may lie so that the given solution remains ε-optimal. In this paper we study the sensitivity analysis problem for ε-optimal solutions to combinatorial optimization problems with

  16. Intelligent flame analysis for an optimized combustion

    Energy Technology Data Exchange (ETDEWEB)

    Stephan Peper; Dirk Schmidt [ABB Utilities GmbH, Mainheimm (Germany)

    2003-07-01

    One of the primary challenges in the area of process control is to ensure that many competing optimization goals are accomplished at the same time and are considered in time. This paper describes a successful approach based on an advanced pattern-recognition technology and an intelligent optimization tool that model combustion processes more precisely and optimize them from a holistic view. 17 PowerPoint slides are also available in the proceedings. 5 figs., 1 tab.

  17. Working from Home - What is the Effect on Employees' Effort?

    OpenAIRE

    Rupietta, Kira; Beckmann, Michael

    2016-01-01

    This paper investigates how working from home affects employees' work effort. Employees who have the possibility to work from home have high autonomy in scheduling their work and are therefore assumed to have higher intrinsic motivation. Thus, we expect working from home to positively influence the work effort of employees. For the empirical analysis we use the German Socio-Economic Panel (SOEP). To account for self-selection into working locations we use an instrumental variable (IV) estim...

  18. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexity into simple formulations, thus largely reducing development effort. The book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.

  19. Sensitivity analysis and multidisciplinary optimization for aircraft design: Recent advances and results

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  20. Hearing Impairment and Cognitive Energy: The Framework for Understanding Effortful Listening (FUEL).

    Science.gov (United States)

    Pichora-Fuller, M Kathleen; Kramer, Sophia E; Eckert, Mark A; Edwards, Brent; Hornsby, Benjamin W Y; Humes, Larry E; Lemke, Ulrike; Lunner, Thomas; Matthen, Mohan; Mackersie, Carol L; Naylor, Graham; Phillips, Natalie A; Richter, Michael; Rudner, Mary; Sommers, Mitchell S; Tremblay, Kelly L; Wingfield, Arthur

    2016-01-01

    The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to early work on the effects of attention on perception, in which the term psychic energy was used for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.

  1. OPTIMIZATION OF A HPLC ANALYSIS METHOD FOR TAURINE AND CAFFEINE IN ENERGY DRINKS

    Directory of Open Access Journals (Sweden)

    RALUCA-IOANA [CHIRITA] TAMPU

    2018-03-01

    Full Text Available This paper presents the optimization of a rapid, inexpensive, reliable and selective isocratic high performance liquid chromatography (HPLC) method for the simultaneous determination of caffeine and taurine in energy drinks with two common detectors in series: an evaporative light scattering detector (ELSD) and an ultraviolet (UV) detector. Satisfactory analysis results were obtained on an Astec apHera NH2 column using methanol/water (30:70 v/v) as the mobile phase. The optimized method was used for the analysis of commercial energy drinks containing large amounts of carbohydrates (100 g·L-1) and considerably lower amounts of taurine and caffeine (4 and 0.6 g·L-1, respectively). The advantages of this method are its lack of preliminary sample treatment and the fact that only basic LC instrumentation is required.

  2. Solar Flare Prediction Science-to-Operations: the ESA/SSA SWE A-EFFort Service

    Science.gov (United States)

    Georgoulis, Manolis K.; Tziotziou, Konstantinos; Themelis, Konstantinos; Magiati, Margarita; Angelopoulou, Georgia

    2016-07-01

    We attempt a synoptic overview of the scientific origins of the Athens Effective Solar Flare Forecasting (A-EFFort) utility and the actions taken toward transitioning it into a pre-operational service of ESA's Space Situational Awareness (SSA) Programme. The preferred method for solar flare prediction will be highlighted, as well as key efforts to make it function in a fully automated environment by coupling calculations with near-realtime data-downloading protocols (from the Solar Dynamics Observatory [SDO] mission), pattern recognition (solar active-region identification) and optimization (magnetic connectivity by simulated annealing). In addition, the entire validation process of the service will be described and its results presented. We conclude by stressing the need for across-the-board efforts and synergistic work in order to bring science of potentially limited/restricted interest to a much broader impact, serving the best public interests. The above presentation was partially supported by the ESA/SSA SWE A-EFFort project, ESA Contract No. 4000111994/14/D/MRP. Special thanks go to the ESA Project Officers R. Keil, A. Glover, and J.-P. Luntama (ESOC), M. Bobra and C. Balmer of the SDO/HMI team at Stanford University, and M. Zoulias at the RCAAM of the Academy of Athens for valuable technical help.
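
    The record above names simulated annealing as the optimization step for magnetic connectivity. As general background only, the following minimal Python sketch shows the generic simulated-annealing loop; the toy objective, step size, and cooling schedule are illustrative assumptions, not the A-EFFort implementation.

```python
# Generic simulated-annealing loop of the kind named in the abstract above;
# the quadratic objective is a toy stand-in for the real connectivity cost.
import math
import random

def anneal(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = objective(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling          # geometric cooling schedule
    return best, fbest

print(anneal(lambda x: (x - 2.0) ** 2, x0=10.0))
```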

  3. Supply Chain Coordination under Trade Credit and Quantity Discount with Sales Effort Effects

    Directory of Open Access Journals (Sweden)

    Zhihong Wang

    2018-01-01

    Full Text Available The purpose of this paper is to investigate the role of trade credit and quantity discount in supply chain coordination when the sales effort effect on market demand is considered. In this paper, we consider a two-echelon supply chain consisting of a single retailer ordering a single product from a single manufacturer. Market demand is stochastic and is influenced by retailer sales effort. We formulate an analytical model based on a single trade credit and find that the single trade credit cannot achieve the perfect coordination of the supply chain. Then, we develop a hybrid quantitative analytical model for supply chain coordination by coherently integrating incentives of trade credit and quantity discount with sales effort effects. The results demonstrate that, provided that the discount rate satisfies certain conditions, the proposed hybrid model combining trade credit and quantity discount will be able to effectively coordinate the supply chain by motivating retailers to exert their sales effort and increase product order quantity. Furthermore, the hybrid quantitative analytical model can provide great flexibility in coordinating the supply chain to achieve an optimal situation through the adjustment of relevant parameters to resolve conflicts of interest among different supply chain members. Numerical examples are provided to demonstrate the effectiveness of the hybrid model.

  4. SVM-based glioma grading. Optimization by feature reduction analysis

    International Nuclear Information System (INIS)

    Zoellner, Frank G.; Schad, Lothar R.; Emblem, Kyrre E.; Harvard Medical School, Boston, MA; Oslo Univ. Hospital

    2012-01-01

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features: (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. Best classification accuracy was found using PCA at 85% (sensitivity = 89%, specificity = 84%) when reducing the feature vector from 101 (100-bins rCBV histogram + age) to 3 principal components. In comparison, classification accuracy by PCC was 82% (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided similar classification accuracy to literature values (≈87%) while reducing the number of features by up to 98%. (orig.)
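
    For readers who want to reproduce the general recipe (feature reduction followed by SVM classification), the following minimal scikit-learn sketch mirrors the PCA variant described above; the synthetic data, kernel, and cross-validation setup are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch of PCA-based feature reduction feeding an SVM classifier,
# in the spirit of the glioma-grading pipeline described above. The random
# data stand in for the 100-bin rCBV histograms plus patient age.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((101, 101))           # 101 patients x (100 histogram bins + age)
y = rng.integers(0, 2, size=101)     # hypothetical low/high-grade labels

# Reduce the 101-dimensional feature vector to 3 principal components,
# then classify with a standard SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```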

  5. SVM-based glioma grading. Optimization by feature reduction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zoellner, Frank G.; Schad, Lothar R. [University Medical Center Mannheim, Heidelberg Univ., Mannheim (Germany). Computer Assisted Clinical Medicine; Emblem, Kyrre E. [Massachusetts General Hospital, Charlestown, A.A. Martinos Center for Biomedical Imaging, Boston MA (United States). Dept. of Radiology; Harvard Medical School, Boston, MA (United States); Oslo Univ. Hospital (Norway). The Intervention Center

    2012-11-01

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features: (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. Best classification accuracy was found using PCA at 85% (sensitivity = 89%, specificity = 84%) when reducing the feature vector from 101 (100-bins rCBV histogram + age) to 3 principal components. In comparison, classification accuracy by PCC was 82% (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided similar classification accuracy to literature values (≈87%) while reducing the number of features by up to 98%. (orig.)

  6. LTE, WiMAX and WLAN network design, optimization and performance analysis

    CERN Document Server

    Korowajczuk, Leonhard

    2011-01-01

    A technological overview of LTE and WiMAX. LTE, WiMAX and WLAN Network Design, Optimization and Performance Analysis provides a practical guide to LTE and WiMAX technologies, introducing the various tools and concepts used within them. In addition, topics such as traffic modelling of IP-centric networks, RF propagation, fading, mobility, and indoor coverage are explored; new techniques that increase throughput, such as MIMO and AAS technology, are highlighted; and simulation, network design and performance analysis are also examined. Finally, in the latter part of the book Korowajczuk gives a step-by-step…

  7. Adults with autism spectrum disorders exhibit decreased sensitivity to reward parameters when making effort-based decisions

    Directory of Open Access Journals (Sweden)

    Damiano Cara R

    2012-05-01

    Full Text Available Background: Efficient effort expenditure to obtain rewards is critical for optimal goal-directed behavior and learning. Clinical observation suggests that individuals with autism spectrum disorders (ASD) may show dysregulated reward-based effort expenditure, but no behavioral study to date has assessed effort-based decision-making in ASD. Methods: The current study compared a group of adults with ASD to a group of typically developing adults on the Effort Expenditure for Rewards Task (EEfRT), a behavioral measure of effort-based decision-making. In this task, participants were provided with the probability of receiving a monetary reward on a particular trial and asked to choose between either an "easy task" (less motoric effort) for a small, stable reward or a "hard task" (greater motoric effort) for a variable but consistently larger reward. Results: Participants with ASD chose the hard task more frequently than did the control group, yet were less influenced by differences in reward value and probability than the control group. Additionally, effort-based decision-making was related to repetitive behavior symptoms across both groups. Conclusions: These results suggest that individuals with ASD may be more willing to expend effort to obtain a monetary reward regardless of the reward contingencies. More broadly, the results suggest that behavioral choices may be less influenced by information about reward contingencies in individuals with ASD. This atypical pattern of effort-based decision-making may be relevant for understanding the heightened reward motivation for circumscribed interests in ASD.

  8. A hybrid finite element analysis and evolutionary computation method for the design of lightweight lattice components with optimized strut diameter

    DEFF Research Database (Denmark)

    Salonitis, Konstantinos; Chantzis, Dimitrios; Kappatos, Vasileios

    2017-01-01

    …approaches or with the use of topology optimization methodologies. An optimization approach utilizing multipurpose optimization algorithms has not been proposed yet. This paper presents a novel user-friendly method for the design optimization of lattice components towards weight minimization, which combines finite element analysis and evolutionary computation. The proposed method utilizes the cell homogenization technique in order to reduce the computational cost of the finite element analysis and a genetic algorithm in order to search for the most lightweight lattice configuration. A bracket consisting…

  9. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jakeman, John Davis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stephens, John Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vigil, Dena M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wildey, Timothy Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bohnhoff, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hu, Kenneth T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dalbey, Keith R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bauman, Lara E [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hough, Patricia Diane [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  10. Cognitive effort: A neuroeconomic approach

    Science.gov (United States)

    Braver, Todd S.

    2015-01-01

    Cognitive effort has been implicated in numerous theories regarding normal and aberrant behavior and the physiological response to engagement with demanding tasks. Yet, despite broad interest, no unifying, operational definition of cognitive effort itself has been proposed. Here, we argue that the most intuitive and epistemologically valuable treatment is in terms of effort-based decision-making, and advocate a neuroeconomics-focused research strategy. We first outline psychological and neuroscientific theories of cognitive effort. Then we describe the benefits of a neuroeconomic research strategy, highlighting how it affords greater inferential traction than do traditional markers of cognitive effort, including self-reports and physiologic markers of autonomic arousal. Finally, we sketch a future series of studies that can leverage the full potential of the neuroeconomic approach toward understanding the cognitive and neural mechanisms that give rise to phenomenal, subjective cognitive effort. PMID:25673005

  11. Combined Structural Optimization and Aeroelastic Analysis of a Vertical Axis Wind Turbine

    DEFF Research Database (Denmark)

    Roscher, Björn; Ferreira, Carlos Simao; Bernhammer, Lars O.

    2015-01-01

    Floating offshore wind energy poses challenges for turbine design. A possible solution is vertical axis wind turbines, which are possibly easier to scale up and require fewer components (lower maintenance) and a smaller floating structure than horizontal axis wind turbines. This paper presents a structural optimization and aeroelastic analysis of an optimized Troposkein vertical axis wind turbine to minimize the relation between the rotor mass and the swept area. The aeroelastic behavior of the different designs has been analyzed using a modified version of the HAWC2 code with the Actuator Cylinder model to compute the aerodynamics of the vertical axis wind turbine. The combined shape and topology optimization of a vertical axis wind turbine shows a minimum mass-to-area ratio of 1.82 kg/m2 for blades with varying blade sections from a NACA 0040 at the attachment points to a NACA 0015…

  12. An optimal big data workflow for biomedical image analysis

    Directory of Open Access Journals (Sweden)

    Aurelle Tchagna Kouanou

    Full Text Available Background and objective: In the medical field, data volume is growing rapidly, and traditional methods cannot manage it efficiently. In biomedical computation, the continuing challenges are the management, analysis, and storage of biomedical data. Nowadays, big data technology plays a significant role in the management, organization, and analysis of data, using machine learning and artificial intelligence techniques. It also allows quick access to data using NoSQL databases. Thus, big data technologies include new frameworks to process medical data such as biomedical images. It therefore becomes important to develop methods and/or architectures based on big data technologies for the complete processing of biomedical image data. Method: This paper describes big data analytics for biomedical images, shows examples reported in the literature, briefly discusses new methods used in processing, and offers conclusions. We argue for adapting and extending related work in the field of big data software, using the Hadoop and Spark frameworks. These provide an optimal and efficient architecture for biomedical image analysis. The paper thus gives a broad overview of big data analytics to automate biomedical image diagnosis. A workflow with optimal methods and algorithms for each step is proposed. Results: Two architectures for image classification are suggested. We use the Hadoop framework to design the first, and the Spark framework for the second. The proposed Spark architecture allows us to develop appropriate and efficient methods to leverage a large number of images for classification, which can be customized with respect to each other. Conclusions: The proposed architectures are more complete, easier to use, and adaptable in all of the steps from conception. The obtained Spark architecture is the most complete, because it facilitates the implementation of algorithms with its embedded libraries. Keywords: Biomedical images, Big…

  13. Legal Analysis of Coal Mining in Efforts to Maintain The Environmental Sustainability

    Directory of Open Access Journals (Sweden)

    Iwan Irawan

    2016-07-01

    Full Text Available The goal of this article was to suggest that the government make appropriate laws and policies in order to optimize the utilization of coal on the basis of environmental sustainability. The research applied library research to several earlier studies and to Act No. 4 of 2009. Data were analyzed qualitatively by way of decomposition, connection with the rules, and the opinions of legal experts. It can be concluded that investors are not optimal in managing and conserving coal mining, and that the government has not standardized environmental management.

  14. A highly optimized grid deployment: the metagenomic analysis example.

    Science.gov (United States)

    Aparicio, Gabriel; Blanquer, Ignacio; Hernández, Vicente

    2008-01-01

    Computational resources and computationally expensive processes are two topics that are not growing at the same rate. The availability of large amounts of computing resources in Grid infrastructures does not mean that efficiency is not an important issue. It is necessary to analyze the whole process to improve partitioning and submission schemas, especially in the most critical experiments. This is the case for metagenomic analysis, and this text shows the work done to optimize a Grid deployment, which has led to a reduction of the response time and the failure rates. Metagenomic studies aim at processing samples of multiple specimens to extract the genes and proteins that belong to the different species. In many cases, the sequencing of the DNA of many microorganisms is hindered by the impossibility of growing significant samples of isolated specimens. Many bacteria cannot survive alone and require interaction with other organisms. In such cases, the DNA information available belongs to different kinds of organisms. One important stage in metagenomic analysis consists of the extraction of fragments followed by the comparison and analysis of their function. By comparison to existing chains whose function is well known, fragments can be classified. This process is computationally intensive and requires several iterations of alignment and phylogeny classification steps. Source samples reach several millions of sequences, each of which can reach thousands of nucleotides. These sequences are compared to a selected part of the "Non-redundant" database, which only includes the information from eukaryotic species. From this first analysis, a refining process is performed and alignment analysis is restarted from the results. This process implies several CPU years. The article describes and analyzes the difficulties in fragmenting, automating and checking the above operations in current Grid production environments. This environment has been…

  15. Optimization of radiation protection in nuclear power plants in Italy

    International Nuclear Information System (INIS)

    Benassai, S.; Bramati, L.

    1984-01-01

    There are reasons to think that, at present, cost-benefit analysis cannot be broadly used as an optimization procedure at the design stage of NPPs. First of all, agreement has not yet been reached on the possibility (also with reference to social and political considerations) of assigning a monetary value to the manSv. In addition, it is believed that the feasibility of a cost-benefit analysis, given the present uncertainties on the various components of the cost (i.e. the costs of health detriment associated with the production and installation of protective means and equipment), can perhaps be demonstrated for very simple cases, but not for the NPP as a whole. With regard to this point it is important to note how the input data, often assumed from a cautious standpoint, can dramatically influence the results. Other problems arise from the fact that until now the proposed cost-benefit calculations have generally referred to routine discharges of radioactive effluents or to shielding related to normal operating conditions, while a major concern is now the radiological consequences of accidents. It is also important to note that, from the economic point of view as well, the major efforts are concentrated on safety-related systems, in order to reduce the probability of events which can lead to catastrophic consequences. On these bases we prefer to implement optimization procedures at the design stage by making reference to past experience and to the evolution of technology, and to concentrate new efforts on the operating period, when working procedures can produce a more effective reduction of radiation exposure. (author)

  16. Optimization of power system operation

    CERN Document Server

    Zhu, Jizhong

    2015-01-01

    This book applies the latest applications of new technologies to power system operation and analysis, including new and important areas that are not covered in the previous edition. Optimization of Power System Operation covers both traditional and modern technologies, including power flow analysis, steady-state security region analysis, security constrained economic dispatch, multi-area system economic dispatch, unit commitment, optimal power flow, smart grid operation, optimal load shed, optimal reconfiguration of distribution network, power system uncertainty analysis, power system sensitivity analysis, analytic hierarchical process, neural network, fuzzy theory, genetic algorithm, evolutionary programming, and particle swarm optimization, among others. New topics such as the wheeling model, multi-area wheeling, and the total transfer capability computation in multiple areas are also addressed. The new edition of this book continues to provide engineers and academics with a complete picture of the optimization of techn…

  17. Response efficiency during functional communication training: effects of effort on response allocation.

    OpenAIRE

    Richman, D M; Wacker, D P; Winborn, L

    2001-01-01

    An analogue functional analysis revealed that the problem behavior of a young child with developmental delays was maintained by positive reinforcement. A concurrent-schedule procedure was then used to vary the amount of effort required to emit mands. Results suggested that response effort can be an important variable when developing effective functional communication training programs.

  18. Electron heat transport analysis of low-collisionality plasmas in the neoclassical-transport-optimized configuration of LHD

    International Nuclear Information System (INIS)

    Murakami, Sadayoshi; Yamada, Hiroshi; Wakasa, Arimitsu

    2002-01-01

    Electron heat transport in low-collisionality LHD plasma is investigated in order to study the effect of neoclassical transport optimization on thermal plasma transport, with an optimization level typical of so-called "advanced stellarators". In the central region, a higher electron temperature is obtained in the optimized configuration, and transport analysis suggests a considerable effect of neoclassical transport on the electron heat transport, assuming an ion-root level of the radial electric field. The experimental results obtained support future reactor designs in which the neoclassical and/or anomalous transport is reduced by magnetic field optimization in a non-axisymmetric configuration. (author)

  19. Closed-loop adaptation of neurofeedback based on mental effort facilitates reinforcement learning of brain self-regulation.

    Science.gov (United States)

    Bauer, Robert; Fels, Meike; Royter, Vladislav; Raco, Valerio; Gharabaghi, Alireza

    2016-09-01

    Considering self-rated mental effort during neurofeedback may improve training of brain self-regulation. Twenty-one healthy, right-handed subjects performed kinesthetic motor imagery of opening their left hand, while threshold-based classification of beta-band desynchronization resulted in proprioceptive robotic feedback. The experiment consisted of two blocks in a cross-over design. The participants rated their perceived mental effort nine times per block. In the adaptive block, the threshold was adjusted on the basis of these ratings whereas adjustments were carried out at random in the other block. Electroencephalography was used to examine the cortical activation patterns during the training sessions. The perceived mental effort was correlated with the difficulty threshold of neurofeedback training. Adaptive threshold-setting reduced mental effort and increased the classification accuracy and positive predictive value. This was paralleled by an inter-hemispheric cortical activation pattern in low frequency bands connecting the right frontal and left parietal areas. Optimal balance of mental effort was achieved at thresholds significantly higher than maximum classification accuracy. Rating of mental effort is a feasible approach for effective threshold-adaptation during neurofeedback training. Closed-loop adaptation of the neurofeedback difficulty level facilitates reinforcement learning of brain self-regulation. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
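
    The core of the adaptive block is a closed-loop rule that moves the classification threshold according to the self-rated effort. A minimal sketch of such a rule is shown below; the rating scale, target level, gain, and sign convention are hypothetical, not the study's parameters.

```python
# Toy sketch of effort-adaptive threshold setting: nudge the neurofeedback
# difficulty threshold toward a target self-rated effort level. Whether high
# effort should raise or lower the threshold is an assumption here.
def update_threshold(threshold, effort_rating, target=5.0, gain=0.05,
                     lo=0.0, hi=1.0):
    """Lower the threshold when rated effort is too high, raise it when low."""
    threshold -= gain * (effort_rating - target)
    return min(hi, max(lo, threshold))

th = 0.5
for rating in [8, 7, 6, 5, 4]:      # hypothetical ratings across one block
    th = update_threshold(th, rating)
    print(f"rating={rating} -> threshold={th:.2f}")
```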

  20. Improved helicopter aeromechanical stability analysis using segmented constrained layer damping and hybrid optimization

    Science.gov (United States)

    Liu, Qiang; Chattopadhyay, Aditi

    2000-06-01

    Aeromechanical stability plays a critical role in helicopter design, and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained layer (SCL) damping treatment and composite tailoring is investigated for improved rotor aeromechanical stability using a formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface-bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in the air resonance analysis under hover conditions. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface-bonded SCLs for improved damping characteristics. Parameters such as the stacking sequence of the composite laminates and the placement of SCLs are used as design variables. Detailed numerical studies are presented for the aeromechanical stability analysis. It is shown that the optimum blade design yields a significant increase in rotor lead-lag regressive modal damping compared to the initial system.

  1. Economic growth, biodiversity loss and conservation effort.

    Science.gov (United States)

    Dietz, Simon; Adger, W Neil

    2003-05-01

    This paper investigates the relationship between economic growth, biodiversity loss and efforts to conserve biodiversity using a combination of panel and cross-section data. If economic growth is a cause of biodiversity loss through habitat transformation and other means, then we would expect an inverse relationship. But if higher levels of income are associated with increasing real demand for biodiversity conservation, then investment to protect remaining diversity should grow and the rate of biodiversity loss should slow with growth. Initially, economic growth and biodiversity loss are examined within the framework of the environmental Kuznets hypothesis. Biodiversity is represented by predicted species richness, generated for tropical terrestrial biodiversity using a species-area relationship. The environmental Kuznets hypothesis is investigated with reference to a comparison of fixed and random effects models to allow the relationship to vary for each country. It is concluded that an environmental Kuznets curve between income and rates of loss of habitat and species does not exist in this case. The role of conservation effort in addressing environmental problems is examined through state protection of land and the regulation of trade in endangered species, two important means of biodiversity conservation. This analysis shows that the extent of government environmental policy increases with economic development. We argue that, although the data are problematic, the implication of these models is that conservation effort can only ever result in a partial deceleration of biodiversity decline, partly because protected areas serve multiple functions and are not necessarily designated to protect biodiversity. Nevertheless, the institutional and policy response components of the income-biodiversity relationship are important but are not well captured through cross-country regression analysis.

  2. Analysis Balance Parameter of Optimal Ramp metering

    Science.gov (United States)

    Li, Y.; Duan, N.; Yang, X.

    2018-05-01

    Ramp metering is a motorway control method that avoids the onset of congestion by limiting the access of ramp inflows onto the main carriageway of the motorway. The optimization model of ramp metering is developed based upon the cell transmission model (CTM). With the piecewise linear structure of the CTM, the corresponding motorway traffic optimization problem can be formulated as a linear programming (LP) problem. LP problems can be solved by established solution algorithms such as SIMPLEX or interior-point methods for the global optimal solution. Commercial software (CPLEX) is adopted in this study to solve the LP problem within reasonable computational time. The concept is illustrated through a case study of the United Kingdom M25 Motorway. The optimal solution provides useful insights and guidance on how to manage motorway traffic in order to maximize the corresponding efficiency.
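
    To make the LP formulation concrete, the following toy sketch solves a ramp-metering-style linear program with scipy.optimize.linprog rather than CPLEX; the constraint coefficients and capacities are invented for illustration and are not taken from the paper.

```python
# Toy linear program in the spirit of the CTM-based ramp-metering formulation:
# choose ramp inflows r1, r2 to maximize served flow subject to spare mainline
# capacity and per-ramp demand limits. All numbers are illustrative.
from scipy.optimize import linprog

# maximize r1 + r2  <=>  minimize -(r1 + r2)
c = [-1.0, -1.0]
A_ub = [[1.0, 1.0],   # combined metered inflow within spare mainline capacity
        [1.0, 0.0],   # ramp 1 demand/queue limit
        [0.0, 1.0]]   # ramp 2 demand/queue limit
b_ub = [1800.0, 1200.0, 900.0]   # veh/h
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # optimal metering rates and served flow
```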

  3. Fishing effects in northeast Atlantic shelf seas : patterns in fishing effort, diversity and community structure. III. International trawling effort in the North Sea : an analysis of spatial and temporal trends

    DEFF Research Database (Denmark)

    Jennings, S.; Alsväg, J.; Cotter, A.J.R.

    1999-01-01

    …of beam trawling effort increases from north to south. Plots of annual fishing effort by ICES statistical rectangle (211 boxes of 0.5° latitude × 1° longitude) indicate that the majority of fishing effort in the North Sea is concentrated in very few rectangles. Thus mean annual total…

  4. Effort rights-based management

    DEFF Research Database (Denmark)

    Squires, Dale; Maunder, Mark; Allen, Robin

    2017-01-01

    Effort rights-based fisheries management (RBM) is less widely used than catch rights, whether for groups or individuals. Because RBM on catch or effort necessarily requires a total allowable catch (TAC) or total allowable effort (TAE), RBM is discussed in conjunction with issues in assessing fish populations and providing TACs or TAEs. Both approaches have advantages and disadvantages, and there are trade-offs between the two approaches. In a narrow economic sense, catch rights are superior because of the type of incentives created, but once the costs of research to improve stock assessments…

  5. Optimal Resource Management in a Stochastic Schaefer Model

    OpenAIRE

    Richard Hartman

    2008-01-01

    This paper incorporates uncertainty into the growth function of the Schaefer model for the optimal management of a biological resource. There is a critical value for the biological stock, and it is optimal to do no harvesting if the biological stock is below that critical value and to exert whatever harvesting effort is necessary to prevent the stock from rising above that critical value. The introduction of uncertainty increases the critical value of the stock.
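
    The threshold policy described above is easy to simulate. The sketch below applies the rule (no harvesting below the critical stock, harvest the excess above it) to a stochastic logistic stock; the growth parameters, noise level, and critical value are assumptions for illustration.

```python
# Simulation sketch of the bang-bang harvesting rule from the abstract above:
# harvest nothing below a critical stock level, and harvest whatever would
# push the stock above that level. Parameters are made up.
import numpy as np

rng = np.random.default_rng(1)
r, K, x_crit = 0.5, 1.0, 0.6       # growth rate, capacity, critical stock
x, harvest = 0.3, []
for _ in range(200):
    growth = r * x * (1 - x / K) + 0.05 * rng.normal()   # stochastic logistic
    x = max(x + growth, 0.0)
    h = max(x - x_crit, 0.0)       # cap the stock at the critical value
    x -= h
    harvest.append(h)
print(f"mean harvest: {np.mean(harvest):.3f}, final stock: {x:.3f}")
```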

  6. Infiltration route analysis using thermal observation devices (TOD) and optimization techniques in a GIS environment.

    Science.gov (United States)

    Bang, Soonam; Heo, Joon; Han, Soohee; Sohn, Hong-Gyoo

    2010-01-01

    Infiltration-route analysis is a military application of geospatial information system (GIS) technology. In order to find susceptible routes, optimal-path-searching algorithms are applied to minimize the cost function, which is the summed result of detection probability. The cost function was determined according to the thermal observation device (TOD) detection probability, the viewshed analysis results, and two feature layers extracted from the vector product interim terrain data. The detection probability is computed and recorded for an individual cell (50 m × 50 m), and the optimal infiltration routes are determined with A* algorithm by minimizing the summed costs on the routes from a start point to an end point. In the present study, in order to simulate the dynamic nature of a real-world problem, one thousand cost surfaces in the GIS environment were generated with randomly located TODs and randomly selected infiltration start points. Accordingly, one thousand sets of vulnerable routes for infiltration purposes could be found, which could be accumulated and presented as an infiltration vulnerability map. This application can be further utilized for both optimal infiltration routing and surveillance network design. Indeed, dynamic simulation in the GIS environment is considered to be a powerful and practical solution for optimization problems. A similar approach can be applied to the dynamic optimal routing for civil infrastructure, which requires consideration of terrain-related constraints and cost functions.
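
    The optimal-path step can be illustrated with a compact grid search. The sketch below runs an A*-style search (with a zero heuristic, i.e. Dijkstra's algorithm) over per-cell detection costs; the grid values and 4-neighbour moves are illustrative assumptions, not the study's data.

```python
# Minimal A*-style search over a grid of per-cell detection "costs", mirroring
# the infiltration-route idea: the route minimizes summed detection probability.
import heapq

def a_star(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    pq = [(0.0, start)]                 # (accumulated cost, cell)
    g = {start: 0.0}                    # best known cost to each cell
    came = {}                           # predecessor map for path recovery
    while pq:
        _, cur = heapq.heappop(pq)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return g[goal], path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g[cur] + cost[nr][nc]
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came[(nr, nc)] = cur
                    # zero heuristic here, so this reduces to Dijkstra
                    heapq.heappush(pq, (ng, (nr, nc)))
    return float("inf"), []

grid = [[0.1, 0.9, 0.1],
        [0.1, 0.9, 0.1],
        [0.1, 0.1, 0.1]]
print(a_star(grid, (0, 0), (0, 2)))    # route detours around the 0.9 cells
```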

  7. Impact analysis of the spacer grid assembly and shape optimization of the attached spring

    Energy Technology Data Exchange (ETDEWEB)

    Park, K. J.; Lee, Z. N. [Hanyang University, Seoul (Korea)

    2002-04-01

    Spacer grids support fuel rods and maintain geometry under external impact loads. A simulation is performed of the strength of a spacer grid under impact load. The critical impact load that leads to plastic deformation is identified by a free-fall test. A finite element model is established for the nonlinear simulation of the impact process. The simulation model is tuned based on the free-fall test. The model considers the aspects of welding and the contacts between components. Nonlinear finite element analysis is carried out using a software system called ABAQUS/EXPLICIT. The results are discussed from a design viewpoint. Design requirements are defined and a design process is established. The design process includes mathematical optimization as well as practical design methods. The shape of the grid spring is designed to maintain its function during the lifetime of the fuel assembly. A structural optimization method is employed for the shape design, and a good design is found. Commercial codes are utilized for structural analysis and optimization. 18 refs., 61 figs., 3 tabs. (Author)

  8. A Simulation Method to Find the Optimal Design of Photovoltaic Home System in Malaysia, Case Study: A Building Integrated Photovoltaic in Putra Jaya

    OpenAIRE

    Riza Muhida; Maisarah Ali; Puteri Shireen Jahn Kassim; Muhammad Abu Eusuf; Agus G.E. Sutjipto; Afzeri

    2009-01-01

    Over recent years, the number of building integrated photovoltaic (BIPV) installations for home systems has been increasing in Malaysia. The paper concerns an analysis - as part of current Research and Development (R&D) efforts - to integrate photovoltaics as an architectural feature of a detached house in the new satellite township of Putrajaya, Malaysia. The analysis was undertaken using calculation and simulation tools to optimize the performance of the BIPV home system. In this study, the simu…

  9. Structural analysis and optimization procedure of the TFTR device substructure

    International Nuclear Information System (INIS)

    Driesen, G.

    1975-10-01

    A structural evaluation of the TFTR device substructure is performed in order to verify the feasibility of the proposed design concept as well as to establish a design optimization procedure for minimizing the material and fabrication cost of the substructure members. A preliminary evaluation of the seismic capability is also presented. The design concept on which the analysis is based is consistent with that described in the Conceptual Design Status Briefing report dated June 18, 1975

  10. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    Science.gov (United States)

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher's discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply requires solving a convex programming problem, for which a closed-form solution is guaranteed. Moreover, we also generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, and hence propose the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  11. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    Science.gov (United States)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in
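
    In outline, the OUU recipe couples a response surface with Monte Carlo dispersion of its inputs. The following sketch evaluates a made-up quadratic response surface under Gaussian input uncertainty and keeps the design with the best mean response; all coefficients and uncertainty levels are hypothetical, not the dissertation's data.

```python
# Sketch of optimization under uncertainty via a response surface + Monte
# Carlo: for each candidate design, disperse the inputs, average the quadratic
# response (a stand-in for regression rate), and keep the best mean.
import numpy as np

rng = np.random.default_rng(2)

def regression_rate(x1, x2):
    # Quadratic response surface with a two-factor interaction term.
    return 1.0 + 0.8*x1 + 0.5*x2 - 0.6*x1**2 - 0.4*x2**2 + 0.3*x1*x2

best = None
for x1 in np.linspace(0, 1, 21):           # candidate design settings
    for x2 in np.linspace(0, 1, 21):
        # Dispersed evaluations: add input uncertainty and average.
        samples = regression_rate(x1 + 0.05*rng.normal(size=1000),
                                  x2 + 0.05*rng.normal(size=1000))
        mean = samples.mean()
        if best is None or mean > best[0]:
            best = (mean, x1, x2)
print(best)                                 # (best mean rate, x1, x2)
```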

  12. Optimal harvesting policy of a stochastic two-species competitive model with Lévy noise in a polluted environment

    Science.gov (United States)

    Zhao, Yu; Yuan, Sanling

    2017-07-01

    As is well known, sudden environmental shocks and toxicants can affect the population dynamics of fish species, so a mechanistic understanding of how sudden environmental change and toxicants influence the optimal harvesting policy requires development. This paper presents the optimal harvesting of a stochastic two-species competitive model with Lévy noise in a polluted environment, where the Lévy noise is used to describe sudden climate change. Due to the discontinuity of the Lévy noise, the classical optimal harvesting methods based on the explicit solution of the corresponding Fokker-Planck equation are invalid. The object of this paper is to fill this gap and establish the optimal harvesting policy. By using aggregation and ergodic methods, approximations of the optimal harvesting effort and the maximum expectation of sustainable yields are obtained. Numerical simulations are carried out to support these theoretical results. Our analysis shows that the Lévy noise and the mean stress measure of toxicant in organisms may affect the optimal harvesting policy significantly.

  13. Overview and application of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) toolbox

    Science.gov (United States)

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  14. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    Science.gov (United States)

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be constructed that takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The Matlab code developed to construct optimized portfolios is also provided. The application of these files can be generalized to a variety of communities interested in investing in PV systems.
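
    The portfolio idea carries over directly to a mean-variance computation. The sketch below scores candidate PV portfolios by expected generation minus a variance penalty; the three-system statistics and risk-aversion weight are made-up inputs (not the Oregon data), and random search over the simplex stands in for a proper quadratic-programming solver.

```python
# Mean-variance sketch for weighting PV systems in a community portfolio:
# maximize expected generation minus a risk penalty on generation variance.
import numpy as np

mu = np.array([4.1, 3.8, 4.4])             # mean daily kWh per system (assumed)
cov = np.array([[0.30, 0.10, 0.05],
                [0.10, 0.25, 0.08],
                [0.05, 0.08, 0.40]])       # generation covariance (assumed)
lam = 2.0                                  # risk-aversion weight (assumed)

best_w, best_score = None, -np.inf
rng = np.random.default_rng(3)
for _ in range(20000):                     # random search over the simplex
    w = rng.dirichlet(np.ones(3))          # weights sum to 1, all nonnegative
    score = w @ mu - lam * (w @ cov @ w)
    if score > best_score:
        best_w, best_score = w, score
print(best_w.round(3), round(best_score, 3))
```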

  15. 13C metabolic flux analysis: optimal design of isotopic labeling experiments.

    Science.gov (United States)

    Antoniewicz, Maciek R

    2013-12-01

    Measuring fluxes by 13C metabolic flux analysis (13C-MFA) has become a key activity in chemical and pharmaceutical biotechnology. Optimal design of isotopic labeling experiments is of central importance to 13C-MFA as it determines the precision with which fluxes can be estimated. Traditional methods for selecting isotopic tracers and labeling measurements did not fully utilize the power of 13C-MFA. Recently, new approaches were developed for optimal design of isotopic labeling experiments based on parallel labeling experiments and algorithms for rational selection of tracers. In addition, advanced isotopic labeling measurements were developed based on tandem mass spectrometry. Combined, these approaches can dramatically improve the quality of 13C-MFA results with important applications in metabolic engineering and biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Parametric analysis and optimization for a combined power and refrigeration cycle

    International Nuclear Information System (INIS)

    Wang Jiangfeng; Dai Yiping; Gao Lin

    2008-01-01

    A combined power and refrigeration cycle is proposed, which combines the Rankine cycle and the absorption refrigeration cycle. This combined cycle uses a binary ammonia-water mixture as the working fluid and produces both power output and refrigeration output simultaneously with only one heat source. A parametric analysis is conducted to evaluate the effects of thermodynamic parameters on the performance of the combined cycle. It is shown that the heat source temperature, environment temperature, refrigeration temperature, turbine inlet pressure, turbine inlet temperature, and basic solution ammonia concentration have significant effects on the net power output, refrigeration output and exergy efficiency of the combined cycle. A parameter optimization is carried out by means of a genetic algorithm to reach the maximum exergy efficiency. The optimized exergy efficiency is 43.06% under the given conditions.

  17. A Tale of Three Cities: Piloting a Measure of Effort and Comfort Levels within Town-Gown Relationships

    Science.gov (United States)

    Gavazzi, Stephen M.; Fox, Michael

    2015-01-01

    This article extends the argument that scholarship on marriages and families provides invaluable insights into town-gown relationships. First, a four-square matrix constructed from the twin dimensions of effort and comfort levels is used to describe a typology of campus and community associations. Next the construction of the Optimal College Town…

  18. A Novel Double Cluster and Principal Component Analysis-Based Optimization Method for the Orbit Design of Earth Observation Satellites

    Directory of Open Access Journals (Sweden)

    Yunfeng Dong

    2017-01-01

    Full Text Available The weighted sum and genetic algorithm-based hybrid method (WSGA-based HM), which has been applied to multiobjective orbit optimizations, is negatively influenced by human factors through the artificial choice of the weight coefficients in the weighted sum method and by the slow convergence of GA. To address these two problems, a cluster and principal component analysis-based optimization method (CPC-based OM) is proposed, in which many candidate orbits are gradually randomly generated until the optimal orbit is obtained using a data mining method, that is, cluster analysis based on principal components. Then, a second cluster analysis of the orbital elements is introduced into CPC-based OM to improve convergence, developing a novel double cluster and principal component analysis-based optimization method (DCPC-based OM). In DCPC-based OM, the cluster analysis based on principal components has the advantage of reducing human influence, and the cluster analysis based on the six orbital elements can reduce the search space to effectively accelerate convergence. The test results from a multiobjective numerical benchmark function and the orbit design results of an Earth observation satellite show that DCPC-based OM converges more efficiently than WSGA-based HM, and that DCPC-based OM, to some degree, reduces the influence of the human factors present in WSGA-based HM.
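
    One round of the cluster-then-select idea can be sketched in a few lines: project candidate objective vectors with PCA, cluster them, and carry the best cluster forward. The data sizes, scoring rule, and cluster count below are illustrative assumptions, not the paper's settings.

```python
# Sketch of the cluster-then-select step in the CPC/DCPC idea: score candidate
# orbits, project the objective vectors with PCA, cluster in that space, and
# keep the best cluster's candidates for the next generation round.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
objectives = rng.random((500, 4))          # 500 candidate orbits x 4 objectives

pcs = PCA(n_components=2).fit_transform(objectives)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pcs)

# Keep the cluster whose members have the lowest mean (minimization) score.
scores = objectives.sum(axis=1)
best_cluster = min(range(5), key=lambda k: scores[labels == k].mean())
survivors = objectives[labels == best_cluster]
print(survivors.shape)                     # candidates carried forward
```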

  19. Evaluation of Frameworks for HSCT Design Optimization

    Science.gov (United States)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  20. Analysis and optimization of hybrid excitation permanent magnet synchronous generator for stand-alone power system

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Huijun, E-mail: huijun024@gmail.com [School of Instrumentation Science and Opto-electronics Engineering, Beihang University (China); Qu, Zheng; Tang, Shaofei; Pang, Mingqi [School of Instrumentation Science and Opto-electronics Engineering, Beihang University (China); Zhang, Mingju [Shanghai Aerospace Control Technology Institute, Shanghai (China)

    2017-08-15

    Highlights: • A novel permanent magnet generator structure is proposed to reduce the voltage regulation ratio. • Finite element and equivalent circuit methods are both employed to enable rapid generator design. • The design of experiments (DOE) method is used to optimize the permanent magnet shape to reduce voltage waveform distortion. • The obtained analysis and experiment results verify the proposed design methods. - Abstract: In this paper, the electromagnetic design and permanent magnet shape optimization of a permanent magnet synchronous generator with hybrid excitation are investigated. Based on the generator structure and principle, a design outline is presented for obtaining high efficiency and low voltage fluctuation. In order to enable rapid design, equivalent magnetic circuits for the permanent magnet and iron poles are developed. At the same time, finite element analysis is employed. Furthermore, by means of the design of experiments (DOE) method, the permanent magnet shape is optimized to reduce voltage waveform distortion. Finally, the proposed design methods are validated by the analytical and experimental results.

  1. Analysis and optimization of hybrid excitation permanent magnet synchronous generator for stand-alone power system

    International Nuclear Information System (INIS)

    Wang, Huijun; Qu, Zheng; Tang, Shaofei; Pang, Mingqi; Zhang, Mingju

    2017-01-01

    Highlights: • A novel permanent magnet generator structure is proposed to reduce the voltage regulation ratio. • Finite element and equivalent circuit methods are both employed to enable rapid generator design. • The design of experiments (DOE) method is used to optimize the permanent magnet shape to reduce voltage waveform distortion. • The obtained analysis and experiment results verify the proposed design methods. - Abstract: In this paper, the electromagnetic design and permanent magnet shape optimization of a permanent magnet synchronous generator with hybrid excitation are investigated. Based on the generator structure and principle, a design outline is presented for obtaining high efficiency and low voltage fluctuation. In order to enable rapid design, equivalent magnetic circuits for the permanent magnet and iron poles are developed. At the same time, finite element analysis is employed. Furthermore, by means of the design of experiments (DOE) method, the permanent magnet shape is optimized to reduce voltage waveform distortion. Finally, the proposed design methods are validated by the analytical and experimental results.

  2. Design optimization and analysis of vertical axis wind turbine blade

    International Nuclear Information System (INIS)

    Jarral, A.; Ali, M.; Sahir, M.H.

    2013-01-01

    Wind energy is a clean and renewable source of energy and is also the world's fastest growing energy resource. Keeping in view power shortages and the growing cost of energy, low-cost wind energy has become a primary solution. It is imperative that economies and individuals begin to conserve energy and focus on the production of energy from renewable sources. The present study describes a wind turbine blade designed with enhanced aerodynamic properties. A vertical axis turbine is chosen because of its easy installation, low noise and environmentally friendly characteristics. Vertical axis wind turbines are thought to be ideal for installations where wind conditions are not consistent. The presented turbine blade is best suited for roadsides, where the rated speed due to vehicles is most often 8 m·s⁻¹. To obtain an optimal shape design, the symmetrical profile NACA0025 has been considered, which is then analyzed for stability and aerodynamic characteristics at optimal conditions using ANSYS and CFD analysis tools. (author)

  3. Structural optimization

    CERN Document Server

    MacBain, Keith M

    2009-01-01

    This book intends to supplement the engineer's box of analysis and design tools, making optimization as commonplace as the finite element method in the engineering workplace. It introduces structural optimization and the methods of nonlinear programming, such as Lagrange multipliers, Kuhn-Tucker conditions, and the calculus of variations.
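
    For reference, the Kuhn-Tucker (KKT) conditions named in this record take the following standard textbook form for minimizing f(x) subject to inequality constraints g_i(x) ≤ 0; this is general background, not an excerpt from the book.

```latex
% Karush-Kuhn-Tucker (first-order) conditions for
%   min f(x)  subject to  g_i(x) <= 0,  i = 1, ..., m,
% at a regular local optimum x* with multipliers mu_i:
\begin{aligned}
&\nabla f(x^*) + \sum_{i=1}^{m} \mu_i \,\nabla g_i(x^*) = 0
  && \text{(stationarity)} \\
&g_i(x^*) \le 0 && \text{(primal feasibility)} \\
&\mu_i \ge 0 && \text{(dual feasibility)} \\
&\mu_i \, g_i(x^*) = 0 && \text{(complementary slackness)}
\end{aligned}
```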

  4. Time regimes optimization of the activation-measurement cycle in neutron activation analysis

    International Nuclear Information System (INIS)

    Szopa, Z.

    1986-01-01

    Criteria for the optimum time conditions of the activation-measurement cycle in neutron activation analysis have been formulated. The functions to be optimized, i.e. the relative precision or the figure of merit of the measured analytical signal as functions of the cycle time parameters, have been proposed. The structure and capabilities of the optimizing programme STOPRC are presented. This programme is written entirely in FORTRAN and takes advantage of a library of standard spectra and fast, stochastic algorithms. The time conditions predicted with the aid of the programme are discussed and compared with the experimental results for the determination of tungsten in industrial dusts. 31 refs., 4 figs. (author)

  5. Economic Optimization Analysis of Chengdu Electric Community Bus Operation

    Science.gov (United States)

    Yidong, Wang; Yun, Cai; Zhengping, Tan; Xiong, Wan

    2018-03-01

    In recent years, the government has strongly supported and promoted electric vehicles and has given priority to demonstration and popularization in the field of public transport, so the economics of public transport operations has drawn increasing attention. In this paper, the Chengdu wireless-charging pure electric community bus is used as the research object. The battery, air conditioning, driver behavior and other factors influencing operating economy are analyzed, and the operation plan is optimized through case data analysis; reasonable battery matching and operating modes help operators effectively save operating costs and improve economic efficiency.

  6. Combined optimal-pathlengths method for near-infrared spectroscopy analysis

    International Nuclear Information System (INIS)

    Liu Rong; Xu Kexin; Lu Yanhui; Sun Huili

    2004-01-01

    Near-infrared (NIR) spectroscopy is a rapid, reagent-less and nondestructive analytical technique, which is being increasingly employed for quantitative applications in chemistry, pharmaceutics and the food industry, and for the optical analysis of biological tissue. The performance of NIR technology greatly depends on the ability to control and acquire data from the instrument and to calibrate and analyse the data. Optical pathlength is a key parameter of an NIR instrument, and it has been thoroughly discussed for univariate quantitative analysis in the presence of photometric errors. Although multiple wavelengths can provide more chemical information, it is difficult to determine a single pathlength that is suitable for every wavelength region. A theoretical investigation of a selection procedure for multiple pathlengths, called the combined optimal-pathlengths (COP) method, is presented in this paper, and an extensive comparison with the single-pathlength method is also performed on simulated and experimental NIR spectral data sets. The results obtained show that the COP method can greatly improve the prediction accuracy in NIR spectroscopic quantitative analysis.

  7. Optimization of XRFS for the analysis of toxic elements and heavy metals in coffee products

    International Nuclear Information System (INIS)

    Orlic, I.; Makjanic, J.; Valkovic, V.

    1986-01-01

    X-ray fluorescence spectroscopy can be successfully used for routine on-line analysis of different agricultural products, e.g. for food quality control. The optimization of the system for such purposes and the results obtained are illustrated by the analysis of coffee. (author)

  8. City Logistics Modeling Efforts : Trends and Gaps - A Review

    NARCIS (Netherlands)

    Anand, N.R.; Quak, H.J.; Van Duin, J.H.R.; Tavasszy, L.A.

    2012-01-01

    In this paper, we present a review of city logistics modeling efforts reported in the literature for urban freight analysis. The review framework takes into account the diversity and complexity found in the present-day city logistics practice. Next, it covers the different aspects in the modeling

  9. Adaptive finite element method for shape optimization

    KAUST Repository

    Morin, Pedro; Nochetto, Ricardo H.; Pauletti, Miguel S.; Verani, Marco

    2012-01-01

    We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.

  11. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid binary chromosome of NAEP has three parts: the first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, the partial least squares with full spectrum, the partial least squares combined with a genetic algorithm, the uninformative variable elimination method, the backpropagation neural network with full spectrum, the backpropagation neural network combined with a genetic algorithm, and the proposed method are each used to build the component prediction model. Experimental results verify that the proposed method predicts more accurately and robustly and is a practical spectral analysis tool.
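
    A minimal sketch of the three-part hybrid chromosome described above; the exact bit layout and parameter ranges are assumptions, not the paper's specification.

```python
# Hedged sketch: decode a three-part binary chromosome (layout assumed).
import numpy as np

TOPOLOGY_BITS, N_WAVELENGTHS, MUTATION_BITS = 8, 256, 8    # assumed layout

def decode(chrom):
    topo = chrom[:TOPOLOGY_BITS]                           # part 1: network topology
    mask = chrom[TOPOLOGY_BITS:TOPOLOGY_BITS + N_WAVELENGTHS].astype(bool)  # part 2: wavelengths
    mut = chrom[TOPOLOGY_BITS + N_WAVELENGTHS:]            # part 3: mutation parameter
    hidden = 1 + int("".join(map(str, topo)), 2)           # hidden-layer size in 1..256
    sigma = int("".join(map(str, mut)), 2) / 255.0 * 0.5   # mutation step in [0, 0.5]
    return hidden, mask, sigma

rng = np.random.default_rng(0)
chrom = rng.integers(0, 2, TOPOLOGY_BITS + N_WAVELENGTHS + MUTATION_BITS)
hidden, mask, sigma = decode(chrom)
print(hidden, "hidden neurons,", int(mask.sum()), "wavelengths selected, sigma =", round(sigma, 3))
```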

  12. Computed Tomographic Analysis of Ventral Atlantoaxial Optimal Safe Implantation Corridors in 27 Dogs.

    Science.gov (United States)

    Leblond, Guillaume; Gaitero, Luis; Moens, Noel M M; Zur Linden, Alex; James, Fiona M K; Monteith, Gabrielle J; Runciman, John

    2017-11-01

    Objectives  Ventral atlantoaxial stabilization techniques are challenging surgical procedures in dogs. Available surgical guidelines are based upon subjective anatomical landmarks, and limited radiographic and computed tomographic data. The aims of this study were (1) to provide detailed anatomical descriptions of atlantoaxial optimal safe implantation corridors to generate objective recommendations for optimal implant placements and (2) to compare anatomical data obtained in non-affected Toy breed dogs, affected Toy breed dogs suffering from atlantoaxial instability and non-affected Beagle dogs. Methods  Anatomical data were collected from a prospectively recruited population of 27 dogs using a previously validated method of optimal safe implantation corridor analysis using computed tomographic images. Results  Optimal implant positions and three-dimensional numerical data were generated successfully in all cases. Anatomical landmarks could be used to generate objective definitions of optimal insertion points which were applicable across all three groups. Overall the geometrical distribution of all implant sites was similar in all three groups with a few exceptions. Clinical Significance  This study provides extensive anatomical data available to facilitate surgical planning of implant placement for atlantoaxial stabilization. Our data suggest that non-affected Toy breed dogs and non-affected Beagle dogs constitute reasonable research models to study atlantoaxial stabilization constructs. Schattauer GmbH Stuttgart.

  13. Optimization Analysis of Supply Chain Resource Allocation in Customized Online Shopping Service Mode

    Directory of Open Access Journals (Sweden)

    Jianming Yao

    2015-01-01

    For an online-shopping company, whether it can provide its customers with customized service is the key to enhancing its customers' experience value and its own competitiveness. A good customized service requires effective integration and reasonable allocation of the company's supply chain resources running in the background. Based on an analysis of the allocation of supply chain resources in the customized online shopping service mode and its operational characteristics, this paper puts forward an optimization model for the resource allocation and builds an improved ant algorithm to solve it. Finally, the effectiveness and feasibility of the optimization method and algorithm are demonstrated by a numerical simulation. This paper finds that the special online shopping environment gives the service demands many dynamic and uncertain characteristics. Different customized service patterns and their combinations should be matched with different supply chain resource allocations. The optimization model not only reflects the required service cost and delivery time in the objective function, but also considers the optimization of service scale effects and the relation between integration benefits and risks. The improved ant algorithm has obvious advantages in flexibly balancing the multiobjective optimization, adjusting the convergence speed, and tuning the operation parameters.
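
    A generic ant-colony sketch, not the paper's improved algorithm: ants assign service tasks to supply chain resources guided by pheromone trails and an assumed cost matrix, with evaporation and best-so-far reinforcement.

```python
# Hedged sketch: basic ant-colony assignment of tasks to resources (data invented).
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_res = 6, 4
cost = rng.uniform(1, 10, size=(n_tasks, n_res))           # assumed allocation costs
tau = np.ones_like(cost)                                   # pheromone trails
alpha, beta, evap, n_ants = 1.0, 2.0, 0.1, 20

best_assign, best_cost = None, np.inf
for it in range(100):
    for _ in range(n_ants):
        w = tau ** alpha * (1.0 / cost) ** beta            # attractiveness of each resource
        probs = w / w.sum(axis=1, keepdims=True)
        assign = np.array([rng.choice(n_res, p=probs[t]) for t in range(n_tasks)])
        c = cost[np.arange(n_tasks), assign].sum()
        if c < best_cost:
            best_assign, best_cost = assign, c
    tau *= 1.0 - evap                                      # evaporation
    tau[np.arange(n_tasks), best_assign] += 1.0 / best_cost  # reinforce the best-so-far solution
print("assignment:", best_assign, "cost:", round(best_cost, 2))
```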

  14. Application of genetic programming in shape optimization of concrete gravity dams by metaheuristics

    Directory of Open Access Journals (Sweden)

    Abdolhossein Baghlani

    2014-12-01

    A gravity dam maintains its stability against the external loads by its massive size. Hence, minimization of the weight of the dam can remarkably reduce the construction costs. In this paper, a procedure for finding the optimal shape of concrete gravity dams with a computationally efficient approach is introduced. Genetic programming (GP) in conjunction with metaheuristics is used for this purpose. As a case study, shape optimization of the Bluestone dam is presented. Pseudo-dynamic analysis is carried out on a total number of 322 models in order to establish a database of the results. This database is then used to find appropriate relations based on GP for the design criteria of the dam. This procedure eliminates the necessity of the time-consuming process of structural analyses in evolutionary optimization methods. The method is hybridized with three different metaheuristics, including particle swarm optimization, the firefly algorithm (FA), and teaching-learning-based optimization, and a comparison is made. The results show that although all algorithms are very suitable, FA is slightly superior to the other two algorithms in finding a lighter structure in fewer iterations. The proposed method reduces the weight of the dam by up to 14.6% with very low computational effort.
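
    For reference, the firefly algorithm named above has a very compact standard form. In this sketch a simple test function stands in for the GP-derived design criteria; it is not the paper's hybrid implementation.

```python
# Hedged sketch: standard firefly algorithm on a stand-in surrogate objective.
import numpy as np

def weight_surrogate(x):                 # stand-in for the GP-derived design criteria
    return np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(2)
n, dim, beta0, gamma, alpha = 15, 4, 1.0, 1.0, 0.05
X = rng.uniform(0, 1, (n, dim))          # normalized shape variables

for it in range(200):
    f = np.array([weight_surrogate(x) for x in X])
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:              # j is brighter (lower weight): move i toward j
                r2 = np.sum((X[i] - X[j]) ** 2)
                step = beta0 * np.exp(-gamma * r2)
                X[i] = np.clip(X[i] + step * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5), 0, 1)
f = np.array([weight_surrogate(x) for x in X])
print("best design:", X[np.argmin(f)].round(3), "surrogate weight:", round(f.min(), 5))
```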

  15. Optimal Life-Cycle Investing with Flexible Labor Supply: A Welfare Analysis of Life-Cycle Funds

    OpenAIRE

    Francisco J. Gomes; Laurence J. Kotlikoff; Luis M. Viceira

    2008-01-01

    We investigate optimal consumption, asset accumulation and portfolio decisions in a realistically calibrated life-cycle model with flexible labor supply. Our framework allows for wage rate uncertainty, variable labor supply, social security benefits and portfolio choice over safe bonds and risky equities. Our analysis reinforces prior findings that equities are the preferred asset for young households, with the optimal share of equities generally declining prior to retirement. However, variab...

  16. Lead optimization attrition analysis (LOAA): a novel and general methodology for medicinal chemistry.

    Science.gov (United States)

    Munson, Mark; Lieberman, Harvey; Tserlin, Elina; Rocnik, Jennifer; Ge, Jie; Fitzgerald, Maria; Patel, Vinod; Garcia-Echeverria, Carlos

    2015-08-01

    Herein, we report a novel and general method, lead optimization attrition analysis (LOAA), to benchmark two distinct small-molecule lead series using a relatively unbiased, simple technique and commercially available software. We illustrate this approach with data collected during lead optimization of two independent oncology programs as a case study. Easily generated graphics and attrition curves enabled us to calibrate progress and support go/no go decisions on each program. We believe that this data-driven technique could be used broadly by medicinal chemists and management to guide strategic decisions during drug discovery. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Meeting the inventory optimization challenges of the 1990s

    International Nuclear Information System (INIS)

    Jones, L.K. IV

    1992-01-01

    This paper outlines some of the efforts being taken by the new Southern Nuclear Operating Company to augment the inventory optimization efforts of the parent Southern Company. Southern Nuclear Operating Company's management is undertaking a multifaceted program to enhance their inventory practices. Key elements of this program include improved performance reporting, procurement support, and material identification. These elements will enable Southern Nuclear to meet inventory management challenges dynamically in the 1990s

  18. Optimal allocation of International Atomic Energy Agency inspection resources

    International Nuclear Information System (INIS)

    Markin, J.T.

    1987-12-01

    The Safeguards Department of the International Atomic Energy Agency (IAEA) conducts inspections to assure the peaceful use of a state's nuclear materials and facilities. Because of limited resources for conducting inspections, the careful disposition of inspection effort among these facilities is essential if the IAEA is to attain its safeguards goals. This report describes an optimization procedure for assigning an inspection effort to maximize attainment of IAEA goals. The procedure does not require quantitative estimates of safeguards effectiveness, material value, or facility importance. Instead, the optimization is based on qualitative, relative prioritizations of inspection activities and materials to be safeguarded. This allocation framework is applicable to an arbitrary group of facilities such as a state's fuel cycle, the facilities inspected by an operations division, or all of the facilities inspected by the IAEA

  19. Exergoeconomic analysis and optimization of a model cogeneration system; Analise exergoeconomica e otimizacao de um modelo de sistema de cogeracao

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Leonardo S.R. [Centro de Pesquisas de Energia Eletrica, Rio de Janeiro, RJ (Brazil). Area de Conhecimento de Materiais e Mecanica]. E-mail: lsrv@cepel.br; Donatelli, Joao L.M. [Espirito Santo Univ., Vitoria, ES (Brazil). Dept. de Engenharia Mecanica]. E-mail: donatelli@lttc.com.ufrj.br; Cruz, Manuel E.C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Mecanica]. E-mail: manuel@serv.com.ufrj.br

    2000-07-01

    In this paper we perform exergetic and exergoeconomic analyses, a mathematical optimization and an exergoeconomic optimization of a gas turbine-heat recovery boiler cogeneration system with fixed electricity and steam production rates. The exergy balance is calculated with the IPSE pro thermal system simulation program. In the exergetic analysis, exergy destruction rates, exergetic efficiencies and structural bond coefficients for each component are evaluated as functions of the decision variables of the optimization problem. In the exergoeconomic analysis the cost for each exergetic flow is determined through cost balance equations and additional auxiliary equations from cost partition criteria. Mathematical optimization is performed by the variable metric method (software EES - Engineering Equation Solver) and by successive quadratic programming (IMSL library - Fortran Power Station). The exergoeconomic optimization is performed on the basis of the exergoeconomic variables. System optimization is also performed by evaluating the derivative of the objective function through finite differences. This paper concludes with a comparison between the four optimization techniques employed. (author)

  20. Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2009-04-10

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well as a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15x improvement compared with the original code at a given concurrency. Additionally, we present detailed analysis of each optimization, which reveal surprising hardware bottlenecks and software challenges for future multicore systems and applications.
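
    A toy version of the auto-tuning idea (the real work generates and times many C kernel variants): exhaustively time candidate cache-blocking factors for a blocked matrix transpose and keep the fastest configuration on the host machine.

```python
# Hedged sketch: brute-force auto-tuning of one blocking parameter.
import time
import numpy as np

def blocked_transpose(a, b):                     # cache-blocked loop nest; b is the tuning knob
    n = a.shape[0]
    out = np.empty_like(a)
    for i in range(0, n, b):
        for j in range(0, n, b):
            out[j:j + b, i:i + b] = a[i:i + b, j:j + b].T
    return out

a = np.random.rand(2048, 2048)
timings = {}
for b in (16, 32, 64, 128, 256, 512):            # candidate block sizes
    t0 = time.perf_counter()
    blocked_transpose(a, b)
    timings[b] = time.perf_counter() - t0
print("fastest block size:", min(timings, key=timings.get))
```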

  1. Standardization and optimization of arthropod inventories-the case of Iberian spiders

    DEFF Research Database (Denmark)

    Bondoso Cardoso, Pedro Miguel

    2009-01-01

    and optimization of sampling protocols, especially for mega-diverse arthropod taxa. This study had two objectives: (1) propose guidelines and statistical methods to improve the standardization and optimization of arthropod inventories, and (2) to propose a standardized and optimized protocol for Iberian spiders......, by finding common results between the optimal options for the different sites. The steps listed were successfully followed in the determination of a sampling protocol for Iberian spiders. A protocol with three sub-protocols of varying degrees of effort (24, 96 and 320 h of sampling) is proposed. I also...

  2. Analysis of efficiency and marketing trends cost optimization in enterprises of baking branch

    Directory of Open Access Journals (Sweden)

    Lukan О.М.

    2017-06-01

    Full Text Available Today, at the bakery industry, little attention is paid to marketing activities. Limited financial resources and the lack of a comprehensive assessment of the effectiveness of marketing activities leads to a reduction in marketing budgets and a decrease in the profitability of the enterprise as a whole. Therefore, despite the complexity of conducting an analysis of the cost effectiveness of marketing activities, in market conditions it is necessary to control the level of costs and the formation of optimal marketing budgets. In the work it is determined that the main direction of marketing activity evaluation is the analysis of the cost effectiveness for its implementation. A scientific-methodical approach to the analysis of the effectiveness of marketing costs in the bakery industry is suggested. The analysis of the cost effectiveness of marketing activities on the basis of the assumption that marketing costs are a factor variable determining the patterns of changes in the values of the resulting indicators of financial and economic activities of the enterprise, such as net income from sales of products, gross profit, financial results from operating activities and net profit (losses. The main directions of optimization of marketing activities at bakery enterprises are given.

  3. Optimization of a truck-drone in tandem delivery network using k-means and genetic algorithm

    Directory of Open Access Journals (Sweden)

    Sergio Mourelo Ferrandez

    2016-04-01

    Purpose: The purpose of this paper is to investigate the effectiveness of implementing unmanned aerial delivery vehicles in delivery networks. We investigate the notion of reduced overall delivery time, energy, and costs for a truck-drone network by comparing the in-tandem system with a stand-alone delivery effort. The objectives are (1) to investigate the time, energy, and costs associated with a truck-drone delivery network compared to a standalone truck or drone, (2) to propose an optimization algorithm that determines the optimal number of launch sites and locations given delivery requirements and drones per truck, and (3) to develop mathematical formulations for closed-form estimations of the optimal number of launch locations, the optimal total time, and the associated cost for the system. Design/methodology/approach: The algorithm herein computes the minimal time of delivery utilizing k-means clustering to find launch locations, as well as a genetic algorithm to solve the truck route as a traveling salesman problem (TSP). The optimal solution is determined by finding the minimum cost associated with the parabolic convex cost function, using k-means to determine the most efficient launch locations and a genetic algorithm to determine the truck route between those launch locations. Findings: Results show improvements with in-tandem delivery efforts as opposed to standalone systems. Further, multiple drones per truck are more optimal and contribute to savings in both energy and time. For this, we sampled various initialization variables to derive closed-form mathematical solutions for the problem. Originality/value: Ultimately, this provides the necessary analysis of an integrated truck-drone delivery system which could be implemented by a company in order to maximize deliveries while minimizing time and energy. Closed-form mathematical solutions can be used as
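
    A condensed sketch of the two-stage heuristic described above, with invented coordinates and parameters: scikit-learn's k-means clusters customer locations into launch sites, and a small elitist genetic algorithm with ordered crossover sequences the truck's visits as a TSP.

```python
# Hedged sketch: k-means launch sites + GA tour over them (data invented).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
customers = rng.uniform(0, 10, (60, 2))          # assumed delivery coordinates
k = 6                                            # assumed number of launch locations
sites = KMeans(n_clusters=k, n_init=10, random_state=0).fit(customers).cluster_centers_

def tour_length(order):                          # closed truck tour over the launch sites
    pts = sites[np.r_[order, order[:1]]]
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

def crossover(p1, p2):                           # ordered crossover keeps each site exactly once
    a, b = sorted(rng.choice(k, 2, replace=False))
    child = [-1] * k
    child[a:b] = list(p1[a:b])
    fill = [g for g in p2 if g not in child]
    return np.array([fill.pop(0) if g == -1 else g for g in child])

pop = [rng.permutation(k) for _ in range(40)]
for gen in range(200):
    pop.sort(key=tour_length)
    elite = pop[:10]                             # elitism
    children = []
    for _ in range(30):
        i, j = rng.choice(10, 2, replace=False)
        c = crossover(elite[i], elite[j])
        if rng.random() < 0.2:                   # swap mutation
            i, j = rng.choice(k, 2, replace=False)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = elite + children
pop.sort(key=tour_length)
print("launch-site visit order:", pop[0], "tour length:", round(tour_length(pop[0]), 2))
```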

  4. Long term performance degradation analysis and optimization of anode supported solid oxide fuel cell stacks

    International Nuclear Information System (INIS)

    Parhizkar, Tarannom; Roshandel, Ramin

    2017-01-01

    Highlights: • A degradation based optimization framework is developed. • The cost of electricity based on degradation of solid oxide fuel cells is minimized. • The effects of operating conditions on degradation mechanisms are investigated. • Results show 7.12% lower cost of electricity in comparison with base case. • Degradation based optimization is a beneficial concept for long term analysis. - Abstract: The main objective of this work is minimizing the cost of electricity of solid oxide fuel cell stacks by decelerating degradation mechanisms rate in long term operation for stationary power generation applications. The degradation mechanisms in solid oxide fuel cells are caused by microstructural changes, reactions between lanthanum strontium manganite and electrolyte, poisoning by chromium, carburization on nickel particles, formation of nickel sulfide, nickel coarsening, nickel oxidation, loss of conductivity and crack formation in the electrolyte. The rate of degradation mechanisms depends on the cell operating conditions (cell voltage and fuel utilization). In this study, the degradation based optimization framework is developed which determines optimum operating conditions to achieve a minimum cost of electricity. To show the effectiveness of the developed framework, optimization results are compared with the case that system operates at its design point. Results illustrate optimum operating conditions decrease the cost of electricity by 7.12%. The performed study indicates that degradation based optimization is a beneficial concept for long term performance degradation analysis of energy conversion systems.

  5. Thermodynamic performance analysis and optimization of a solar-assisted combined cooling, heating and power system

    International Nuclear Information System (INIS)

    Wang, Jiangjiang; Lu, Yanchao; Yang, Ying; Mao, Tianzhi

    2016-01-01

    This study aims to present a thermodynamic performance analysis and to optimize the configurations of a hybrid combined cooling, heating and power (CCHP) system incorporating solar energy and natural gas. A basic natural gas CCHP system containing a power generation unit, a heat recovery system, an absorption cooling system and a storage tank is integrated with solar photovoltaic (PV) panels and/or a heat collector. Based on thermodynamic modeling, the thermodynamic performance, including energy and exergy efficiencies, under variable work conditions, such as electric load factor, solar irradiance and installation ratio, of the solar PV panels and heat collector is investigated and analyzed. The results of the energy supply side analysis indicate that the integration of solar PV into the CCHP system more efficiently improves the exergy efficiency, whereas the integration of a solar heat collector improves the energy efficiency. To match the building loads, the optimization method combined with the operation strategy is employed to optimize the system configurations to maximize the integrated benefits of energy and economic costs. The optimization results of demand–supply matching demonstrate that the integration of a solar heat collector achieves a better integrated performance than the solar PV integration in the specific case study. - Highlights: • Design a CCHP system integrated with solar PV and heat collector. • Present the energy and exergy analyses under variable work conditions. • Propose an optimization method of CCHP system for demand-supply matching.

  6. Stop the Bleeding: the Development of a Tool to Streamline NASA Earth Science Metadata Curation Efforts

    Science.gov (United States)

    le Roux, J.; Baker, A.; Caltagirone, S.; Bugbee, K.

    2017-12-01

    The Common Metadata Repository (CMR) is a high-performance, high-quality repository for Earth science metadata records, and serves as the primary way to search NASA's growing 17.5 petabytes of Earth science data holdings. Released in 2015, CMR has the capability to support several different metadata standards already being utilized by NASA's combined network of Earth science data providers, or Distributed Active Archive Centers (DAACs). The Analysis and Review of CMR (ARC) Team located at Marshall Space Flight Center is working to improve the quality of records already in CMR with the goal of making records optimal for search and discovery. This effort entails a combination of automated and manual review, where each NASA record in CMR is checked for completeness, accuracy, and consistency. This effort is highly collaborative in nature, requiring communication and transparency of findings amongst NASA personnel, DAACs, the CMR team and other metadata curation teams. Through the evolution of this project it has become apparent that there is a need to document and report findings, as well as track metadata improvements in a more efficient manner. The ARC team has collaborated with Element 84 in order to develop a metadata curation tool to meet these needs. In this presentation, we will provide an overview of this metadata curation tool and its current capabilities. Challenges and future plans for the tool will also be discussed.

  7. Contribution to optimization of individual doses of workers in shipment of generator technetium-99m

    International Nuclear Information System (INIS)

    Fonseca, Lizandra Pereira de Souza

    2010-01-01

    The Instituto de Pesquisas Energeticas e Nucleares (IPEN) researches and produces radiopharmaceuticals that are distributed throughout Brazil; currently, the radiopharmaceutical with the largest number of packages shipped per year and the highest total activity is the technetium-99m generator. To reduce the individual doses of workers involved in the production of radiopharmaceuticals, a study of radiological protection optimization in the shipment process of the technetium generator was performed, using the following techniques: differential cost-benefit analysis, integral cost-benefit analysis, multi-attribute utility analysis and multi-criteria outranking analysis. With changes in the packaging configuration for generator dispatch and the acquisition of a conveyor mat, it was possible to establish four protection options. The attributes considered were the protection cost, the collective dose, the individual dose and the physical effort required of a worker to move the package without the mat. To assess the robustness of the analytical solutions found with the optimization techniques, a sensitivity study was performed; it showed that option 3 is more robust than option 1, which ceases to be the analytical solution when the protection cost increases by R$ 20.000,00. (author)
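
    A minimal illustration of the multi-attribute utility step (the weights and attribute values are invented, not the study's data): each attribute is normalized so that 1 is best and then combined with importance weights.

```python
# Hedged sketch: weighted multi-attribute utility over four protection options.
import numpy as np

# rows: options 1-4; columns: protection cost, collective dose, individual dose, physical effort
attrs = np.array([[10.0, 8.0, 6.0, 9.0],
                  [20.0, 5.0, 4.0, 9.0],
                  [25.0, 4.0, 3.0, 2.0],
                  [40.0, 3.0, 3.0, 1.0]])
weights = np.array([0.3, 0.3, 0.2, 0.2])         # assumed relative importance

norm = attrs.min(axis=0) / attrs                 # lower is better for every attribute here
utility = norm @ weights
print("utilities:", utility.round(3), "-> preferred option:", int(utility.argmax()) + 1)
```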

  8. Space-Mapping-Based Interpolation for Engineering Optimization

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    of the fine model at off-grid points and, as a result, increases the effective resolution of the design variable domain search and improves the quality of the fine model solution found by the SM optimization algorithm. The proposed method requires little computational effort; in particular no additional...

  9. Finite Element Multidisciplinary Optimization Simulation of Flight Vehicles, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort is concerned with the development of a novel optimization scheme and computer software for the effective design of advanced aerospace vehicles....

  10. Dopamine and Effort-Based Decision Making

    Directory of Open Access Journals (Sweden)

    Irma Triasih Kurniawan

    2011-06-01

    Motivational theories of choice focus on the influence of goal values and strength of reinforcement to explain behavior. By contrast, relatively little is known concerning how the cost of an action, such as the effort expended, contributes to a decision to act. Effort-based decision making addresses how we make an action choice based on an integration of action and goal values. Here we review behavioral and neurobiological data regarding the representation of effort as action cost, and how this impacts decision making. Although organisms expend effort to obtain a desired reward, there is a striking sensitivity to the amount of effort required, such that the net preference for an action decreases as effort cost increases. We discuss the contribution of the neurotransmitter dopamine (DA) towards overcoming response costs and in enhancing an animal's motivation towards effortful actions. We also consider the contribution of brain structures, including the basal ganglia (BG) and anterior cingulate cortex (ACC), in the internal generation of action involving a translation of reward expectation into effortful action.

  11. Optimization planning for the construction of the U.S. EPR

    International Nuclear Information System (INIS)

    Phillips, M. K.

    2008-01-01

    As utilities and project developers in the United States embark on a new nuclear plant construction endeavor, the industry must have the confidence and support of all stakeholders. To gain this confidence and support, cost and schedule certainty must be established. Therein lies the challenge. The owner will insist that constructors and suppliers deliver new plants on time and within project budgets. This will be a significant effort given the limited number of qualified equipment and commodity suppliers, and a shortage of experienced craft labor and supervision. In an effort to manage construction risk and ensure cost and schedule certainty, the AREVA Construction Management Team has led the development of a construction strategy that includes advanced construction methods and technologies such as 3-D modeling, detailed planning and scheduling, open-top construction, state-of-the-art fabrication practices, specialty transportation and lifting methods, and pre-assembly and modularization, otherwise referred to as construction optimization. Unlike off-shore platforms, industrial process plants, and ships, where piping and components can be integrated into the steel building structure, optimized construction of nuclear plants must consider reinforced concrete walls and slabs that serve as the building structure and provide radiation shielding and containment of nuclear systems. These differences in design and construction require a significantly different analysis approach that is focused on rooms of a building rather than the whole building or plant. This paper will focus on the process developed and currently being implemented by the U.S. EPR (Evolutionary Power Reactor) Construction Team to identify opportunities for optimization and then to evaluate and select the rooms or areas of the plant where pre-assembly or modularization will provide the greatest benefit to construction cost and schedule certainty. Key measures for comparison and selection are critical path

  12. A hybrid multi-level optimization approach for the dynamic synthesis/design and operation/control under uncertainty of a fuel cell system

    International Nuclear Information System (INIS)

    Kim, Kihyung; Spakovsky, Michael R. von; Wang, M.; Nelson, Douglas J.

    2011-01-01

    During system development, large-scale, complex energy systems require multi-disciplinary efforts to achieve system quality, cost, and performance goals. As systems become larger and more complex, the number of possible system configurations and technologies, which meet the designer's objectives optimally, increases greatly. In addition, both transient and environmental effects may need to be taken into account. Thus, the difficulty of developing the system via the formulation of a single optimization problem in which the optimal synthesis/design and operation/control of the system are achieved simultaneously is great and rather problematic. This difficulty is further heightened with the introduction of uncertainty analysis, which transforms the problem from a purely deterministic one into a probabilistic one. Uncertainties, system complexity and nonlinearity, and large numbers of decision variables quickly render the single optimization problem unsolvable by conventional, single-level, optimization strategies. To address these difficulties, the strategy adopted here combines a dynamic physical decomposition technique for large-scale optimization with a response sensitivity analysis method for quantifying system response uncertainties to given uncertainty sources. The feasibility of such a hybrid approach is established by applying it to the synthesis/design and operation/control of a 5 kW proton exchange membrane (PEM) fuel cell system.

  13. Impact of right-ventricular apical pacing on the optimal left-ventricular lead positions measured by phase analysis of SPECT myocardial perfusion imaging

    International Nuclear Information System (INIS)

    Hung, Guang-Uei; Huang, Jin-Long; Lin, Wan-Yu; Tsai, Shih-Chung; Wang, Kuo-Yang; Chen, Shih-Ann; Lloyd, Michael S.; Chen, Ji

    2014-01-01

    The use of SPECT phase analysis to optimize left-ventricular (LV) lead positions for cardiac resynchronization therapy (CRT) was performed at baseline, but CRT works as simultaneous right ventricular (RV) and LV pacing. The aim of this study was to assess the impact of RV apical (RVA) pacing on optimal LV lead positions measured by SPECT phase analysis. This study prospectively enrolled 46 patients. Two SPECT myocardial perfusion scans were acquired under sinus rhythm with complete left bundle branch block and RVA pacing, respectively, following a single injection of 99mTc-sestamibi. LV dyssynchrony parameters and optimal LV lead positions were measured by the phase analysis technique and then compared between the two scans. The LV dyssynchrony parameters were significantly larger with RVA pacing than with sinus rhythm (p < 0.01). In 39 of the 46 patients, the optimal LV lead positions were the same between RVA pacing and sinus rhythm (kappa = 0.861). In 6 of the remaining 7 patients, the optimal LV lead positions were along the same radial direction, but RVA pacing shifted the optimal LV lead positions toward the base. The optimal LV lead positions measured by SPECT phase analysis were consistent, no matter whether the SPECT images were acquired under sinus rhythm or RVA pacing. In some patients, RVA pacing shifted the optimal LV lead positions toward the base. This study supports the use of baseline SPECT myocardial perfusion imaging to optimize LV lead positions to increase CRT efficacy. (orig.)

  14. Analysis of Influence Factors on Extraction Rate of Lutein from Marigold and Optimization of Saponification Conditions

    OpenAIRE

    Wang Xian-Qing; Li Man; Liu Yan-Yan; Liang Ying

    2015-01-01

    After lutein esters were extracted from marigold powder by ultrasonic-assisted organic solvent extraction, saponification conditions such as saponification solution concentration, saponification liquid dosage, saponification temperature and saponification time were optimized by response surface analysis. The results showed that the optimal saponification conditions are saponification solution concentration 10%, saponification liquid dosage 200 mL, saponification temperature 50°C, and saponification time 2 h. Und...

  15. A Quantitative Comparison Between Size, Shape, Topology and Simultaneous Optimization for Truss Structures

    Directory of Open Access Journals (Sweden)

    T.E. Müller

    There are typically three broad categories of structural optimization, namely size, shape and topology. Over the past few decades various researchers have focused on developing techniques for optimizing structures by considering either one or a combination of these aspects. In this paper the efficiency of these techniques is investigated in an effort to quantify the improvement of the result obtained by utilizing a more complex optimization routine. The percentage of the structural weight saved and the computational effort required are used as measures to compare these techniques. The well-known genetic algorithm with elitism is used to perform these tests on various benchmark structures found in the literature. Some of the results obtained include that a simultaneous approach produces, on average, a 22% better solution than a simple size optimization and a 12% improvement when compared to a staged approach where the size, shape and topology of the structure are considered sequentially. From these results, it is concluded that a significant saving can be made by using a more complex optimization routine, such as a simultaneous approach.
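
    For concreteness, here is a bare-bones genetic algorithm with elitism applied to a size optimization, with overstress handled by a penalty; the ten-bar data and the crude stress model are invented placeholders rather than one of the paper's benchmarks.

```python
# Hedged sketch: elitist GA choosing discrete bar sections to minimize penalized weight.
import numpy as np

rng = np.random.default_rng(4)
n_bars = 10
lengths = np.full(n_bars, 1.0)                   # m, invented geometry
force, sigma_max = 1.0e4, 250.0e6                # N and Pa, invented loading
catalog = np.linspace(1e-5, 1e-3, 64)            # discrete cross-section catalog, m^2

def fitness(idx):                                # lower is better
    a = catalog[idx]
    weight = np.sum(lengths * a)
    overstress = np.maximum(force / a / sigma_max - 1.0, 0.0)
    return weight * (1.0 + 10.0 * np.sum(overstress))   # penalized weight

pop = [rng.integers(0, 64, n_bars) for _ in range(50)]
for gen in range(300):
    pop.sort(key=fitness)
    elite = pop[:5]                              # elitism: best designs survive unchanged
    children = []
    while len(children) < 45:
        p1, p2 = (pop[rng.integers(0, 25)] for _ in range(2))   # parents from the top half
        cut = rng.integers(1, n_bars)
        child = np.concatenate([p1[:cut], p2[cut:]])            # one-point crossover
        if rng.random() < 0.3:                   # mutation: redraw one section index
            child[rng.integers(n_bars)] = rng.integers(0, 64)
        children.append(child)
    pop = elite + children
pop.sort(key=fitness)
print("areas (m^2):", catalog[pop[0]], "penalized weight:", fitness(pop[0]))
```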

  16. An optimal control strategies using vaccination and fogging in dengue fever transmission model

    Science.gov (United States)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem of dengue fever transmission. The model classifies the population into human and vector (mosquito) classes. The human population comprises three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected vector classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two control variables in the model: fogging and vaccination. The objective function of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. Vaccination is considered as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We used Pontryagin's Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.
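
    Pontryagin's Minimum Principle is commonly solved numerically with a forward-backward sweep. The sketch below applies it to a scalar linear-quadratic stand-in, not the six-equation dengue model: minimize the integral of x^2 + u^2 subject to x' = -x + u, for which stationarity of the Hamiltonian gives u* = -lambda/2.

```python
# Hedged sketch: forward-backward sweep for a scalar LQ optimal control problem.
import numpy as np

T, N, x0 = 5.0, 500, 1.0
dt = T / N
x = np.zeros(N + 1)
lam = np.zeros(N + 1)
u = np.zeros(N + 1)

for sweep in range(50):
    x[0] = x0                                    # forward: state under the current control
    for i in range(N):
        x[i + 1] = x[i] + dt * (-x[i] + u[i])
    lam[N] = 0.0                                 # backward: costate, lam' = -dH/dx = -2x + lam
    for i in range(N, 0, -1):
        lam[i - 1] = lam[i] - dt * (-2.0 * x[i] + lam[i])
    u = 0.5 * u + 0.5 * (-lam / 2.0)             # relaxed update from dH/du = 2u + lam = 0

print("u(0) after convergence:", round(u[0], 4))
```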

  17. Sensitivity Analysis and Optimization of the Nuclear Fuel Cycle: A Systematic Approach

    Science.gov (United States)

    Passerini, Stefano

    For decades, nuclear energy development was based on the expectation that recycling of the fissionable materials in the used fuel from today's light water reactors into advanced (fast) reactors would be implemented as soon as technically feasible in order to extend the nuclear fuel resources. More recently, arguments have been made for deployment of fast reactors in order to reduce the amount of higher actinides, hence the longevity of radioactivity, in the materials destined to a geologic repository. The cost of the fast reactors, together with concerns about the proliferation of the technology of extraction of plutonium from used LWR fuel as well as the large investments in construction of reprocessing facilities have been the basis for arguments to defer the introduction of recycling technologies in many countries including the US. In this thesis, the impacts of alternative reactor technologies on the fuel cycle are assessed. Additionally, metrics to characterize the fuel cycles and systematic approaches to using them to optimize the fuel cycle are presented. The fuel cycle options of the 2010 MIT fuel cycle study are re-examined in light of the expected slower rate of growth in nuclear energy today, using the CAFCA (Code for Advanced Fuel Cycle Analysis). The Once Through Cycle (OTC) is considered as the base-line case, while advanced technologies with fuel recycling characterize the alternative fuel cycle options available in the future. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide and metal fueled reactors. Additional fuel cycle scenarios presented for the first time in this work assume the deployment of innovative recycling reactor technologies such as the Reduced Moderation Boiling Water Reactors and Uranium-235 initiated Fast Reactors. A sensitivity study focused on system and technology parameters of interest has been conducted to test

  18. Exploring quantum control landscapes: Topology, features, and optimization scaling

    International Nuclear Information System (INIS)

    Moore, Katharine W.; Rabitz, Herschel

    2011-01-01

    Quantum optimal control experiments and simulations have successfully manipulated the dynamics of systems ranging from atoms to biomolecules. Surprisingly, these collective works indicate that the effort (i.e., the number of algorithmic iterations) required to find an optimal control field appears to be essentially invariant to the complexity of the system. The present work explores this matter in a series of systematic optimizations of the state-to-state transition probability on model quantum systems with the number of states N ranging from 5 through 100. The optimizations occur over a landscape defined by the transition probability as a function of the control field. Previous theoretical studies on the topology of quantum control landscapes established that they should be free of suboptimal traps under reasonable physical conditions. The simulations in this work include nearly 5000 individual optimization test cases, all of which confirm this prediction by fully achieving optimal population transfer of at least 99.9% upon careful attention to numerical procedures to ensure that the controls are free of constraints. Collectively, the simulation results additionally show invariance of required search effort to system dimension N. This behavior is rationalized in terms of the structural features of the underlying control landscape. The very attractive observed scaling with system complexity may be understood by considering the distance traveled on the control landscape during a search and the magnitude of the control landscape slope. Exceptions to this favorable scaling behavior can arise when the initial control field fluence is too large or when the target final state recedes from the initial state as N increases.

  19. Optimization of Laminated Composite Structures

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup

    of the contributions of the PhD project are included in the second part of the thesis. Paper A presents a framework for free material optimization where commercially available finite element analysis software is used as analysis tool. Robust buckling optimization of laminated composite structures by including...... allows for a higher degree of tailoring of the resulting material. To enable better utilization of the composite materials, optimum design procedures can be used to assist the engineer. This PhD thesis is focused on developing numerical methods for optimization of laminated composite structures...... nonlinear analysis of structures, buckling and post-buckling analysis of structures, and formulations for optimization of structures considering stiffness, buckling, and post-buckling criteria. Lastly, descriptions, main findings, and conclusions of the papers are presented. The papers forming the basis...

  20. Scope Oriented Thermoeconomic analysis of energy systems. Part II: Formation Structure of Optimality for robust design

    International Nuclear Information System (INIS)

    Piacentino, Antonio; Cardona, Ennio

    2010-01-01

    This paper represents the Part II of a paper in two parts. In Part I the fundamentals of Scope Oriented Thermoeconomics have been introduced, showing a scarce potential for the cost accounting of existing plants; in this Part II the same concepts are applied to the optimization of a small set of design variables for a vapour compression chiller. The method overcomes the limit of most conventional optimization techniques, which are usually based on hermetic algorithms not enabling the energy analyst to recognize all the margins for improvement. The Scope Oriented Thermoeconomic optimization allows us to disassemble the optimization process, thus recognizing the Formation Structure of Optimality, i.e. the specific influence of any thermodynamic and economic parameter in the path toward the optimal design. Finally, the potential applications of such an in-depth understanding of the inner driving forces of the optimization are discussed in the paper, with a particular focus on the sensitivity analysis to the variation of energy and capital costs and on the actual operation-oriented design.

  1. Bessel-function analysis of the optimized star coupler for uniform power splitting.

    Science.gov (United States)

    Song, G Hugh; Park, Mahn Yong

    2004-08-01

    An optimized N x N planar optic star coupler that utilizes directional coupling of arrayed waveguides for uniform power splitting is analyzed on the basis of special properties of the involved Bessel-function series. The analysis has provided a remarkably simple, novel basic design formula for such a device with much needed physical insights into the unique diffraction properties. For the analysis of diffraction from the end of directionally coupled arrayed waveguides, many useful formulas around the Bessel functions, such as the addition theorem and the Kepler-Bessel series, have been given in new forms.

  2. Analysis of multicriteria models application for selection of an optimal artificial lift method in oil production

    Directory of Open Access Journals (Sweden)

    Crnogorac Miroslav P.

    2016-01-01

    Today, different types of deep pumps (piston, centrifugal, screw, hydraulic and water jet pumps) and gas lift (continuous, intermittent and plunger) are applied worldwide for the exploitation of oil reservoirs by artificial lift methods. The maximum oil production rates achieved by these exploitation methods differ significantly. To select the optimal exploitation method for an oil well, multicriteria analysis models are used. This paper presents an analysis of the application of the multicriteria models known as VIKOR, TOPSIS, ELECTRE, AHP and PROMETHEE to the selection of the optimal exploitation method for a typical oil well in the Serbian exploration area. The ranking of the applicability of deep piston pumps, hydraulic pumps, screw pumps, the gas lift method and electric submersible centrifugal pumps indicated that in all of the above multicriteria models except PROMETHEE, the optimal exploitation methods are deep piston pumps and gas lift.
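
    As a flavor of one of the models named above, here is a compact TOPSIS ranking; the decision matrix for the five lift methods and the criteria weights are invented placeholders.

```python
# Hedged sketch: standard TOPSIS on an invented decision matrix.
import numpy as np

# rows: piston pump, hydraulic pump, screw pump, gas lift, ESP
# columns: max production, investment cost, reliability (cost is a "cost" criterion)
X = np.array([[ 40.0, 1.0, 8.0],
              [ 80.0, 3.0, 6.0],
              [ 60.0, 2.0, 7.0],
              [120.0, 4.0, 7.0],
              [300.0, 6.0, 5.0]])
w = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, True])

R = X / np.linalg.norm(X, axis=0)          # vector normalization
V = R * w
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - worst, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("closeness:", closeness.round(3), "-> rank best first:", np.argsort(-closeness))
```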

  3. Nonlinear optimization

    CERN Document Server

    Ruszczynski, Andrzej

    2011-01-01

    Optimization is one of the most important areas of modern applied mathematics, with applications in fields from engineering and economics to finance, statistics, management science, and medicine. While many books have addressed its various aspects, Nonlinear Optimization is the first comprehensive treatment that will allow graduate students and researchers to understand its modern ideas, principles, and methods within a reasonable time, but without sacrificing mathematical precision. Andrzej Ruszczynski, a leading expert in the optimization of nonlinear stochastic systems, integrates the theory and the methods of nonlinear optimization in a unified, clear, and mathematically rigorous fashion, with detailed and easy-to-follow proofs illustrated by numerous examples and figures. The book covers convex analysis, the theory of optimality conditions, duality theory, and numerical methods for solving unconstrained and constrained optimization problems. It addresses not only classical material but also modern top...

  4. Optimal growth when environmental quality is a research asset

    DEFF Research Database (Denmark)

    Groth, Christian; Ricci, Francesco

    2011-01-01

    We advance an original assumption whereby a good state of the environment positively affects labor productivity in R&D such that deteriorating environmental quality negatively impacts R&D. We study the implications of this assumption for the optimal solution in an R&D-based model of growth, where......, we find that it is optimal to re-allocate employment to R&D in line with productivity changes. If environmental quality recovers only partially from pollution, R&D effort optimally begins above its long-run level, then progressively declines to a minimum and eventually increases to its steady...

  5. Flat-plate photovoltaic array design optimization

    Science.gov (United States)

    Ross, R. G., Jr.

    1980-01-01

    An analysis is presented which integrates the results of specific studies in the areas of photovoltaic structural design optimization, optimization of array series/parallel circuit design, thermal design optimization, and optimization of environmental protection features. The analysis is based on minimizing the total photovoltaic system life-cycle energy cost including repair and replacement of failed cells and modules. This approach is shown to be a useful technique for array optimization, particularly when time-dependent parameters such as array degradation and maintenance are involved.

  6. Configuration space analysis of common cost functions in radiotherapy beam-weight optimization algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rowbottom, Carl Graham [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom); Webb, Steve [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom)

    2002-01-07

    The successful implementation of downhill search engines in radiotherapy optimization algorithms depends on the absence of local minima in the search space. Such techniques are much faster than stochastic optimization methods but may become trapped in local minima if they exist. A technique known as 'configuration space analysis' was applied to examine the search space of cost functions used in radiotherapy beam-weight optimization algorithms. A downhill-simplex beam-weight optimization algorithm was run repeatedly to produce a frequency distribution of final cost values. By plotting the frequency distribution as a function of final cost, the existence of local minima can be determined. Common cost functions such as the quadratic deviation of dose to the planning target volume (PTV), integral dose to organs-at-risk (OARs), dose-threshold and dose-volume constraints for OARs were studied. Combinations of the cost functions were also considered. The simple cost function terms such as the quadratic PTV dose and integral dose to OAR cost function terms are not susceptible to local minima. In contrast, dose-threshold and dose-volume OAR constraint cost function terms are able to produce local minima in the example case studied. (author)
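
    The technique is easy to reproduce: run a downhill simplex from many random starts and inspect the frequency distribution of final cost values; more than one recurring final value signals local minima. The toy two-basin objective below stands in for a beam-weight cost function.

```python
# Hedged sketch: configuration space analysis via repeated Nelder-Mead restarts.
import numpy as np
from scipy.optimize import minimize

def cost(w):                                     # toy objective with two basins of different depth
    return np.sum((w - 1) ** 2) * np.sum((w + 1) ** 2) + 0.1 * np.sum((w - 0.5) ** 2)

rng = np.random.default_rng(5)
finals = [minimize(cost, rng.uniform(-2, 2, 3), method="Nelder-Mead").fun
          for _ in range(200)]
# More than one recurring final value in the frequency distribution => local minima.
print("final cost values found:", sorted(set(np.round(finals, 2))))
```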

  7. Predicting Consumer Effort in Finding and Paying for Health Care: Expert Interviews and Claims Data Analysis.

    Science.gov (United States)

    Long, Sandra; Monsen, Karen A; Pieczkiewicz, David; Wolfson, Julian; Khairat, Saif

    2017-10-12

    For consumers to accept and use a health care information system, it must be easy to use, and the consumer must perceive it as being free from effort. Finding health care providers and paying for care are tasks that must be done to access treatment. These tasks require effort on the part of the consumer and can be frustrating when the goal of the consumer is primarily to receive treatments for better health. The aim of this study was to determine the factors that result in consumer effort when finding accessible health care. Having an understanding of these factors will help define requirements when designing health information systems. A panel of 12 subject matter experts was consulted and the data from 60 million medical claims were used to determine the factors contributing to effort. Approximately 60 million claims were processed by the health care insurance organization in a 12-month duration with the population defined. Over 292 million diagnoses from claims were used to validate the panel input. The results of the study showed that the number of people in the consumer's household, number of visits to providers outside the consumer's insurance network, number of adjusted and denied medical claims, and number of consumer inquiries are a proxy for the level of effort in finding and paying for care. The effort level, so measured and weighted per expert panel recommendations, differed by diagnosis. This study provides an understanding of how consumers must put forth effort when engaging with a health care system to access care. For higher satisfaction and acceptance results, health care payers ideally will design and develop systems that facilitate an understanding of how to avoid denied claims, educate on the payment of claims to avoid adjustments, and quickly find providers of affordable care. ©Sandra Long, Karen A. Monsen, David Pieczkiewicz, Julian Wolfson, Saif Khairat. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 12.10.2017.

  8. An intelligent approach to optimize the EDM process parameters using utility concept and QPSO algorithm

    Directory of Open Access Journals (Sweden)

    Chinmaya P. Mohanty

    2017-04-01

    Although significant research has gone into the field of electrical discharge machining (EDM), analysis of the machining efficiency of the process with different electrodes has not been adequately performed. Copper and brass are frequently used as electrode materials, but graphite can be used as a potential electrode material due to its high melting point and good electrical conductivity. In view of this, the present work compares the machinability of copper, graphite and brass electrodes while machining Inconel 718 superalloy. Taguchi's L27 orthogonal array has been employed to collect data for the study and to analyze the effect of machining parameters on performance measures. The important performance measures selected for this study are material removal rate, tool wear rate, surface roughness and radial overcut. The machining parameters considered for analysis are open circuit voltage, discharge current, pulse-on-time, duty factor, flushing pressure and electrode material. From the experimental analysis, it is observed that electrode material, discharge current and pulse-on-time are the important parameters for all the performance measures. The utility concept has been implemented to transform the multiple performance characteristics into an equivalent single performance characteristic. Non-linear regression analysis is carried out to develop a model relating the process parameters and the overall utility index. Finally, the quantum-behaved particle swarm optimization (QPSO) and particle swarm optimization (PSO) algorithms have been used to compare the optimal levels of the cutting parameters. Results demonstrate the elegance of QPSO in terms of convergence and computational effort. The optimal parametric setting obtained through both approaches is validated by conducting confirmation experiments.
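
    A minimal QPSO sketch in the standard quantum-behaved form (not the authors' exact implementation), minimizing a placeholder for the negative overall utility index.

```python
# Hedged sketch: quantum-behaved PSO with the usual mbest/local-attractor update.
import numpy as np

def neg_utility(x):                      # placeholder objective; lower is better
    return np.sum((x - 0.7) ** 2)

rng = np.random.default_rng(6)
n, dim, iters, beta = 25, 4, 300, 0.75   # beta is the contraction-expansion coefficient
X = rng.uniform(0, 1, (n, dim))
pbest = X.copy()
pcost = np.array([neg_utility(x) for x in X])
g = pbest[pcost.argmin()].copy()

for it in range(iters):
    mbest = pbest.mean(axis=0)           # mean best position of the swarm
    phi = rng.random((n, dim))
    p = phi * pbest + (1 - phi) * g      # local attractor of each particle
    u = rng.random((n, dim))
    sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
    X = np.clip(p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u), 0, 1)
    c = np.array([neg_utility(x) for x in X])
    better = c < pcost
    pbest[better], pcost[better] = X[better], c[better]
    g = pbest[pcost.argmin()].copy()

print("best setting:", g.round(3), "objective:", round(pcost.min(), 6))
```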

  9. Optimal Bilinear Control of Gross--Pitaevskii Equations

    KAUST Repository

    Hintermüller, Michael; Marahrens, Daniel; Markowich, Peter A.; Sparber, Christof

    2013-01-01

    A mathematical framework for optimal bilinear control of nonlinear Schrödinger equations of Gross--Pitaevskii type arising in the description of Bose--Einstein condensates is presented. The obtained results generalize earlier efforts found in the literature in several aspects. In particular, the cost induced by the physical workload over the control process is taken into account rather than the often used L^2- or H^1-norms for the cost of the control action. Well-posedness of the problem and existence of an optimal control are proved. In addition, the first order optimality system is rigorously derived. Also a numerical solution method is proposed, which is based on a Newton-type iteration, and used to solve several coherent quantum control problems.

  10. FREQUENCY ANALYSIS OF RLE-BLOCKS REPETITIONS IN THE SERIES OF BINARY CODES WITH OPTIMAL MINIMAX CRITERION OF AUTOCORRELATION FUNCTION

    Directory of Open Access Journals (Sweden)

    A. A. Kovylin

    2013-01-01

    Full Text Available The article describes the problem of searching for binary pseudo-random sequences with a quasi-ideal autocorrelation function, which are to be used in contemporary communication systems, including mobile and wireless data transfer interfaces. When synthesizing sets of binary sequences, the target is to form them based on a minimax criterion, by which a sequence is considered optimal for the intended application. In the course of the research, optimal sequences of order up to 52 were obtained, and an analysis of their run-length encoding (RLE) blocks was carried out. The analysis revealed regularities in the distribution of the number of runs of different lengths in codes that are optimal under the chosen criterion, which should make it possible to optimize the search for such codes in the future.
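    The minimax criterion here is the peak sidelobe level of the aperiodic autocorrelation function. A brief sketch follows, using an exhaustive search that is only feasible for short lengths (the paper's sequences reach order 52, far beyond brute force).

```python
import numpy as np
from itertools import product

def psl(seq):
    """Peak sidelobe level of the aperiodic autocorrelation of a +/-1 sequence."""
    s = np.asarray(seq, float)
    n = s.size
    return max(abs(np.dot(s[:n - k], s[k:])) for k in range(1, n))

def best_minimax(n):
    """Exhaustive search (feasible only for small n) for binary sequences
    minimizing the peak sidelobe level, i.e. the minimax criterion."""
    best, best_psl = None, float("inf")
    for bits in product((-1, 1), repeat=n):
        v = psl(bits)
        if v < best_psl:
            best, best_psl = bits, v
    return best, best_psl

print(best_minimax(11))  # length-11 Barker sequences reach PSL = 1
```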

  11. Analysis of optimal Reynolds number for developing laminar forced convection in double sine ducts based on entropy generation minimization principle

    International Nuclear Information System (INIS)

    Ko, T.H.

    2006-01-01

    In the present paper, the entropy generation and the optimal Reynolds number for developing forced convection in a double sine duct with various wall heat fluxes, a configuration that frequently occurs in plate heat exchangers, are studied based on the entropy generation minimization principle, using analytical thermodynamic analysis as well as numerical investigation. According to the thermodynamic analysis, a very simple expression for the optimal Reynolds number of the double sine duct is proposed as a function of mass flow rate, wall heat flux, working fluid and geometric dimensions. In the numerical simulations, the investigated Reynolds number (Re) covers the range from 86 to 2000 and the wall heat flux (q'') takes the values 160, 320 and 640 W/m². From the numerical simulation of the developing laminar forced convection in the double sine duct, the effect of Reynolds number on entropy generation in the duct has been examined, through which the optimal Reynolds number with minimal entropy generation is identified. The optimal Reynolds number obtained from the analytical thermodynamic analysis is compared with the one from the numerical solutions and is verified to yield a magnitude of entropy generation similar to the minimum predicted by the numerical simulations. The optimal analysis provided in the present paper gives valuable information for heat exchanger design, since the thermal system attains the least irreversibility and the best exergy utilization when the optimal Re is used according to practical design conditions.
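    For orientation, the textbook (Bejan-type) entropy generation rate per unit length of a round duct has the two competing terms shown below, and the optimal Reynolds number balances them. This is a generic illustration under assumed notation, not the expression the paper derives for the double sine duct.

```latex
\[
  \dot S'_{\mathrm{gen}}
  = \underbrace{\frac{q''^{2}\,\pi D_h^{2}}{k\,T^{2}\,\mathrm{Nu}}}_{\text{heat transfer}}
  + \underbrace{\frac{32\,\dot m^{3} f}{\pi^{2} \rho^{2}\,T\,D_h^{5}}}_{\text{friction}},
  \qquad
  \left.\frac{\partial \dot S'_{\mathrm{gen}}}{\partial \mathrm{Re}}\right|_{\mathrm{Re}_{\mathrm{opt}}} = 0 .
\]
```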

  12. Service network design of bike sharing systems analysis and optimization

    CERN Document Server

    Vogel, Patrick

    2016-01-01

    This monograph presents a tactical planning approach for service network design in metropolitan areas. Designing the service network requires the suitable aggregation of demand data as well as the anticipation of operational relocation decisions. To this end, an integrated approach of data analysis and mathematical optimization is introduced. The book also includes a case study based on real-world data to demonstrate the benefit of the proposed service network design approach. The target audience comprises primarily research experts in the field of traffic engineering, but the book may also be beneficial for graduate students.

  13. Discrete-continuous analysis of optimal equipment replacement

    OpenAIRE

    YATSENKO, Yuri; HRITONENKO, Natali

    2008-01-01

    In Operations Research, the equipment replacement process is usually modeled in discrete time. The optimal replacement strategies are found from discrete (or integer) programming problems, well known for their analytic and computational complexity. An alternative approach is represented by continuous-time vintage capital models that explicitly involve the equipment lifetime and are described by nonlinear integral equations. Then the optimal replacement is determined via the opt...
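    The discrete-time formulation contrasted here with continuous vintage models is typically a keep-or-replace dynamic program. A minimal sketch follows; the horizon, purchase price, maintenance, and salvage functions are invented for the example.

```python
from functools import lru_cache

T = 10                     # planning horizon (years)
MAX_AGE = 5                # equipment must be replaced by this age
PRICE = 100.0              # purchase price of new equipment
def maintain(age): return 10.0 * (age + 1)          # rises with age
def salvage(age): return max(0.0, 60.0 - 15.0 * age)

@lru_cache(maxsize=None)
def cost_to_go(t, age):
    """Minimum cost from year t to T with equipment of the given age."""
    if t == T:
        return -salvage(age)                         # sell at end of horizon
    keep = float("inf")
    if age < MAX_AGE:
        keep = maintain(age) + cost_to_go(t + 1, age + 1)
    replace = PRICE - salvage(age) + maintain(0) + cost_to_go(t + 1, 1)
    return min(keep, replace)

print(cost_to_go(0, 0))    # optimal lifetime cost starting with new equipment
```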

  14. Optimal coordination and control of posture and movements.

    Science.gov (United States)

    Johansson, Rolf; Fransson, Per-Anders; Magnusson, Måns

    2009-01-01

    This paper presents a theoretical model of stability and coordination of posture and locomotion, together with algorithms for continuous-time quadratic optimization of motion control. Explicit solutions to the Hamilton-Jacobi equation for optimal control of rigid-body motion are obtained by solving an algebraic matrix equation. The stability is investigated with Lyapunov function theory and it is shown that global asymptotic stability holds. It is also shown how optimal control and adaptive control may act in concert in the case of unknown or uncertain system parameters. The solution describes motion strategies of minimum effort and variance. The proposed optimal control is formulated to be suitable as a posture and movement model for experimental validation and verification. The combination of adaptive and optimal control makes this algorithm a candidate for coordination and control of functional neuromuscular stimulation as well as of prostheses. Validation examples with experimental data are provided.
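    In the linear-quadratic special case, the reduction of the Hamilton-Jacobi equation to an algebraic matrix equation is the familiar Riccati construction. The sketch below states that standard form under assumed notation, not the paper's rigid-body model.

```latex
% For dx/dt = Ax + Bu with quadratic cost, the Hamilton-Jacobi equation
% reduces to an algebraic Riccati equation:
\begin{align*}
  J &= \int_{0}^{\infty}\big(x^{\mathsf T}Qx + u^{\mathsf T}Ru\big)\,\mathrm{d}t,\\
  0 &= A^{\mathsf T}P + PA - PBR^{-1}B^{\mathsf T}P + Q,
  \qquad u^{*} = -R^{-1}B^{\mathsf T}P\,x,
\end{align*}
% with V(x) = x'Px a Lyapunov function certifying global asymptotic
% stability of the closed loop.
```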

  15. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus

    We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets[1][2]. However, the powerful rare-earth magnets are generally expensive, so both the scientific and industrial communities have devoted a lot of effort to developing suitable design methods. Even so, many magnet optimization algorithms are based on heuristic approaches[3...]. We illustrate the results for magnet design problems from different areas, such as electric motors/generators, beam focusing for particle accelerators and magnetic refrigeration devices.

  16. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    Science.gov (United States)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays quality plays a vital role in all products; hence, developments in manufacturing focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized with respect to three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and analysis of variance (ANOVA) is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
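    Grey relational analysis is mechanical enough to sketch. The snippet below shows the usual normalize / deviation / coefficient / grade pipeline with the customary distinguishing coefficient of 0.5; the response values are made up, not the paper's measurements.

```python
import numpy as np

def grey_relational_grade(responses, larger_better, zeta=0.5):
    """Normalize each response column, form deviation sequences, then
    average the grey relational coefficients into one grade per run.
    zeta is the distinguishing coefficient (0.5 is customary)."""
    X = np.asarray(responses, float)
    norm = np.empty_like(X)
    for j in range(X.shape[1]):
        col, rng = X[:, j], X[:, j].max() - X[:, j].min()
        norm[:, j] = (col - col.min()) / rng if larger_better[j] \
                     else (col.max() - col) / rng
    delta = 1.0 - norm                       # deviation from the ideal (=1)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                # grey relational grade

# Hypothetical rows = drilling runs; columns = (MRR, surface roughness).
grades = grey_relational_grade([[12.1, 3.2], [14.8, 2.9], [10.5, 2.1]],
                               larger_better=[True, False])
print(grades.argmax())  # run with the best multi-response compromise
```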

  17. Analysis and optimization of Love wave liquid sensors.

    Science.gov (United States)

    Jakoby, B; Vellekoop, M J

    1998-01-01

    Love wave sensors are highly sensitive microacoustic devices, which are well suited for liquid sensing applications thanks to the shear polarization of the wave. The sensing mechanism thereby relies on the mechanical (or acoustic) interaction of the device with the liquid. The successful utilization of Love wave devices for this purpose requires proper shielding to avoid unwanted electric interaction of the liquid with the wave and the transducers. In this work we describe the effects of this electric interaction and the proper design of a shield to prevent it. We present analysis methods, which illustrate the impact of the interaction and which help to obtain an optimized design of the proposed shield. We also present experimental results for devices that have been fabricated according to these design rules.

  18. Optimization of Spacecraft Rendezvous and Docking using Interval Analysis

    NARCIS (Netherlands)

    Van Kampen, E.; Chu, Q.P.; Mulder, J.A.

    2010-01-01

    This paper applies interval optimization to the fixed-time, multiple-impulse rendezvous and docking problem. Current methods for solving this type of optimization problem include, for example, genetic algorithms and gradient-based optimization. Unlike these methods, interval methods can guarantee that
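    The guarantee alluded to comes from interval arithmetic: evaluating a function on an interval encloses its whole range, so branch-and-bound can discard regions with certainty. A one-dimensional toy sketch follows, not the paper's rendezvous formulation.

```python
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def f_interval(x):               # interval extension of f(t) = t*t - 2t
    return iadd(imul(x, x), (-2.0 * x[1], -2.0 * x[0]))

def minimize(lo, hi, tol=1e-6):
    """Interval branch-and-bound: returns a guaranteed enclosure
    (lower bound, upper bound) of the global minimum of f."""
    best_hi, result, work = float("inf"), float("inf"), [(lo, hi)]
    while work:
        a, b = work.pop()
        flo, _ = f_interval((a, b))
        if flo > best_hi:        # box cannot contain the minimum: prune safely
            continue
        mid = 0.5 * (a + b)
        best_hi = min(best_hi, f_interval((mid, mid))[1])  # point sample
        if b - a < tol:
            result = min(result, flo)
        else:
            work += [(a, mid), (mid, b)]
    return result, best_hi

print(minimize(-5.0, 5.0))       # true minimum of t^2 - 2t is -1 at t = 1
```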

  19. More efficient optimization of long-term water supply portfolios

    Science.gov (United States)

    Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.

    2009-03-01

    The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
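    The control variate method is easy to illustrate in isolation. In the sketch below, a toy "simulation output" is paired with a correlated input whose mean is known exactly; simple functions stand in for the hydrologic-economic model.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=10_000)   # e.g., stochastic inflows

f = np.minimum(x, 2.0)        # noisy simulation output (toy "cost")
g = x                         # control variate with known mean E[g] = 1.0

b = np.cov(f, g)[0, 1] / g.var()               # near-optimal coefficient b*
cv_estimate = f.mean() - b * (g.mean() - 1.0)  # unbiased, lower variance

print(f.mean(), cv_estimate)
# Variance shrinks by roughly a factor of 1 - corr(f, g)**2, so the same
# precision needs far fewer model runs.
```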

  20. COST ANALYSIS AND OPTIMIZATION IN THE LOGISTIC SUPPLY CHAIN USING THE SIMPROLOGIC PROGRAM

    OpenAIRE

    Ilona MAŃKA; Adam MAŃKA

    2016-01-01

    This article characterizes the authors' SimProLOGIC program, version 2.1, which enables a cost analysis of individual links, as well as of the entire logistic supply chain (LSC). The article also presents an example analysis of the parameters that characterize the supplier of subsystems in the examined logistic chain, along with the results of an initial optimization, which makes it possible to improve the economic balance, as well as the level of customer servic...