WorldWideScience

Sample records for optimal experimental design

  1. Optimal experimental design with R

    CERN Document Server

    Rasch, Dieter; Verdooren, L R; Gebhardt, Albrecht

    2011-01-01

    Experimental design is often overlooked in the literature of applied and mathematical statistics: statistics is taught and understood as merely a collection of methods for analyzing data. Consequently, experimenters seldom think about optimal design, including prerequisites such as the sample size needed for a precise answer to an experimental question. Providing a concise introduction to experimental design theory, Optimal Experimental Design with R: introduces the philosophy of experimental design; provides an easy process for constructing experimental designs and calculating the necessary sample size using R programs; and teaches by example using a custom-made R package, OPDOE. Consisting of detailed, data-rich examples, this book introduces experimenters to the philosophy of experimentation, experimental design, and data collection. It gives researchers and statisticians guidance in the construction of optimum experimental designs using R programs, including sample size calculations, hypothesis te...

  2. Optimal Bayesian Experimental Design for Combustion Kinetics

    KAUST Repository

    Huan, Xun

    2011-01-04

    Experimental diagnostics play an essential role in the development and refinement of chemical kinetic models, whether for the combustion of common complex hydrocarbons or of emerging alternative fuels. Questions of experimental design—e.g., which variables or species to interrogate, at what resolution and under what conditions—are extremely important in this context, particularly when experimental resources are limited. This paper attempts to answer such questions in a rigorous and systematic way. We propose a Bayesian framework for optimal experimental design with nonlinear simulation-based models. While the framework is broadly applicable, we use it to infer rate parameters in a combustion system with detailed kinetics. The framework introduces a utility function that reflects the expected information gain from a particular experiment. Straightforward evaluation (and maximization) of this utility function requires Monte Carlo sampling, which is infeasible with computationally intensive models. Instead, we construct a polynomial surrogate for the dependence of experimental observables on model parameters and design conditions, with the help of dimension-adaptive sparse quadrature. Results demonstrate the efficiency and accuracy of the surrogate, as well as the considerable effectiveness of the experimental design framework in choosing informative experimental conditions.
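
    The utility function described above is typically estimated with a nested (double-loop) Monte Carlo estimator before any surrogate acceleration is applied. The sketch below illustrates that baseline estimator under a Gaussian noise model; `forward_model` and `sample_prior` are hypothetical stand-ins for the kinetic surrogate and the rate-parameter prior, not code from the paper.

```python
import numpy as np

def expected_information_gain(design, forward_model, sample_prior,
                              noise_std, n_outer=500, n_inner=500, seed=None):
    """Nested (double-loop) Monte Carlo estimate of the expected information gain
    for one candidate design, assuming additive Gaussian observation noise.
    `forward_model(theta, design)` and `sample_prior(n)` are hypothetical stand-ins
    for the kinetic surrogate and the rate-parameter prior."""
    rng = np.random.default_rng(seed)
    theta_outer = sample_prior(n_outer)          # parameters used to synthesize data
    theta_inner = sample_prior(n_inner)          # parameters used for the evidence
    eig = 0.0
    for theta_i in theta_outer:
        g_i = forward_model(theta_i, design)
        y_i = g_i + noise_std * rng.standard_normal(np.shape(g_i))
        # Log-likelihood of y_i under its own parameters (additive constants cancel
        # against the same constants inside the evidence term below).
        log_like_i = -0.5 * np.sum((y_i - g_i) ** 2) / noise_std**2
        # Log-evidence estimated by the inner Monte Carlo loop.
        log_like_inner = np.array(
            [-0.5 * np.sum((y_i - forward_model(t, design)) ** 2) / noise_std**2
             for t in theta_inner])
        log_evidence_i = np.logaddexp.reduce(log_like_inner) - np.log(n_inner)
        eig += (log_like_i - log_evidence_i) / n_outer
    return eig
```

    In practice this estimator would be evaluated for each candidate design (e.g., initial temperature and mixture composition) and the maximizer selected; the surrogate and sparse quadrature mentioned in the abstract replace the expensive `forward_model` calls.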

  3. Optimal experimental design for placement of boreholes

    Science.gov (United States)

    Padalkina, Kateryna; Bücker, H. Martin; Seidler, Ralf; Rath, Volker; Marquart, Gabriele; Niederau, Jan; Herty, Michael

    2014-05-01

    Drilling for deep resources is an expensive endeavor. Among the many problems, finding the optimal drilling location for boreholes is one of the most challenging questions. We contribute to this discussion by using a simulation-based assessment of possible future borehole locations. We study the problem of finding a new borehole location in a given geothermal reservoir in terms of a numerical optimization problem. In a geothermal reservoir, the temporal and spatial distribution of temperature and hydraulic pressure may be simulated using the coupled differential equations for heat transport and mass and momentum conservation for Darcy flow. Within this model, the permeability and thermal conductivity depend on the geological layers present in the subsurface model of the reservoir. In general, those values involve some uncertainty, making it difficult to predict the actual heat source in the ground. Within optimal experimental design, the question is at which location and to which depth to drill the borehole in order to estimate conductivity and permeability with minimal uncertainty. We introduce a measure for computing the uncertainty based on simulations of the coupled differential equations. The measure is based on the Fisher information matrix of temperature data obtained through the simulations. We assume that the temperature data is available within the full borehole. A minimization of the measure representing the uncertainty in the unknown permeability and conductivity parameters is performed to determine the optimal borehole location. We present the theoretical framework as well as numerical results for several 2D subsurface models including up to six geological layers. Also, the effect of unknown layers on the introduced measure is studied. Finally, to obtain a more realistic estimate of optimal borehole locations, we couple the optimization to a cost model for deep drilling problems.
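
    As a concrete illustration of the Fisher-information-based measure described above, the following sketch ranks a candidate borehole by a D-type criterion built from finite-difference sensitivities of simulated temperatures; `simulate_temperatures`, the nominal parameter vector and the noise level are hypothetical placeholders, not the authors' code.

```python
import numpy as np

def design_uncertainty(location, depth, simulate_temperatures, theta_nominal,
                       noise_std=0.5, eps=1e-3):
    """D-type uncertainty score for one candidate borehole: log-determinant of the
    inverse Fisher information of the temperature data, built from finite-difference
    sensitivities. `simulate_temperatures(theta, location, depth)` is a hypothetical
    wrapper around the coupled heat-transport/Darcy-flow solver; `theta` collects the
    unknown permeabilities and thermal conductivities."""
    theta_nominal = np.asarray(theta_nominal, dtype=float)
    base = simulate_temperatures(theta_nominal, location, depth)
    J = np.empty((base.size, theta_nominal.size))
    for k in range(theta_nominal.size):
        theta_pert = theta_nominal.copy()
        theta_pert[k] += eps
        J[:, k] = (simulate_temperatures(theta_pert, location, depth) - base) / eps
    fim = J.T @ J / noise_std**2                 # Fisher information, Gaussian noise
    _, logdet_fim = np.linalg.slogdet(fim)
    return -logdet_fim                           # = log det(FIM^{-1}); smaller is better

# Candidate boreholes would then be ranked by this score (optionally combined with a
# drilling-cost model) and the minimizer selected.
```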

  4. Optimal Experimental Design for Model Discrimination

    Science.gov (United States)

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  5. Achieving optimal SERS through enhanced experimental design.

    Science.gov (United States)

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is no single set of SERS conditions that is universal. This means that experimental optimisation for an optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered before going on to optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin-film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.

  6. Use of Experimental Design for Peuhl Cheese Process Optimization ...

    African Journals Online (AJOL)

    Use of Experimental Design for Peuhl Cheese Process Optimization. ... Journal of Applied Sciences and Environmental Management ... This work, based on the use of a central composite design, enables the determination of optimal process conditions concerning: leaf extract volume added (7 mL), heating temperature ...

  7. Optimal Experimental Design for Large-Scale Bayesian Inverse Problems

    KAUST Repository

    Ghattas, Omar

    2014-01-01

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation

  8. Fast Bayesian optimal experimental design and its applications

    KAUST Repository

    Long, Quan

    2015-01-01

    We summarize our Laplace method and multilevel method for accelerating the computation of the expected information gain in Bayesian Optimal Experimental Design (OED). The Laplace method is a widely used method to approximate an integral

  9. Bayesian optimal experimental design for the Shock-tube experiment

    International Nuclear Information System (INIS)

    Terejanu, G; Bryant, C M; Miki, K

    2013-01-01

    The sequential optimal experimental design, formulated as an information-theoretic sensitivity analysis, is applied to the ignition delay problem using real experimental data. The optimal design is obtained by maximizing the statistical dependence between the model parameters and observables, which is quantified in this study using mutual information. This is naturally posed in the Bayesian framework. The study shows that by monitoring the information gain after each measurement update, one can design a stopping criterion for the experimental process which gives a minimal set of experiments to efficiently learn the Arrhenius parameters.
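
    A minimal sketch of the stopping rule described above, assuming a scalar observable (e.g., a measured ignition delay), a Gaussian error model and a posterior tracked on a discrete grid of Arrhenius parameters; `run_experiment` and `forward_model` are hypothetical placeholders, not the study's code.

```python
import numpy as np

def run_sequential_design(candidate_designs, run_experiment, forward_model,
                          theta_grid, prior_weights, noise_std, tol=0.05):
    """Keep performing experiments while each Bayesian update still yields more than
    `tol` nats of information gain. The posterior over the Arrhenius parameters is
    tracked on a discrete grid; `run_experiment(d)` and `forward_model(theta, d)` are
    hypothetical placeholders returning a scalar observable."""
    weights = np.asarray(prior_weights, dtype=float)
    weights = weights / weights.sum()
    for d in candidate_designs:
        y = run_experiment(d)
        log_like = np.array([-0.5 * ((y - forward_model(th, d)) / noise_std) ** 2
                             for th in theta_grid])
        new_weights = weights * np.exp(log_like - log_like.max())
        new_weights /= new_weights.sum()
        # Information gained by this measurement: KL(new posterior || old posterior).
        mask = new_weights > 0
        gain = np.sum(new_weights[mask] * np.log(new_weights[mask] / weights[mask]))
        weights = new_weights
        if gain < tol:                     # further experiments add little information
            break
    return weights
```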

  10. Optimizing Nuclear Reaction Analysis (NRA) using Bayesian Experimental Design

    International Nuclear Information System (INIS)

    Toussaint, Udo von; Schwarz-Selinger, Thomas; Gori, Silvio

    2008-01-01

    Nuclear Reaction Analysis with ³He holds the promise of measuring deuterium depth profiles up to large depths. However, the extraction of the depth profile from the measured data is an ill-posed inversion problem. Here we demonstrate how Bayesian Experimental Design can be used to optimize the number of measurements as well as the measurement energies to maximize the information gain. Comparison of the inversion properties of the optimized design with standard settings reveals huge possible gains. Application of the posterior sampling method makes it possible to optimize the experimental settings interactively during the measurement process.

  11. Optimal Experimental Design of Furan Shock Tube Kinetic Experiments

    KAUST Repository

    Kim, Daesang

    2015-01-07

    A Bayesian optimal experimental design methodology has been developed and applied to refine the rate coefficients of elementary reactions in Furan combustion. Furans are considered as potential renewable fuels. We focus on the Arrhenius rates of Furan + OH ↔ Furyl-2 + H2O and Furan + OH ↔ Furyl-3 + H2O, and rely on the OH consumption rate as the experimental observable. A polynomial chaos surrogate is first constructed using an adaptive pseudo-spectral projection algorithm. The PC surrogate is then exploited in conjunction with a fast estimation of the expected information gain in order to determine the optimal design in the space of initial temperatures and OH concentrations.

  12. Optimal Experimental Design for Large-Scale Bayesian Inverse Problems

    KAUST Repository

    Ghattas, Omar

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial-based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP-based approach is used to estimate the expected information gain in the proposed experiments, and to select the best experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and for extending the experimental design methodology to cases where the control parameters are noisy.

  13. Optimizing an experimental design for an electromagnetic experiment

    Science.gov (United States)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods like genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations has the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
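
    The multi-objective selection step at the core of such an algorithm is a Pareto-dominance test over the candidate surveys. The sketch below shows a generic version of that test for objectives to be minimized; the example numbers are purely illustrative and not taken from the study.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of the non-dominated candidate surveys, assuming every objective is to
    be minimized (e.g., a resolution misfit and a survey-cost proxy). `objectives` is
    an (n_designs, n_objectives) array."""
    obj = np.asarray(objectives, dtype=float)
    keep = np.ones(obj.shape[0], dtype=bool)
    for i in range(obj.shape[0]):
        # Design j dominates design i if it is no worse everywhere and better somewhere.
        dominates_i = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        keep[i] = not np.any(dominates_i)
    return np.flatnonzero(keep)

# Illustrative scores for four hypothetical surveys (misfit proxy, receiver count):
scores = [[0.8, 12], [0.5, 20], [0.9, 8], [0.5, 25]]
print(pareto_front(scores))   # -> [0 1 2]; design 3 is dominated by design 1
```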

  14. Fast Bayesian optimal experimental design and its applications

    KAUST Repository

    Long, Quan

    2015-01-07

    We summarize our Laplace method and multilevel method for accelerating the computation of the expected information gain in Bayesian Optimal Experimental Design (OED). The Laplace method is a widely used method for approximating integrals in statistics. We analyze this method in the context of optimal Bayesian experimental design and extend it from the classical scenario, where a single dominant mode of the parameters can be completely determined by the experiment, to scenarios where a non-informative parametric manifold exists. We show that by carrying out this approximation the estimation of the expected Kullback-Leibler divergence can be significantly accelerated. While the Laplace method requires a concentration of measure, the multilevel Monte Carlo method can be used to tackle the problem when such concentration is lacking. We show some initial results on this approach. The developed methodologies have been applied to various sensor deployment problems, e.g., impedance tomography and seismic source inversion.
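
    When the Laplace approximation holds, both prior and posterior are treated as Gaussian and the Kullback-Leibler divergence entering the information gain has a closed form. The sketch below evaluates that closed-form expression; treating the MAP estimate and inverse Hessian as the posterior moments is an illustrative simplification, not the full method of the abstract.

```python
import numpy as np

def gaussian_kl(mu_post, cov_post, mu_prior, cov_prior):
    """Closed-form D_KL( N(mu_post, cov_post) || N(mu_prior, cov_prior) ). With a
    Laplace approximation, mu_post is the MAP estimate and cov_post the inverse
    Hessian of the negative log-posterior, giving a fast proxy for the information
    gain of one realized experiment (illustrative sketch only)."""
    mu_post, mu_prior = np.asarray(mu_post, float), np.asarray(mu_prior, float)
    k = mu_post.size
    diff = mu_prior - mu_post
    cov_prior_inv = np.linalg.inv(cov_prior)
    trace_term = np.trace(cov_prior_inv @ cov_post)
    quad_term = diff @ cov_prior_inv @ diff
    _, logdet_prior = np.linalg.slogdet(cov_prior)
    _, logdet_post = np.linalg.slogdet(cov_post)
    return 0.5 * (trace_term + quad_term - k + logdet_prior - logdet_post)
```

    The expected information gain is then obtained by averaging this quantity over prior draws of the parameters and synthetic data.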

  15. Bayesian optimal experimental design for priors of compact support

    KAUST Repository

    Long, Quan

    2016-01-08

    In this study, we optimize the experimental setup computationally by optimal experimental design (OED) in a Bayesian framework. We approximate the posterior probability density functions (pdf) using truncated Gaussian distributions in order to account for the bounded domain of the uniform prior pdf of the parameters. The underlying Gaussian distribution is obtained in the spirit of the Laplace method: the mode is chosen as the maximum a posteriori (MAP) estimate, and the covariance is chosen as the negative inverse of the Hessian of the misfit function at the MAP estimate. The model-related entities are obtained from a polynomial surrogate. The optimality, quantified by the information gain measures, can be estimated efficiently by a rejection sampling algorithm against the underlying Gaussian probability distribution, rather than against the true posterior. This approach offers a significant error reduction when the magnitude of the invariants of the posterior covariance is comparable to the size of the bounded domain of the prior. We demonstrate the accuracy and superior computational efficiency of our method for shock-tube experiments aiming to measure the model parameters of a key reaction which is part of the complex kinetic network describing hydrocarbon oxidation. In the experiments, the initial temperature and fuel concentration are optimized with respect to the expected information gain in the estimation of the parameters of the target reaction rate. We show that the expected information gain surface can change its shape dramatically according to the level of noise introduced into the synthetic data. The information that can be extracted from the data saturates as a logarithmic function of the number of experiments, and few experiments are needed when they are conducted at the optimal experimental design conditions.

  16. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki; Long, Quan; Scavino, Marco; Tempone, Raul

    2015-01-01

    Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high-dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measure required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.
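
    The following sketch shows one simple way to arrange the nested expected-information-gain integral as a multilevel telescoping sum, with the number of inner (evidence) samples doubling per level and the coarse term of each correction reusing half of the fine-level inner samples. It is a schematic illustration only; `forward_model`, `sample_prior` and the per-level sample counts are hypothetical, and the actual method additionally chooses those counts from estimated level variances and costs.

```python
import numpy as np

def mlmc_eig(design, forward_model, sample_prior, noise_var,
             n_outer_per_level, m_inner_base=4, seed=None):
    """Schematic multilevel estimator for the nested expected-information-gain
    integral. Level l uses m_inner_base * 2**l inner samples for the evidence; the
    coarse term of each level correction reuses the first half of the same inner
    samples, so the telescoping sum targets the finest-level estimator while most
    samples are spent on the cheap coarse levels. `forward_model` and `sample_prior`
    are hypothetical stand-ins."""
    rng = np.random.default_rng(seed)

    def log_evidence(y, thetas):
        log_like = np.array(
            [-0.5 * np.sum((y - forward_model(t, design)) ** 2) / noise_var
             for t in thetas])
        return np.logaddexp.reduce(log_like) - np.log(len(thetas))

    def level_sample(level):
        theta = sample_prior(1)[0]
        g = forward_model(theta, design)
        y = g + np.sqrt(noise_var) * rng.standard_normal(np.shape(g))
        log_like_true = -0.5 * np.sum((y - g) ** 2) / noise_var
        inner = sample_prior(m_inner_base * 2 ** level)
        fine = log_like_true - log_evidence(y, inner)
        if level == 0:
            return fine
        coarse = log_like_true - log_evidence(y, inner[: len(inner) // 2])
        return fine - coarse                       # level-l correction term

    # Telescoping sum of per-level sample averages.
    return sum(np.mean([level_sample(l) for _ in range(n_l)])
               for l, n_l in enumerate(n_outer_per_level))
```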

  17. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-01-07

    Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high-dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measure required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.

  18. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan; Motamed, Mohammad; Tempone, Raul

    2016-01-01

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ the Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several orders of magnitude, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.
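
    The scaling matrix mentioned above can be illustrated by a symmetric Jacobi (diagonal) rescaling of the Hessian, which is one standard way to control the condition number when parameters span several orders of magnitude; the numbers below are hypothetical.

```python
import numpy as np

def jacobi_scale(hessian):
    """Symmetric diagonal (Jacobi) scaling of a Hessian whose parameters span several
    orders of magnitude: returns H_scaled = D @ H @ D with D = diag(1/sqrt(|H_ii|))."""
    h = np.asarray(hessian, dtype=float)
    D = np.diag(1.0 / np.sqrt(np.abs(np.diag(h))))
    return D @ h @ D, D

# Hypothetical, badly scaled 2x2 Hessian (e.g., source location vs. frequency terms):
H = np.array([[1e8, 1e3], [1e3, 4e-2]])
H_scaled, D = jacobi_scale(H)
print(np.linalg.cond(H), np.linalg.cond(H_scaled))   # condition number drops sharply
```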

  19. Fast Bayesian optimal experimental design for seismic source inversion

    KAUST Repository

    Long, Quan

    2015-07-01

    We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ the Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several orders of magnitude, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem. © 2015 Elsevier B.V.

  20. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan

    2016-01-06

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ the Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several orders of magnitude, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.

  1. Simulation-based optimal Bayesian experimental design for nonlinear systems

    KAUST Repository

    Huan, Xun

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.

  2. Simulation-based optimal Bayesian experimental design for nonlinear systems

    KAUST Repository

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical

  3. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-05-12

    Experimental design can be vital when experiments are resource-intensive and time-consuming. In this work, we carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data about the model parameters. One of the major difficulties in evaluating the expected information gain is that it naturally involves nested integration over a possibly high-dimensional domain. We use the Multilevel Monte Carlo (MLMC) method to accelerate the computation of the nested high-dimensional integral. The advantages are twofold. First, MLMC can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the MLMC method imposes fewer assumptions, such as the asymptotic concentration of posterior measures, required for instance by the Laplace approximation (LA). We test the MLMC method using two numerical examples. The first example is the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Poisson equation. We place the sensors in the locations where the pressure is measured, and we model the conductivity field as a piecewise constant random vector with two parameters. The second one is a chemical Enhanced Oil Recovery (EOR) core-flooding experiment assuming homogeneous permeability. We measure the cumulative oil recovery, from a horizontal core flooded by water, surfactant and polymer, for different injection rates. The model parameters consist of the endpoint relative permeabilities, the residual saturations and the relative permeability exponents for the three phases: water, oil and

  4. Experimental design applied to the optimization and partial ...

    African Journals Online (AJOL)

    The objective of this work was to optimize the medium composition for maximum pectin-methylesterase (PME) production from a newly isolated strain of Penicillium brasilianum by submerged fermentation. A Plackett-Burman design was first used for the screening of the most important factors, followed by a 2³ full ...

  5. Surface laser marking optimization using an experimental design approach

    Science.gov (United States)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τ_pulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using Design of Experiments (DOE) methods: Taguchi methodology and response surface methodology (RSM). A design is first created using the MINITAB program, and then the laser marking process is performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.

  6. On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs.

    Science.gov (United States)

    Sánchez, M S; Sarabia, L A; Ortiz, M C

    2012-11-19

    Experimental designs for a given task should be selected on the basis of the problem being solved and of some criteria that measure their quality. There are several such criteria because there are several aspects to be taken into account when making a choice. The most used criteria are probably the so-called alphabetical optimality criteria (for example, the A-, E-, and D-criteria related to the joint estimation of the coefficients, or the I- and G-criteria related to the prediction variance). Selecting a proper design to solve a problem implies finding a balance among these several criteria that measure the performance of the design in different aspects. Technically this is a problem of multi-criteria optimization, which can be tackled from different views. The approach presented here addresses the problem in its real vector nature, so that ad hoc experimental designs are generated with an algorithm based on evolutionary algorithms to find the Pareto-optimal front. There is no theoretical limit to the number of criteria that can be studied and, contrary to other approaches, not just one experimental design is computed but a set of experimental designs, all of them with the property of being Pareto-optimal in the criteria needed by the user. Besides, the use of an evolutionary algorithm makes it possible to search in both continuous and discrete domains and avoids the need for a set of candidate points, as is usual in exchange algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
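
    For reference, the alphabetical criteria listed above are simple functions of the eigenvalues of the information matrix of a design. The sketch below computes D-, A- and E-type scores for a small illustrative design; it shows the single-criterion quantities that the Pareto approach trades off jointly, not the evolutionary algorithm itself.

```python
import numpy as np

def alphabetical_criteria(X):
    """Classical single-number design scores computed from the information matrix
    X.T @ X of a model matrix X (rows = runs, columns = model terms). Larger D and E
    and smaller A are better."""
    eigvals = np.linalg.eigvalsh(X.T @ X)
    return {
        "D": float(np.prod(eigvals)),        # determinant: joint estimation precision
        "A": float(np.sum(1.0 / eigvals)),   # trace of the inverse: average variance
        "E": float(eigvals.min()),           # smallest eigenvalue: worst direction
    }

# Illustrative 2^2 factorial with a centre point, model terms (1, x1, x2):
X = np.array([[1, -1, -1], [1, 1, -1], [1, -1, 1], [1, 1, 1], [1, 0, 0]], dtype=float)
print(alphabetical_criteria(X))
```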

  7. Optimization of fast disintegration tablets using pullulan as diluent by central composite experimental design

    OpenAIRE

    Patel, Dipil; Chauhan, Musharraf; Patel, Ravi; Patel, Jayvadan

    2012-01-01

    The objective of this work was to apply a central composite experimental design to investigate the main and interaction effects of formulation parameters in optimizing a novel fast-disintegration tablet formulation using pullulan as diluent. A face-centered central composite experimental design was employed to optimize the fast-disintegration tablet formulation. The variables studied were the concentration of diluent (pullulan, X1), superdisintegrant (sodium starch glycolate, X2), and direct compression aid ...

  8. A projection method for under determined optimal experimental designs

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2014-01-01

    A new implementation, based on the Laplace approximation, was developed in (Long, Scavino, Tempone, & Wang 2013) to accelerate the estimation of the post-experimental expected information gains in the model parameters and predictive quantities of interest. A closed-form approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general cases where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback-Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters which are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear underdetermined numerical examples.

  9. A projection method for under determined optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-01-09

    A new implementation, based on the Laplace approximation, was developed in (Long, Scavino, Tempone, & Wang 2013) to accelerate the estimation of the post-experimental expected information gains in the model parameters and predictive quantities of interest. A closed-form approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general cases where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback-Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters which are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear underdetermined numerical examples.

  10. Optimization of natural lipstick formulation based on pitaya (Hylocereus polyrhizus) seed oil using D-optimal mixture experimental design.

    Science.gov (United States)

    Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah

    2014-10-16

    The D-optimal mixture experimental design was employed to optimize the melting point of a natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components, pitaya seed oil (10%-25% w/w), virgin coconut oil (25%-45% w/w), beeswax (5%-25% w/w), candelilla wax (1%-5% w/w) and carnauba wax (1%-5% w/w), was investigated with respect to the melting point properties of the lipstick formulation. The D-optimal mixture experimental design was applied to optimize the properties of the lipstick by focusing on the melting point with respect to the above influencing components. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be depicted as a quadratic function of the main components of the lipstick. The best combination of each significant factor determined by the D-optimal mixture design was established to be pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With respect to these factors, a melting point of 46.0 °C was observed experimentally, similar to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response (melting point), its function being related to heat endurance. The quadratic polynomial model sufficiently fit the experimental data.
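
    The quadratic dependence of the response on the mixture components referred to above is conventionally written as a Scheffé quadratic mixture model (linear blending terms plus pairwise interactions, no intercept). The sketch below builds that model matrix and indicates a least-squares fit; the design runs and measured melting points are user-supplied placeholders, not data from the paper.

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic_matrix(X):
    """Model matrix for a Scheffé quadratic mixture model: the linear blending terms
    x_i plus all pairwise interaction terms x_i * x_j, with no intercept because the
    component proportions sum to one. X has one row per formulation and one column
    per mixture component."""
    X = np.asarray(X, dtype=float)
    pairs = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

# Illustrative use with user-supplied data (placeholders, not values from the paper):
# coeffs, *_ = np.linalg.lstsq(scheffe_quadratic_matrix(design_runs),
#                              melting_points, rcond=None)
# predicted = scheffe_quadratic_matrix(new_blends) @ coeffs
```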

  11. Design of passive directional acoustic devices using Topology Optimization - from method to experimental validation

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Fernandez Grande, Efren

    2016-01-01

    emission in two dimensions and is experimentally validated using three dimensional prints of the optimized designs. The emitted fields exhibit a level difference of at least 15 dB on axis relative to the off-axis directions, over frequency bands of approximately an octave. It is demonstrated to be possible...

  12. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    Science.gov (United States)

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors, x₁ and x₂: a formulation factor (the amount of magnesium stearate) and a processing factor (mixing time), respectively. Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g with 2.76 min mixing time for a 100 tablet batch and 1.54 g with 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g with 7.99 min for a 100 tablet batch and 1.54 g with 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could thus be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence for obtaining optimum formulations, allowing for a systematic and reliable experimental design method.

  13. Optimization of Natural Lipstick Formulation Based on Pitaya (Hylocereus polyrhizus) Seed Oil Using D-Optimal Mixture Experimental Design

    Directory of Open Access Journals (Sweden)

    Norsuhaili Kamairudin

    2014-10-01

    The D-optimal mixture experimental design was employed to optimize the melting point of a natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components, pitaya seed oil (10%–25% w/w), virgin coconut oil (25%–45% w/w), beeswax (5%–25% w/w), candelilla wax (1%–5% w/w) and carnauba wax (1%–5% w/w), was investigated with respect to the melting point properties of the lipstick formulation. The D-optimal mixture experimental design was applied to optimize the properties of the lipstick by focusing on the melting point with respect to the above influencing components. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be depicted as a quadratic function of the main components of the lipstick. The best combination of each significant factor determined by the D-optimal mixture design was established to be pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With respect to these factors, a melting point of 46.0 °C was observed experimentally, similar to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response (melting point), its function being related to heat endurance. The quadratic polynomial model sufficiently fit the experimental data.

  14. Optimal experimental design in an epidermal growth factor receptor signalling and down-regulation model.

    Science.gov (United States)

    Casey, F P; Baird, D; Feng, Q; Gutenkunst, R N; Waterfall, J J; Myers, C R; Brown, K S; Cerione, R A; Sethna, J P

    2007-05-01

    We apply the methods of optimal experimental design to a differential equation model for epidermal growth factor receptor signalling, trafficking and down-regulation. The model incorporates the role of a recently discovered protein complex made up of the E3 ubiquitin ligase, Cbl, the guanine exchange factor (GEF), Cool-1 (beta-Pix) and the Rho family G protein Cdc42. The complex has been suggested to be important in disrupting receptor down-regulation. We demonstrate that the model interactions can accurately reproduce the experimental observations, that they can be used to make predictions with accompanying uncertainties, and that we can apply ideas of optimal experimental design to suggest new experiments that reduce the uncertainty on unmeasurable components of the system.

  15. Application of Iterative Robust Model-based Optimal Experimental Design for the Calibration of Biocatalytic Models

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer

    2017-01-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...

  16. Experimental design and multicriteria decision making methods for the optimization of ice cream composition

    Directory of Open Access Journals (Sweden)

    Cristian Rojas

    2012-03-01

    The aim of the present work was to optimize the sensorial and technological features of ice cream. The experimental work was performed in two stages: (1) optimization of lactose enzymatic hydrolysis, and (2) optimization of the process and product. For the first stage, a complete factorial design was developed and optimized using both response surface methodology and the steepest ascent method. In the second stage, a mixture design was performed, combining the process variables. The product with the best sensorial acceptance, high yield and low cost was selected. The acceptance of the product was assessed by an untrained tasters' panel. As a main result, the sensorial and technological features of the final product were improved, establishing the optimum parameters for its elaboration.

  17. Experimental design: Case studies of diagnostics optimization for W7-X

    International Nuclear Information System (INIS)

    Dreier, H.; Dinklage, A.; Fischer, R.; Hartfuss, H.-J.; Hirsch, M.; Kornejew, P.; Pasch, E.; Turkin, Yu.

    2005-01-01

    The preparation of diagnostics for Wendelstein 7-X is accompanied by diagnostics simulations and optimization. Starting from the physical objectives, the design of diagnostics should incorporate predictive modelling (e.g. transport modelling) and simulations of the respective measurements. Although technical constraints govern design considerations, it appears that several design parameters of different diagnostics can be optimized. However, a general formulation for fusion diagnostics design in terms of optimization is lacking. In this paper, first case studies of Bayesian experimental design aiming at applications in W7-X diagnostics preparation are presented. The information gain of a measurement is formulated as a utility function which is expressed in terms of the Kullback-Leibler divergence. Then, the expected range of data is included, and the resulting expected utility represents the objective for optimization. Bayesian probability theory gives a framework that allows an appropriate formulation of the design problem in terms of probability distribution functions. Results are obtained for the information gain from interferometry and for the design of polychromators for Thomson scattering. For interferometry, studies of the choice of lines of sight for optimum signal and for the reproduction of gradient positions are presented for circular, elliptical and W7-X geometries. For Thomson scattering, the design of filter transmissions for density and temperature measurements is discussed. (author)

  18. Entropy-Based Experimental Design for Optimal Model Discrimination in the Geosciences

    Directory of Open Access Journals (Sweden)

    Wolfgang Nowak

    2016-11-01

    Choosing between competing models lies at the heart of scientific work, and is a frequent motivation for experimentation. Optimal experimental design (OD) methods maximize the benefit of experiments towards a specified goal. We advance and demonstrate an OD approach to maximize the information gained towards model selection. We make use of so-called model choice indicators, which are random variables with an expected value equal to Bayesian model weights. Their uncertainty can be measured with Shannon entropy. Since the experimental data are still random variables in the planning phase of an experiment, we use mutual information (the expected reduction in Shannon entropy) to quantify the information gained from a proposed experimental design. For implementation, we use the Preposterior Data Impact Assessor framework (PreDIA), because it is free of the lower-order approximations of mutual information often found in the geosciences. In comparison to other studies in statistics, our framework is not restricted to sequential design or to discrete-valued data, and it can handle measurement errors. As an application example, we optimize an experiment about the transport of contaminants in clay, featuring the problem of choosing between competing isotherms to describe sorption. We compare the results of optimizing towards maximum model discrimination with an alternative OD approach that minimizes the overall predictive uncertainty under model choice uncertainty.
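
    A minimal sketch of the design criterion described above: the mutual information between the model-choice indicator and the not-yet-collected data, estimated by Monte Carlo as the prior entropy of the model weights minus their expected posterior entropy. For brevity the sketch treats each competing model (e.g., isotherm) as a deterministic predictor of a 1-D observation vector and assumes Gaussian measurement error, so it omits the parametric uncertainty that PreDIA propagates; all function names are hypothetical.

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def model_choice_mutual_information(design, models, prior_weights, noise_std,
                                    n_sim=2000, seed=None):
    """Monte Carlo estimate of the mutual information between the model-choice
    indicator and the future data: prior entropy of the model weights minus their
    expected posterior entropy. Each element of `models` is a callable m(design)
    returning a 1-D array of predicted observations; Gaussian measurement error is
    assumed and within-model parametric uncertainty is ignored for brevity."""
    rng = np.random.default_rng(seed)
    w = np.asarray(prior_weights, dtype=float)
    w = w / w.sum()
    predictions = np.array([m(design) for m in models])     # one row per model
    expected_posterior_entropy = 0.0
    for _ in range(n_sim):
        k = rng.choice(len(models), p=w)                     # draw a "true" model
        y = predictions[k] + noise_std * rng.standard_normal(predictions[k].shape)
        log_like = -0.5 * np.sum((y - predictions) ** 2, axis=1) / noise_std**2
        post = w * np.exp(log_like - log_like.max())
        post = post / post.sum()
        expected_posterior_entropy += shannon_entropy(post) / n_sim
    return shannon_entropy(w) - expected_posterior_entropy
```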

  19. Optimization of Protease Production from Aspergillus Oryzae Sp. Using Box-Behnken Experimental Design

    Directory of Open Access Journals (Sweden)

    G. Srinu Babu

    2007-01-01

    Protease production by Aspergillus oryzae was optimized in shake-flask cultures using a Box-Behnken experimental design. An empirical model was developed through response surface methodology to describe the relationship between the tested variables (peptone, glucose, soyabean meal and pH). Maximum enzyme activity was attained with peptone at 4 g/L, glucose at 6 g/L, temperature at 30 °C and pH at 10. Experimental verification of the model showed a validation of 95%, which is a more than 3-fold increase compared to the basal medium.

  20. Issues and recent advances in optimal experimental design for site investigation (Invited)

    Science.gov (United States)

    Nowak, W.

    2013-12-01

    This presentation provides an overview of issues and recent advances in model-based experimental design for site exploration. The addressed issues and advances are (1) how to provide an adequate envelope to prior uncertainty, (2) how to define the information needs in a task-oriented manner, (3) how to measure the expected impact of a data set that is not yet available but only planned to be collected, and (4) how to perform best the optimization of the data collection plan. Among other shortcomings of the state of the art, it is identified that there is a lack of demonstrator studies where exploration schemes based on expert judgment are compared to exploration schemes obtained by optimal experimental design. Such studies will be necessary to address the often-voiced concern that experimental design is an academic exercise with little improvement potential over the well-trained gut feeling of field experts. When addressing this concern, a specific focus has to be given to uncertainty in model structure, parameterizations and parameter values, and to the related surprises that data often bring about in field studies, but never in synthetic-data-based studies. The background of this concern is that, initially, conceptual uncertainty may be so large that surprises are the rule rather than the exception. In such situations, field experts have a large body of experience in handling the surprises, and expert judgment may be good enough compared to meticulous optimization based on a model that is about to be falsified by the incoming data. In order to meet surprises accordingly and adapt to them, there needs to be a sufficient representation of conceptual uncertainty within the models used. Also, it is useless to optimize an entire design under this initial range of uncertainty. Thus, the goal setting of the optimization should include the objective to reduce conceptual uncertainty. A possible way out is to upgrade experimental design theory towards real-time interaction

  1. An Effective Experimental Optimization Method for Wireless Power Transfer System Design Using Frequency Domain Measurement

    Directory of Open Access Journals (Sweden)

    Sangyeong Jeong

    2017-10-01

    This paper proposes an experimental optimization method for a wireless power transfer (WPT) system. The power transfer characteristics of a WPT system with arbitrary loads and various types of coupling and compensation networks can be extracted by frequency-domain measurements. The various performance parameters of the WPT system, such as input real/imaginary/apparent power, power factor, efficiency, output power and voltage gain, can be accurately extracted in the frequency domain by a single passive measurement. Subsequently, the design parameters can be efficiently tuned by separating the overall design steps into two parts. The extracted performance parameters of the WPT system were validated with time-domain experiments.

  2. Fermilab D-0 Experimental Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    International Nuclear Information System (INIS)

    Krstulovich, S.F.

    1987-01-01

    This report is developed as part of the Fermilab D-0 Experimental Facility Project Title II Design Documentation Update. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis

  3. Optimization Method of a Low Cost, High Performance Ceramic Proppant by Orthogonal Experimental Design

    Science.gov (United States)

    Zhou, Y.; Tian, Y. M.; Wang, K. Y.; Li, G.; Zou, X. W.; Chai, Y. S.

    2017-09-01

    This study focused on an optimization method for a ceramic proppant material with both low cost and high performance that met the requirements of the Chinese Petroleum and Gas Industry Standard (SY/T 5108-2006). The orthogonal experimental design L9(3⁴) was employed to study the significance sequence of three factors: the weight ratio of white clay to bauxite, the dolomite content and the sintering temperature. For the crush resistance, both the range analysis and the variance analysis showed that the optimal experimental condition was a weight ratio of white clay to bauxite of 3/7, a dolomite content of 3 wt.% and a sintering temperature of 1350 °C. For the bulk density, the most important factor was the sintering temperature, followed by the dolomite content, and then the ratio of white clay to bauxite.
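
    For illustration, the sketch below writes out the textbook L9(3⁴) orthogonal array and the range analysis used to rank factor influence; the response values (e.g., the measured crush resistance for the nine runs) would be supplied by the experimenter and are not reproduced here.

```python
import numpy as np

# Textbook L9(3^4) orthogonal array, levels coded 0/1/2. The study assigns three of
# the four columns to the white-clay/bauxite ratio, the dolomite content and the
# sintering temperature.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def range_analysis(array, response):
    """Taguchi range analysis: for each column, average the response at every level
    and report the range (max minus min of the level means); a larger range marks a
    more influential factor. `response` holds the nine measured values."""
    response = np.asarray(response, dtype=float)
    ranges = []
    for col in array.T:
        level_means = [response[col == lvl].mean() for lvl in np.unique(col)]
        ranges.append(max(level_means) - min(level_means))
    return ranges
```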

  4. Factorial experimental design intended for the optimization of the alumina purification conditions

    Science.gov (United States)

    Brahmi, Mounaouer; Ba, Mohamedou; Hidri, Yassine; Hassen, Abdennaceur

    2018-04-01

    The objective of this study was to determine the optimal conditions for the removal of some impurities associated with alumina by using the experimental design methodology. Three alumina qualities of different origins were investigated under the same conditions. Full factorial designs were applied to the samples of the different alumina qualities to follow the removal rates of sodium oxide. A factorial experimental design was thus developed to describe the elimination of the sodium oxide associated with the alumina. The experimental results showed that chemical analysis by XRF prior to treatment of the samples provided a first idea of the prevailing impurities. It appeared that sodium oxide constituted the largest amount among all impurities. After application of the experimental design, analysis of the effects of the different factors and their interactions showed that, to obtain a better result, the alumina quantity investigated should be reduced and the stirring time increased for the first two samples, whereas it was necessary to increase the alumina quantity in the case of the third sample. To expand and improve this research, all existing impurities should be taken into account, since it was found during this investigation that the levels of some impurities increased after the treatment.

  5. Experimental design for optimizing MALDI-TOF-MS analysis of palladium complexes

    Directory of Open Access Journals (Sweden)

    Rakić-Kostić Tijana M.

    2017-01-01

    This paper presents the optimization of matrix-assisted laser desorption/ionization (MALDI) time-of-flight (TOF) mass spectrometer (MS) instrumental parameters for the analysis of the chloro(2,2′,2″-terpyridine)palladium(II) chloride dihydrate complex applying design of experiments (DoE) methodology. This complex is of interest for potential use in cancer therapy. DoE methodology has proved successful in the optimization of many complex analytical problems. However, it has rarely been used for MALDI-TOF-MS optimization up to now. The theoretical mathematical relationships which explain the influence of important experimental factors (laser energy, grid voltage and number of laser shots) on the selected responses (the signal-to-noise ratio, S/N, and the resolution, R, of the leading peak) are established. The optimal instrumental settings providing maximal S/N and R are identified and experimentally verified. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 172052 and Grant no. 172011]

  6. Optimal design and experimental analyses of a new micro-vibration control payload-platform

    Science.gov (United States)

    Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen

    2016-07-01

    This paper presents a new payload-platform for precision devices which is capable of isolating complex space micro-vibration in the low-frequency range below 5 Hz. The novel payload-platform, equipped with smart material actuators, is investigated and designed through an optimization strategy based on the minimum energy loss rate, with the aim of achieving high drive efficiency and reducing the effect of magnetic circuit nonlinearity. Then, the dynamic model of the driving element is established using the Lagrange method, and the performance of the designed payload-platform is further discussed through the combination of a controlled auto-regressive moving average (CARMA) model with a modified generalized predictive control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload-platform has an impressive potential for micro-vibration isolation.

  7. A case study on robust optimal experimental design for model calibration of ω-Transaminase

    DEFF Research Database (Denmark)

    Daele, Timothy, Van; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer

    the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair......” parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED......Proper calibration of models describing enzyme kinetics can be quite challenging. This is especially the case for more complex models like transaminase models (Shin and Kim, 1998). The latter fitted model parameters, but the confidence on the parameter estimation was not derived. Hence...
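
    The optimal-experimental-design step referred to above can be sketched with a locally D-optimal criterion on a much simpler rate law. The Michaelis-Menten stand-in, the nominal parameter values and the candidate concentrations below are assumptions for illustration only; they are not the ω-transaminase model of Shin and Kim.

```python
import itertools
import numpy as np

# Locally D-optimal selection of experiments for a simple Michaelis-Menten
# rate model v = Vmax*S/(Km + S); this stands in for the (more complex)
# transaminase kinetics and uses assumed nominal parameter values.
Vmax, Km = 1.0, 2.0                        # nominal ("guessed") parameters
candidates = np.linspace(0.1, 10.0, 25)    # candidate substrate concentrations

def sensitivities(S):
    """Partial derivatives of the rate w.r.t. Vmax and Km at the nominal values."""
    dv_dVmax = S / (Km + S)
    dv_dKm = -Vmax * S / (Km + S) ** 2
    return np.array([dv_dVmax, dv_dKm])

# Pick the 4-point subset that maximizes det(F), with F the sum of s s^T.
best_det, best_set = -np.inf, None
for subset in itertools.combinations(candidates, 4):
    F = sum(np.outer(sensitivities(S), sensitivities(S)) for S in subset)
    d = np.linalg.det(F)
    if d > best_det:
        best_det, best_set = d, subset

print("D-optimal substrate levels:", np.round(best_set, 2), "det(FIM):", best_det)
```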

  8. Optimization of single-walled carbon nanotube solubility by noncovalent PEGylation using experimental design methods

    Directory of Open Access Journals (Sweden)

    Hadidi N

    2011-04-01

    Naghmeh Hadidi (Department of Pharmaceutics), Farzad Kobarfard (Department of Pharmaceutical Chemistry), Nastaran Nafissi-Varcheh (Department of Pharmaceutical Biotechnology) and Reza Aboofazeli (Department of Pharmaceutics), School of Pharmacy, Shaheed Beheshti University of Medical Sciences, Tehran, Iran. Abstract: In this study, noncovalent functionalization of single-walled carbon nanotubes (SWCNTs) with phospholipid-polyethylene glycols (Pl-PEGs) was performed to improve the solubility of SWCNTs in aqueous solution. Two kinds of PEG derivatives, i.e., Pl-PEG 2000 and Pl-PEG 5000, were used for the PEGylation process. An experimental design technique (D-optimal design with second-order polynomial equations) was applied to investigate the effect of variables on PEGylation and the solubility of SWCNTs. The type of PEG derivative was selected as a qualitative parameter, and the PEG/SWCNT weight ratio and sonication time were applied as quantitative variables for the experimental design. Optimization was performed for two responses, aqueous solubility and loading efficiency. The grafting of PEG to the carbon nanostructure was determined by thermogravimetric analysis, Raman spectroscopy, and scanning electron microscopy. Aqueous solubility and loading efficiency were determined by ultraviolet-visible spectrophotometry and measurement of free amine groups, respectively. Results showed that Pl-PEGs were grafted onto SWCNTs. An aqueous solubility of 0.84 mg/mL and a loading efficiency of nearly 98% were achieved for the prepared Pl-PEG 5000-SWCNT conjugates. Evaluation of the functionalized SWCNTs showed that our noncovalent functionalization protocol could considerably increase aqueous solubility, which is an essential criterion in the design of a carbon nanotube-based drug delivery system and its biodistribution. Keywords: phospholipid-PEG, D-optimal design, loading efficiency, Raman spectroscopy, scanning electron microscopy, thermogravimetric analysis, carbon nanotubes

  9. DC microgrid power flow optimization by multi-layer supervision control. Design and experimental validation

    International Nuclear Information System (INIS)

    Sechilariu, Manuela; Wang, Bao Chao; Locment, Fabrice; Jouglet, Antoine

    2014-01-01

    Highlights: • DC microgrid (PV array, storage, power grid connection, DC load) with multi-layer supervision control. • Power balancing following power flow optimization while providing an interface for smart grid communication. • Optimization under constraints: storage capability, grid power limitations, grid time-of-use pricing. • Experimental validation of DC microgrid power flow optimization by multi-layer supervision control. • DC microgrid able to perform peak shaving, to avoid undesired injection, and to make full use of locally produced energy. - Abstract: Urban areas have great potential for photovoltaic (PV) generation; however, direct PV power injection has limitations for high-level PV penetration. It induces additional regulation in grid power balancing because it lacks the ability to respond to grid issues such as reducing grid peak consumption or avoiding undesired injections. The smart grid implementation, which is designed to meet these requirements, is facilitated by microgrid development. This paper presents a DC microgrid (PV array, storage, power grid connection, DC load) with multi-layer supervision control which handles instantaneous power balancing following the power flow optimization while providing an interface for smart grid communication. The optimization takes into account forecasts of PV power production and load power demand, while satisfying constraints such as storage capability, grid power limitations, grid time-of-use pricing and grid peak hours. The optimization, whose efficiency is related to the prediction accuracy, is carried out by mixed integer linear programming. Experimental results show that the proposed microgrid structure is able to control the power flow at near-optimum cost and ensures self-correcting capability. It can respond to the issues of performing peak shaving, avoiding undesired injection, and making full use of locally produced energy with respect to rigid element constraints.
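
    A heavily simplified, continuous sketch of this kind of day-ahead power-flow optimization is given below as a linear program (the paper uses mixed integer linear programming with further constraints such as peak-hour handling and injection limits); the horizon, tariff, PV/load forecasts and storage limits are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize grid energy cost subject to power balance and storage limits.
T = 4                                        # four one-hour periods
pv = np.array([0.0, 3.0, 4.0, 1.0])          # forecast PV power (kW)
load = np.array([2.0, 2.5, 3.0, 4.0])        # forecast load (kW)
price = np.array([0.10, 0.20, 0.20, 0.30])   # time-of-use tariff (per kWh)

# Decision variables x = [g_0..g_3, b_0..b_3, s_0..s_3]:
#   g_t: grid import (kW), b_t: battery discharge (kW, negative = charging),
#   s_t: state of charge at the end of period t (kWh).
n = 3 * T
c = np.concatenate([price, np.zeros(2 * T)])  # cost applies to grid import only

A_eq, b_eq = [], []
s0, s_max, b_max = 2.0, 5.0, 2.0
for t in range(T):
    # Power balance: pv + g + b = load
    row = np.zeros(n); row[t] = 1.0; row[T + t] = 1.0
    A_eq.append(row); b_eq.append(load[t] - pv[t])
    # Storage bookkeeping for 1-hour periods: s_t = s_{t-1} - b_t
    row = np.zeros(n); row[2 * T + t] = 1.0; row[T + t] = 1.0
    if t > 0:
        row[2 * T + t - 1] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    else:
        A_eq.append(row); b_eq.append(s0)

bounds = [(0, None)] * T + [(-b_max, b_max)] * T + [(0, s_max)] * T
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=bounds, method="highs")
print("grid import per hour:", np.round(res.x[:T], 2), "cost:", round(res.fun, 3))
```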

  10. Optimization of glibenclamide tablet composition through the combined use of differential scanning calorimetry and D-optimal mixture experimental design.

    Science.gov (United States)

    Mura, P; Furlanetto, S; Cirri, M; Maestrelli, F; Marras, A M; Pinzauti, S

    2005-02-07

    A systematic analysis of the influence of different proportions of excipients on the stability of a solid dosage form was carried out. In particular, a D-optimal mixture experimental design was applied for the evaluation of glibenclamide compatibility in tablet formulations consisting of four classic excipients (Natrosol as binding agent, stearic acid as lubricant, sorbitol as diluent and cross-linked polyvinylpyrrolidone as disintegrant). The goal was to find the mixture component proportions corresponding to the optimal drug melting parameters, i.e. its maximum stability, using differential scanning calorimetry (DSC) to quickly obtain information about possible interactions among the formulation components. The absolute difference between the melting peak temperature of the pure drug endotherm and that in each analysed mixture, and the absolute difference between the melting enthalpy of pure glibenclamide and that of its melting peak in the different analysed mixtures, were chosen as indices of the degree of drug-excipient interaction.

  11. Experimental and Numerical Design and Optimization of a Counter-Flow Heat Exchanger

    Directory of Open Access Journals (Sweden)

    Bahrami Salman

    2018-01-01

    A new inexpensive counter-flow heat exchanger has been designed and optimized for a vapor-compression cooling system in this research. The main aim is to experimentally and numerically evaluate the effect of an internal heat exchanger (IHX) adaptation in an automotive air conditioning system. In this new design of IHX, the high-pressure liquid passes through the central channel and the low-pressure vapor flows in several parallel channels in the opposite direction. The experimental set-up has been built from original components of the air conditioning system of a medium sedan car, specially designed and assembled to analyze vehicle A/C equipment under real operating conditions. The results show that this compact IHX may achieve up to 10% of the evaporator capacity while imposing only a low pressure drop on the refrigeration cycle. They also confirm a considerable decrease in compressor power consumption (CPC), which is intensified at higher evaporator air flow. A significant improvement of the coefficient of performance (COP) is achieved with the IHX employment as well. The influence of operating conditions is also discussed in this paper. Finally, numerical analyses are briefly presented, which bring more details of the flow behavior and heat transfer phenomena and help to determine the optimal arrangement of channels.

  12. Optimization of phototrophic hydrogen production by Rhodopseudomonas palustris PBUM001 via statistical experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Zadariana [Department of Civil Engineering, Faculty of Engineering, University of Malaya (Malaysia); Faculty of Civil Engineering, Technology University of MARA (Malaysia); Mohamad Annuar, Mohamad Suffian; Vikineswary, S. [Institute of Biological Sciences, University of Malaya (Malaysia); Ibrahim, Shaliza [Department of Civil Engineering, Faculty of Engineering, University of Malaya (Malaysia)

    2009-09-15

    Phototrophic hydrogen production by an indigenous purple non-sulfur bacterium, Rhodopseudomonas palustris PBUM001, from palm oil mill effluent (POME) was optimized using response surface methodology (RSM). The process parameters studied include inoculum size (% v/v), POME concentration (% v/v), light intensity (klux), agitation (rpm) and pH. The experimental data on cumulative hydrogen production and COD reduction were fitted to a quadratic polynomial model using response surface regression analysis. The path to optimal process conditions was determined by analyzing the three-dimensional response surface and contour plots. Statistical analysis of the experimental data collected following a Box-Behnken design showed that 100% (v/v) POME concentration, 10% (v/v) inoculum size, light intensity of 4.0 klux, agitation rate of 250 rpm and pH of 6 were the best conditions. The maximum predicted cumulative hydrogen production and COD reduction obtained under these conditions were 1.05 ml H2/ml POME and 31.71%, respectively. Subsequent verification experiments at the optimal process values gave a maximum cumulative hydrogen yield of 0.66 ± 0.07 ml H2/ml POME and a COD reduction of 30.54 ± 9.85%. (author)
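
    For reference, a coded Box-Behnken design is straightforward to construct: each pair of factors takes its four (±1) combinations while the remaining factors sit at the centre level, plus replicated centre points. The sketch below builds the three-factor version (the study's design has five factors) purely to show the structure.

```python
import itertools
import numpy as np

def box_behnken(k, center_points=3):
    """Coded Box-Behnken design for k factors with replicated centre points."""
    rows = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product([-1, 1], repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            rows.append(row)
    rows += [[0] * k] * center_points
    return np.array(rows)

design = box_behnken(3)
print(design.shape)   # (15, 3): 12 edge-midpoint runs + 3 centre points
print(design)
```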

  13. Experimental design, modeling and optimization of polyplex formation between DNA oligonucleotides and branched polyethylenimine.

    Science.gov (United States)

    Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana

    2015-09-28

    The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing of the images captured from the gel electrophoresis assays. The central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of experimental data and the response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as concentrations of dsDNA and B-PEI as well as the initial pH of solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic-scale, a molecular dynamic simulation has been carried out. According to the computation results, B-PEI amine hydrogen atoms have interacted with oxygen atoms from dsDNA phosphate groups. These interactions have led to the formation of hydrogen bonds between macromolecules, stabilizing the polyplex structure.
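
    The response-surface step described here can be illustrated with a minimal sketch: fit a quadratic model of binding efficiency against two coded factors and maximize it with the Nelder-Mead simplex method, one of the two optimizers mentioned above. The data, the underlying surface and the factor labels are synthetic assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))                    # coded factor settings
true = lambda x: 90 - 10 * (x[..., 0] - 0.3) ** 2 - 15 * (x[..., 1] + 0.2) ** 2
y = true(X) + rng.normal(0, 1.0, size=20)               # noisy "binding efficiency"

# Quadratic model terms: 1, x1, x2, x1^2, x2^2, x1*x2
def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2], axis=-1)

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)  # least-squares fit

# Maximize the fitted surface (minimize its negative) from the design centre.
res = minimize(lambda x: -features(np.asarray(x)) @ beta,
               x0=[0.0, 0.0], method="Nelder-Mead")
print("fitted optimum (coded units):", np.round(res.x, 3),
      "predicted response:", round(-res.fun, 2))
```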

  14. Online optimal experimental re-design in robotic parallel fed-batch cultivation facilities.

    Science.gov (United States)

    Cruz Bournazou, M N; Barz, T; Nickel, D B; Lopez Cárdenas, D C; Glauche, F; Knepper, A; Neubauer, P

    2017-03-01

    We present an integrated framework for online optimal experimental re-design applied to parallel nonlinear dynamic processes that aims to precisely estimate the parameter set of macro kinetic growth models with minimal experimental effort. This provides a systematic solution for rapid validation of a specific model for new strains, mutants, or products. In the biosciences this is especially important, as model identification is a long and laborious process which continues to limit the use of mathematical modeling in this field. The strength of this approach is demonstrated by fitting a macro-kinetic differential equation model for Escherichia coli fed-batch processes after 6 h of cultivation. The system includes two fully automated liquid handling robots: one containing eight mini-bioreactors and another used for automated at-line analyses, which allows for the immediate use of the available data in the modeling environment. As a result, the experiment can be continually re-designed while the cultivations are running, using the information generated by periodic parameter estimations. The advantages of an online re-computation of the optimal experiment are proven by a 50-fold lower average coefficient of variation on the parameter estimates compared to the sequential method (4.83% instead of 235.86%). The success obtained in such a complex system is a further step towards more efficient computer-aided bioprocess development. Biotechnol. Bioeng. 2017;114: 610-619. © 2016 Wiley Periodicals, Inc.

  15. Fast Synthesis of Gibbsite Nanoplates and Process Optimization using Box-Behnken Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xin; Zhang, Xianwen; Graham, Trenton R.; Pearce, Carolyn I.; Mehdi, Beata L.; N' Diaye, Alpha T.; Kerisit, Sebastien N.; Browning, Nigel D.; Clark, Sue B.; Rosso, Kevin M.

    2017-10-26

    Developing the ability to synthesize compositionally and morphologically well-defined gibbsite particles at the nanoscale with high yield is an ongoing need that has not yet achieved the level of rational design. Here we report optimization of a clean inorganic synthesis route based on statistical experimental design examining the influence of Al(OH)3 gel precursor concentration, pH, and aging time at temperature. At 80 °C, the optimum synthesis conditions of gel concentration at 0.5 M, pH at 9.2, and time at 72 h maximized the reaction yield up to ~87%. The resulting gibbsite product is composed of highly uniform euhedral hexagonal nanoplates within a basal plane diameter range of 200-400 nm. The independent roles of key system variables in the growth mechanism are considered. On the basis of these optimized experimental conditions, the synthesis procedure, which is both cost-effective and environmentally friendly, has the potential for mass production scale-up of high quality gibbsite material for various fundamental research and industrial applications.

  16. Optimization and evaluation of clarithromycin floating tablets using experimental mixture design.

    Science.gov (United States)

    Uğurlu, Timucin; Karaçiçek, Uğur; Rayaman, Erkan

    2014-01-01

    The purpose of the study was to prepare and evaluate clarithromycin (CLA) floating tablets using an experimental mixture design for the treatment of Helicobacter pylori, enabled by prolonged gastric residence time and a controlled plasma level. Ten different formulations were generated based on different molecular weights of hypromellose (HPMC K100, K4M, K15M) by using a simplex lattice design (a sub-class of mixture design) with Minitab 16 software. Sodium bicarbonate and anhydrous citric acid were used as gas generating agents. Tablets were prepared by the wet granulation technique. All of the process variables were fixed. Results of cumulative drug release at the 8th hour (CDR 8th) were statistically analyzed to obtain the optimized formulation (OF). The optimized formulation, which gave a floating lag time lower than 15 s and a total floating time of more than 10 h, was analyzed and compared with the target for CDR 8th (80%). Good agreement was shown between predicted and actual values of CDR 8th, with a variation lower than 1%. The activity of clarithromycin contained in the optimized formula against H. pylori was quantified using a well diffusion agar assay. Diameters of inhibition zones vs. log10 clarithromycin concentrations were plotted in order to obtain a standard curve and the clarithromycin activity.
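
    The simplex lattice construction behind such mixture designs is simple to reproduce; the sketch below enumerates a {3, 2} lattice over three components (standing in for the three HPMC grades). The paper's ten runs presumably include additional augmented points, so this shows only the basic lattice.

```python
from itertools import product

def simplex_lattice(q, m):
    """All q-component mixtures with proportions in multiples of 1/m summing to 1."""
    pts = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            pts.append(tuple(c / m for c in combo))
    return pts

for point in simplex_lattice(3, 2):
    print(point)   # e.g. (1.0, 0.0, 0.0), (0.5, 0.5, 0.0), ...
```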

  17. Optimal Experimental Design of Borehole Locations for Bayesian Inference of Past Ice Sheet Surface Temperatures

    Science.gov (United States)

    Davis, A. D.; Huan, X.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    Borehole data are essential for calibrating ice sheet models. However, field expeditions for acquiring borehole data are often time-consuming, expensive, and dangerous. It is thus essential to plan the best sampling locations that maximize the value of data while minimizing costs and risks. We present an uncertainty quantification (UQ) workflow based on a rigorous probabilistic framework to achieve these objectives. First, we employ an optimal experimental design (OED) procedure to compute borehole locations that yield the highest expected information gain. We take into account practical considerations of location accessibility (e.g., proximity to research sites, terrain, and ice velocity may affect feasibility of drilling) and robustness (e.g., real-time constraints such as weather may force researchers to drill at sub-optimal locations near those originally planned), by incorporating a penalty reflecting accessibility as well as sensitivity to deviations from the optimal locations. Next, we extract vertical temperature profiles from these boreholes and formulate a Bayesian inverse problem to reconstruct past surface temperatures. Using a model of temperature advection/diffusion, the top boundary condition (corresponding to surface temperatures) is calibrated via efficient Markov chain Monte Carlo (MCMC). The overall procedure can then be iterated to choose new optimal borehole locations for the next expeditions. Through this work, we demonstrate powerful UQ methods for designing experiments, calibrating models, making predictions, and assessing sensitivity, all performed under an uncertain environment. We develop a theoretical framework as well as practical software within an intuitive workflow, and illustrate their usefulness for combining data and models for environmental and climate research.
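
    The expected-information-gain criterion used in this kind of OED can be estimated with a nested Monte Carlo sketch; the one-parameter toy forward model, prior and noise level below are invented stand-ins for the ice-sheet temperature model, intended only to show the structure of the estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1                                      # observation noise std
f = lambda d, theta: np.exp(-(d - theta) ** 2)   # toy forward model, NOT the ice model

def expected_information_gain(d, n_outer=200, n_inner=200):
    theta_outer = rng.normal(0.0, 1.0, n_outer)          # prior samples
    y = f(d, theta_outer) + rng.normal(0, sigma, n_outer)
    # log p(y | theta) for the outer samples (Gaussian constant cancels below)
    log_lik = -0.5 * ((y - f(d, theta_outer)) / sigma) ** 2
    # log p(y) estimated by averaging the likelihood over fresh prior samples
    theta_inner = rng.normal(0.0, 1.0, n_inner)
    lik_inner = np.exp(-0.5 * ((y[:, None] - f(d, theta_inner[None, :])) / sigma) ** 2)
    log_evidence = np.log(lik_inner.mean(axis=1))
    return np.mean(log_lik - log_evidence)

candidates = np.linspace(-2.0, 2.0, 9)           # candidate "borehole locations"
eigs = [expected_information_gain(d) for d in candidates]
best = candidates[int(np.argmax(eigs))]
print("EIG per candidate:", np.round(eigs, 3), "-> best location:", best)
```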

  18. Optimization of poorly compactable drug tablets manufactured by direct compression using the mixture experimental design.

    Science.gov (United States)

    Martinello, Tiago; Kaneko, Telma Mary; Velasco, Maria Valéria Robles; Taqueda, Maria Elena Santos; Consiglieri, Vladi O

    2006-09-28

    The poor flowability and bad compressibility characteristics of paracetamol are well known. As a result, the production of paracetamol tablets is almost exclusively by wet granulation, a disadvantageous method when compared to direct compression. The development of a new tablet formulation is still based on a large number of experiments and often relies merely on the experience of the analyst. The purpose of this study was to apply experimental design methodology (DOE) to the development and optimization of tablet formulations containing high amounts of paracetamol (more than 70%) and manufactured by direct compression. Nineteen formulations, screened by DOE methodology, were produced with different proportions of Microcel 102, Kollydon VA 64, Flowlac, Kollydon CL 30, PEG 4000, Aerosil, and magnesium stearate. Tablet properties, except friability, were in accordance with the USP 28th ed. requirements. These results were used to generate plots for optimization, mainly for friability. The physical-chemical data found for the optimized formulation were very close to those from the regression analysis, demonstrating that the mixture design is a valuable tool for the research and development of new formulations.

  19. Optimization of primaquine diphosphate tablet formulation for controlled drug release using the mixture experimental design.

    Science.gov (United States)

    Duque, Marcelo Dutra; Kreidel, Rogério Nepomuceno; Taqueda, Maria Elena Santos; Baby, André Rolim; Kaneko, Telma Mary; Velasco, Maria Valéria Robles; Consiglieri, Vladi Olga

    2013-01-01

    A tablet formulation based on a hydrophilic matrix with controlled drug release was developed, and the effect of polymer concentrations on the release of primaquine diphosphate was evaluated. To achieve this purpose, a 20-run, four-factor mixture design with multiple constraints on the proportions of the components was employed to obtain the tablet compositions. Drug release was determined by an in vitro dissolution study in phosphate buffer solution at pH 6.8. The fitted polynomial functions described the behavior of the mixture on simplex coordinate systems to study the effects of each factor (polymer) on tablet characteristics. Based on the response surface methodology, a tablet composition was optimized with the purpose of obtaining a primaquine diphosphate release closer to zero-order kinetics. This formulation released 85.22% of the drug over 8 h and its kinetics were evaluated with the Korsmeyer-Peppas model (Adj-R^2 = 0.99295), which confirmed that both diffusion and erosion were related to the mechanism of drug release. The data from the optimized formulation were very close to the predictions from the statistical analysis, demonstrating that mixture experimental design can be used to optimize primaquine diphosphate dissolution from hydroxypropylmethyl cellulose and polyethylene glycol matrix tablets.
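
    The Korsmeyer-Peppas fit mentioned here reduces to a linear regression on log-transformed release data; the sketch below shows that step with illustrative time points and release fractions, not the paper's dissolution data.

```python
import numpy as np

# Korsmeyer-Peppas model Mt/Minf = k * t^n fitted by linear regression on the
# log-transformed equation (conventionally applied to the first ~60% of release).
t = np.array([0.5, 1, 2, 3, 4, 6, 8])            # hours (illustrative)
release = np.array([0.12, 0.20, 0.33, 0.44, 0.52, 0.68, 0.85])  # fraction released

mask = release <= 0.60                            # standard validity range
log_t, log_r = np.log(t[mask]), np.log(release[mask])
n, log_k = np.polyfit(log_t, log_r, 1)            # slope = n, intercept = ln k

print(f"release exponent n = {n:.3f}, k = {np.exp(log_k):.3f}")
# For cylinders, n near 0.45 suggests Fickian diffusion and n near 0.89
# Case-II (erosion) transport; intermediate values indicate anomalous transport.
```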

  20. Multi-objective optimization design and experimental investigation of centrifugal fan performance

    Science.gov (United States)

    Zhang, Lei; Wang, Songling; Hu, Chenxing; Zhang, Qian

    2013-11-01

    Current studies of fan performance optimization mainly focus on two aspects: one is to improve the blade profile, and the other is to consider only the influence of a single impeller structural parameter on fan performance. However, there are few studies on the comprehensive effect of key parameters such as the blade number, the exit stagger angle of the blade and the impeller outlet width on fan performance. The G4-73 backward centrifugal fan widely used in power plants is selected as the research object. Based on orthogonal design and a BP neural network, a model for predicting the centrifugal fan performance parameters is established, and the maximum relative errors of the total pressure and efficiency are 0.974% and 0.333%, respectively. Multi-objective optimization of the total pressure and efficiency of the fan is conducted with a genetic algorithm, and the optimum combination of impeller structural parameters is proposed. The optimized blade number, exit stagger angle of the blade and impeller outlet width are 14, 43.9° and 21 cm, respectively. Experiments on centrifugal fan performance and noise are conducted before and after the installation of the new impeller. The experimental results show that with the new impeller the total pressure of the fan increases significantly over the total range of flow rate, and the fan efficiency is improved when the relative flow is above 75%; the high-efficiency area is also broadened. Additionally, at 65%-100% relative flow, the fan noise is reduced. Under the design operating condition, the total pressure and efficiency of the fan are improved by 6.91% and 0.5%, respectively. This research sheds light on the comprehensive effect of impeller structural parameters on fan performance, and a new impeller can be designed to satisfy engineering demands such as energy saving, noise reduction or solving air pressure insufficiency in power plants.

  1. Application of D-optimal experimental design method to optimize the formulation of O/W cosmetic emulsions.

    Science.gov (United States)

    Djuris, J; Vasiljevic, D; Jokic, S; Ibric, S

    2014-02-01

    This study investigates the application of D-optimal mixture experimental design to the optimization of O/W cosmetic emulsions. Cetearyl glucoside was used as a natural, biodegradable non-ionic emulsifier at a relatively low concentration (1%), and a mixture of co-emulsifiers (stearic acid, cetyl alcohol, stearyl alcohol and glyceryl stearate) was used to stabilize the formulations. To determine the optimal composition of the co-emulsifier mixture, a D-optimal mixture experimental design was used. Prepared emulsions were characterized by rheological measurements, a centrifugation test, and specific conductivity and pH value measurements. All prepared samples appeared as white and homogeneous creams, except for one homogeneous and viscous lotion co-stabilized by stearic acid alone. Centrifugation testing revealed some phase separation only in the case of the sample co-stabilized using glyceryl stearate alone. The obtained pH values indicated that all samples exhibited mildly acidic values acceptable for cosmetic preparations. The specific conductivity values are attributed to multiple-phase O/W emulsions with high percentages of fixed water. Results of the rheological measurements showed that the investigated samples exhibited non-Newtonian thixotropic behaviour. To determine the influence of each of the co-emulsifiers on emulsion properties, the obtained results were evaluated by means of statistical analysis (ANOVA test). On the basis of a comparison of statistical parameters for each of the studied responses, the reduced quadratic mixture model was selected over the linear model, implying that interactions between co-emulsifiers play a significant role in the overall influence of co-emulsifiers on emulsion properties. Glyceryl stearate was found to be the dominant co-emulsifier affecting emulsion properties. Interactions between glyceryl stearate and the other co-emulsifiers were also found to significantly influence emulsion properties. These findings are especially important

  2. Experimental Optimization In Polymer BLEND Composite Preparation Based On Mix Level of Taguchi Robust Design

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed; Jaafar Abdullah; Dahlan Mohd; Rozaidi Rasid; Megat Harun AlRashid Megat Ahmad; Mahathir Mohamad; Mohd Hamzah Harun

    2012-01-01

    An L18 orthogonal array in the mixed-level Taguchi robust design method was used to optimize the experimental conditions for the preparation of a polymer blend composite. Tensile strength and neutron absorption of the composite were the properties of interest. Filler size, filler loading, ball-mixing time and dispersion agent concentration were selected as the parameters or factors expected to affect the composite properties. As a result of the Taguchi analysis, filler loading was the most influential parameter on tensile strength and neutron absorption; the least influential was ball-mixing time. The optimal conditions were determined using the mixed-level Taguchi robust design method, and a polymer composite with a tensile strength of 6.33 MPa was successfully prepared. The composite was found to fully absorb a thermal neutron flux of 1.04 x 10^5 n/cm^2/s at a thickness of only 2 mm. In addition, the filler was also characterized by scanning electron microscopy (SEM) and elemental analysis (EDX). (Author)
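
    A typical Taguchi analysis step for a "larger-is-better" response such as tensile strength is the per-level signal-to-noise ratio; the sketch below computes it for one factor of a mixed-level array using hypothetical responses, not the reported data.

```python
import numpy as np

# Taguchi "larger-is-better" signal-to-noise ratio:
#   SN = -10 * log10(mean(1 / y^2)), computed per factor level.
levels = np.array([0, 0, 1, 1, 2, 2])                  # one factor, 3 levels, 2 reps each
tensile = np.array([5.1, 5.4, 6.0, 6.3, 5.7, 5.5])     # MPa (hypothetical)

def sn_larger_is_better(y):
    return -10.0 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

for lvl in np.unique(levels):
    print(f"level {lvl}: S/N = {sn_larger_is_better(tensile[levels == lvl]):.2f} dB")
```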

  3. ENGINEERING DESIGN OPTIMIZATION OF HEEL TESTING EQUIPMENT IN THE EXPERIMENTAL VALIDATION OF SAFE WALKING

    Directory of Open Access Journals (Sweden)

    Cristiano Fragassa

    2017-06-01

    Experimental test methods for evaluating the resistance of heels of ladies' shoes to impact loads are fully defined by International Organization for Standardization (ISO) procedures that specify all the conditions of the experiment. A first standard (ISO 19553) specifies the test method for determining the strength of heels under a single impact; the result offers an assessment of the liability to fail under sporadic heavy blows. A second standard (ISO 19556) details a method for testing the capability of heels of women's shoes to survive the repetition of small impacts produced by normal walking. These standards strictly define the features of two different testing devices (with specific materials, geometries, weights, etc.) and all the experimental procedures to be followed during tests. In contrast, this paper describes the technical solutions adopted to design a single experimental device able to perform impact testing of heels in both conditions. Combining the accuracy of mechanical movements with the speed of an electronic control system, new and flexible equipment for the complete characterization of heels with respect to single or fatigue impacts was developed. Moreover, a new level of performance in the experimental validation of heel resistance was introduced by the versatility of the user-defined software control programs, able to encode any complex time-dependent cycle of impact loads. Dynamic simulations made it possible to investigate the impacts on the heel under different testing conditions, optimizing the machine design. The complexity of the real stresses on shoes during ordinary walking and in other common situations (such as going up and down stairs) was considered for proper dimensioning.

  4. Optimization design study of an innovative divertor concept for future experimental tokamak-type fusion reactors

    International Nuclear Information System (INIS)

    Willem Janssens, Ir.; Crutzen, Y.; Farfaletti-Casali, F.; Matera, R.

    1991-01-01

    The design optimization study of an innovative divertor concept for future experimental tokamak-type fusion devices is both an answer to the current problems encountered in multilayer divertor proposals and an illustration of a rational modelling philosophy and optimization strategy for the development of a new divertor structure. Instead of using mechanical attachment or metallurgical bonding of the protective material to the heat sink, as in most current divertor concepts, the so-called brush divertor in this study uses an array of unidirectional fibers penetrating both the protective armor and the underlying composite heat sink. Although the approach is fully concentrated on the divertor performance, including both a description of its function from the theoretical point of view and an overview of the problems related to materials choice and evaluation, both the approach followed in the numerical modelling and the judgment of the results are thought to be valid also for other applications. The spin-off of the study therefore lies both in the technological progress towards a feasible divertor solution, which introduces no additional physical uncertainties, and in the general area of thermo-mechanical finite-element modelling on both macro- and microscale. The brush divertor itself embodies the use, and thus the modelling, of advanced materials such as tailor-made metal matrix composites and dispersion strengthened metals, and is shown to offer large potential advantages, demanding, however, experimental validation under working conditions. It is clearly indicated where the need originates for an integrated experimental program, which must make it possible to verify the basic modelling assumptions in order to arrive at the use of numerical computation as a powerful and realistic tool for structural testing and lifetime prediction.

  5. Optimizing indomethacin-loaded chitosan nanoparticle size, encapsulation, and release using Box-Behnken experimental design.

    Science.gov (United States)

    Abul Kalam, Mohd; Khan, Abdul Arif; Khan, Shahanavaj; Almalik, Abdulaziz; Alshamsan, Aws

    2016-06-01

    Indomethacin chitosan nanoparticles (NPs) were developed by ionotropic gelation and optimized with respect to chitosan concentration, tripolyphosphate (TPP) concentration and stirring time using a 3-factor, 3-level Box-Behnken experimental design. The optimal concentrations of chitosan (A) and TPP (B) were found to be 0.6 mg/mL and 0.4 mg/mL with a stirring time (C) of 120 min, under the applied constraints of minimizing particle size (R1) and maximizing encapsulation efficiency (R2) and drug release (R3). Based on the obtained 3D response surface plots, factors A, B and C were found to have a synergistic effect on R1, while factor A had a negative impact on R2 and R3. The interaction AB was negative on R1 and R2 but positive on R3. The interaction AC had a synergistic effect on R1 and R3, while the same combination had a negative effect on R2. The interaction BC was positive on all responses. NPs were found in the size range of 321-675 nm with zeta potentials of +25 to +32 mV after 6 months of storage. Encapsulation, drug release, and drug content were in the range of 56-79%, 48-73% and 98-99%, respectively. In vitro drug release data were fitted to different kinetic models and the pattern of drug release followed the Higuchi matrix type. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Experimental design and optimization of raloxifene hydrochloride loaded nanotransfersomes for transdermal application

    Directory of Open Access Journals (Sweden)

    Mahmood S

    2014-09-01

    Syed Mahmood, Muhammad Taher, Uttam Kumar Mandal, Department of Pharmaceutical Technology, Kulliyyah of Pharmacy, International Islamic University Malaysia (IIUM), Pahang Darul Makmur, Malaysia. Abstract: Raloxifene hydrochloride, a highly effective drug for the treatment of invasive breast cancer and osteoporosis in post-menopausal women, shows a poor oral bioavailability of 2%. The aim of this study was to develop, statistically optimize, and characterize raloxifene hydrochloride-loaded transfersomes for transdermal delivery, in order to overcome the poor bioavailability issue with the drug. A response surface methodology was applied for the optimization of the transfersomes, using a Box-Behnken experimental design. Phospholipon® 90G, sodium deoxycholate, and sonication time, each at three levels, were selected as independent variables, while entrapment efficiency, vesicle size, and transdermal flux were identified as dependent variables. The formulation was characterized by surface morphology and shape, particle size, and zeta potential. Ex vivo transdermal flux was determined using a Hanson diffusion cell assembly, with rat skin as the barrier medium. Transfersomes from the optimized formulation were found to have spherical, unilamellar structures, with a homogeneous distribution and low polydispersity index (0.08). They had a particle size of 134±9 nm, an entrapment efficiency of 91.00%±4.90%, and a transdermal flux of 6.5±1.1 µg/cm2/hour. Raloxifene hydrochloride-loaded transfersomes proved significantly superior in terms of the amount of drug permeated and deposited in the skin, with enhancement ratios of 6.25±1.50 and 9.25±2.40, respectively, when compared with drug-loaded conventional liposomes and an ethanolic phosphate buffer saline. A differential scanning calorimetry study revealed a greater change in skin structure, compared with a control sample, during the ex vivo drug diffusion study. Further, confocal laser

  7. Electrodialytic desalination of brackish water: determination of optimal experimental parameters using full factorial design

    Science.gov (United States)

    Gmar, Soumaya; Helali, Nawel; Boubakri, Ali; Sayadi, Ilhem Ben Salah; Tlili, Mohamed; Amor, Mohamed Ben

    2017-12-01

    The aim of this work is to study the desalination of brackish water by electrodialysis (ED). A two-level, three-factor (2^3) full factorial design methodology was used to investigate the influence of different physicochemical parameters on the demineralization rate (DR) and the specific power consumption (SPC). The statistical design determines the factors which have important effects on ED performance and examines all interactions between the considered parameters. Three significant factors were used: applied potential, salt concentration and flow rate. The experimental results and statistical analysis show that applied potential and salt concentration are the main effects for DR as well as for SPC. An interaction effect between applied potential and salt concentration was observed for SPC. A maximum value of 82.24% was obtained for DR under optimum conditions and the best value of SPC obtained was 5.64 Wh L-1. Empirical regression models were also obtained and used to predict the DR and SPC profiles with satisfactory results. The process was applied to the treatment of real brackish water using the optimal parameters.

  8. Experimental Validation of Topology Optimization for RF MEMS Capacitive Switch Design

    DEFF Research Database (Denmark)

    Philippine, Mandy Axelle; Zareie, Hosein; Sigmund, Ole

    2013-01-01

    In this paper, we present 30 distinct RF MEMS capacitive switch designs that are the product of topology optimizations that control key mechanical properties such as stiffness, response to intrinsic stress gradients, and temperature sensitivity. The designs were evaluated with high-accuracy simul...

  9. Optimization Of Freeze-Dried Starter For Yogurt By Full Factorial Experimental Design

    Directory of Open Access Journals (Sweden)

    Chen He

    2015-12-01

    With the rapid development of fermented milk products, it is important to enhance the performance of starter cultures. This paper investigated the influence of anti-freeze factors and freeze-drying protective agents on the viable count, freeze-drying survival rate and yield of Lactobacillus bulgaricus (LB) and Streptococcus thermophilus (ST), and also optimized the bacteria proportion of a freeze-dried starter culture for yogurt by full factorial experimental design. The results showed the following: the freeze-drying protective agents or anti-freeze factors enhanced the survival rate of LB and ST; the freeze-dried LB and ST powders containing both anti-freeze factors and freeze-drying protective agents had the higher viable counts and freeze-drying survival rates of 84.7% and 79.7%, respectively. In terms of fermentation performance, the best freeze-dried starter group for yogurt was the combination of LB3 and ST2.

  10. Optimization of Ultrasonic-Assisted Extraction of Cordycepin from Cordyceps militaris Using Orthogonal Experimental Design

    Directory of Open Access Journals (Sweden)

    Hsiu-Ju Wang

    2014-12-01

    This study reports on the optimization of the extraction conditions of cordycepin from Cordyceps militaris using ultrasonication. For this purpose, an orthogonal experimental design was used to investigate the effects of factors on the ultrasonic-assisted extraction (UAE). Four factors were studied: extraction time (min), ethanol concentration (%), extraction temperature (°C) and extraction frequency (kHz). The results showed that the highest cordycepin yield of 7.04 mg/g (86.98% ± 0.23%) was obtained with an extraction time of 60 min, an ethanol concentration of 50%, an extraction temperature of 65 °C and an extraction frequency of 56 kHz. It was found that the cordycepin extraction yield increased with the effect of ultrasonication during the extraction process. Therefore, UAE can be used as an alternative to conventional immersion extraction for the recovery of cordycepin from C. militaris, with the advantages of shorter extraction time and reduced solvent consumption.

  11. Time-oriented experimental design method to optimize hydrophilic matrix formulations with gelation kinetics and drug release profiles.

    Science.gov (United States)

    Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon

    2011-04-04

    A new experimental design methodology was developed by integrating the response surface methodology and the time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method to obtain maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Optimization of scaffold design for bone tissue engineering: A computational and experimental study.

    Science.gov (United States)

    Dias, Marta R; Guedes, José M; Flanagan, Colleen L; Hollister, Scott J; Fernandes, Paulo R

    2014-04-01

    In bone tissue engineering, the scaffold not only has to allow the diffusion of cells, nutrients and oxygen but must also provide adequate mechanical support. One way to ensure the scaffold has the right properties is to use computational tools to design such a scaffold, coupled with additive manufacturing to build the scaffolds to the resulting optimized design specifications. In this study a topology optimization algorithm is proposed as a technique to design scaffolds that meet specific requirements for mass transport and mechanical load bearing. Several micro-structures obtained computationally are presented. Designed scaffolds were then built using selective laser sintering and the actual features of the fabricated scaffolds were measured and compared to the designed values. It was possible to obtain scaffolds with an internal geometry that reasonably matched the computational design (within 14% of the porosity target, 40% for strut size and 55% for throat size in the building direction, and 15% for strut size and 17% for throat size perpendicular to the building direction). These results support the use of this kind of computational algorithm to design optimized scaffolds with specific target properties and confirm the value of these techniques for bone tissue engineering. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  13. Monitoring and optimizing the co-composting of dewatered sludge: a mixture experimental design approach.

    Science.gov (United States)

    Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos

    2011-09-01

    The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture content on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O2 uptake rates, CO2 evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste at a 1:1 ratio (w/w, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio-C/bio-N ratio started at around 10 for all runs, decreased during the middle of the process and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Application of mixture experimental design in the formulation and optimization of matrix tablets containing carbomer and hydroxy-propylmethylcellulose.

    Science.gov (United States)

    Petrovic, Aleksandra; Cvetkovic, Nebojsa; Ibric, Svetlana; Trajkovic, Svetlana; Djuric, Zorica; Popadic, Dragica; Popovic, Radmila

    2009-12-01

    Using a mixture experimental design, the effect of the combination of carbomer (Carbopol® 971P NF) and hydroxypropylmethylcellulose (Methocel® K100M or Methocel® K4M) on the release profile and on the mechanism of drug liberation from a matrix tablet was investigated. A numerical optimization procedure was also applied to establish and obtain a formulation with the desired drug release. The amount of TP released, the release rate and the mechanism varied with the carbomer ratio in the total matrix and the HPMC viscosity. Increasing carbomer fractions led to a decrease in drug release. Anomalous diffusion was found in all matrices containing carbomer, while Case-II transport was predominant for the tablet based on HPMC only. The predicted and obtained profiles for the optimized formulations showed similarity. These results indicate that the simplex lattice mixture experimental design and numerical optimization procedure can be applied during development to obtain a sustained release matrix formulation with the desired release profile.

  15. Design and experimental investigation of a decentralized GA-optimized neuro-fuzzy power system stabilizer

    Energy Technology Data Exchange (ETDEWEB)

    Talaat, Hossam E.A.; Abdennour, Adel; Al-Sulaiman, Abdulaziz A. [Electrical Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421 (Saudi Arabia)

    2010-09-15

    The aim of this research is the design and implementation of a decentralized power system stabilizer (PSS) capable of performing well over a wide range of variations in system parameters and/or loading conditions. The framework of the design is based on Fuzzy Logic Control (FLC). In particular, the neuro-fuzzy control rules are derived by training three classical PSSs, each tuned using a GA so as to perform optimally at one operating point. The effectiveness and robustness of the designed stabilizer, after implementing it on the laboratory model, are investigated. The results of the real-time implementation prove that the proposed PSS offers superior performance in comparison with the conventional stabilizer. (author)

  16. The Impact of Diagnostic Code Misclassification on Optimizing the Experimental Design of Genetic Association Studies

    Directory of Open Access Journals (Sweden)

    Steven J. Schrodi

    2017-01-01

    Diagnostic codes within electronic health record systems can vary widely in accuracy. It has been noted that the number of instances of a particular diagnostic code monotonically increases with the accuracy of disease phenotype classification. As a growing number of health system databases become linked with genomic data, it is critically important to understand the effect of this misclassification on the power of genetic association studies. Here, I investigate the impact of this diagnostic code misclassification on the power of genetic association studies with the aim of better informing experimental designs that use health informatics data. The trade-off between (i) reduced misclassification rates from utilizing additional instances of a diagnostic code per individual and (ii) the resulting smaller sample size is explored, and general rules are presented to improve experimental designs.
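
    The dilution effect described here can be sketched with an approximate power calculation for a case/control allele-frequency comparison, where misclassified cases pull the observed case frequency toward the control frequency; the sample sizes, allele frequencies and significance threshold below are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy.stats import norm

# Approximate power of a single-SNP two-proportion z-test when a fraction of
# "cases" are misclassified controls, which dilutes the observed effect.
def power(n_cases, n_controls, p_case, p_control, misclass, alpha=5e-8):
    # Misclassified cases shift the observed case allele frequency toward controls.
    p_case_obs = (1 - misclass) * p_case + misclass * p_control
    n1, n2 = 2 * n_cases, 2 * n_controls          # allele counts
    se = np.sqrt(p_case_obs * (1 - p_case_obs) / n1
                 + p_control * (1 - p_control) / n2)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_effect = abs(p_case_obs - p_control) / se
    return norm.cdf(z_effect - z_alpha)

for m in [0.0, 0.1, 0.3, 0.5]:
    print(f"misclassification {m:.0%}: power = {power(4000, 4000, 0.22, 0.18, m):.3f}")
```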

  17. Optimizing laboratory animal stress paradigms: The H-H* experimental design.

    Science.gov (United States)

    McCarty, Richard

    2017-01-01

    Major advances in behavioral neuroscience have been facilitated by the development of consistent and highly reproducible experimental paradigms that have been widely adopted. In contrast, many different experimental approaches have been employed to expose laboratory mice and rats to acute versus chronic intermittent stress. An argument is advanced in this review that more consistent approaches to the design of chronic intermittent stress experiments would provide greater reproducibility of results across laboratories and greater reliability relating to various neural, endocrine, immune, genetic, and behavioral adaptations. As an example, the H-H* experimental design incorporates control, homotypic (H), and heterotypic (H*) groups and allows for comparisons across groups, where each animal is exposed to the same stressor, but that stressor has vastly different biological and behavioral effects depending upon each animal's prior stress history. Implementation of the H-H* experimental paradigm makes possible a delineation of transcriptional changes and neural, endocrine, and immune pathways that are activated in precisely defined stressor contexts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Optimizing the taste-masked formulation of acetaminophen using sodium caseinate and lecithin by experimental design.

    Science.gov (United States)

    Hoang Thi, Thanh Huong; Lemdani, Mohamed; Flament, Marie-Pierre

    2013-09-10

    In a previous study of ours, the association of sodium caseinate and lecithin was demonstrated to be promising for masking the bitterness of acetaminophen via drug encapsulation. The encapsulating mechanisms were suggested to be based on the segregation of multicomponent droplets occurring during spray-drying. The spray-dried particles delayed the drug release within the mouth during the early period after administration and hence masked the bitterness. Indeed, taste-masking is achieved if, within a time frame of 1-2 min, the drug substance is either not released or the released amount is below the human threshold for identifying its bad taste. The aims of this work were (i) to evaluate the effect of various processing and formulation parameters on the taste-masking efficiency and (ii) to determine the optimal formulation for the optimal taste-masking effect. The four investigated input variables were inlet temperature (X1), spray flow (X2), sodium caseinate amount (X3) and lecithin amount (X4). The percentage of drug released during the first 2 min was considered as the response variable (Y). A 2^4 full factorial design was applied and allowed screening for the most influential variables, i.e. the sodium caseinate amount and the lecithin amount. Optimization of these two variables was therefore conducted by a simplex approach. The SEM and DSC results for the spray-dried powder prepared under optimal conditions showed that the drug seemed to be well encapsulated. The drug release during the first 2 min decreased significantly, 7-fold less than for the unmasked drug particles. Therefore, the optimal formulation providing the best taste-masking effect was successfully achieved. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Formulation and Optimization of Eudragit RS PO-Tenofovir Nanocarriers Using Box-Behnken Experimental Design

    Directory of Open Access Journals (Sweden)

    Kefilwe Matlhola

    2015-01-01

    The objective of the present study was to develop an optimized polymeric nanoparticle system for the antiretroviral drug tenofovir. A modified nanoprecipitation method was used to prepare Eudragit RS PO nanoparticles of the drug. The effects of the amount of polymer, surfactant concentration, and sonication time on particle size, particle distribution, encapsulation efficiency (EE), and zeta potential were assessed and optimized utilizing a three-factor, three-level Box-Behnken design (BBD) of experiments. Fifteen nanoparticle formulations were prepared as per the BBD and evaluated for particle size, polydispersity index (PDI), EE, and zeta potential. The results showed that the measured mean particle sizes were in the range of 233 to 499 nm, PDI ranged from 0.094 to 0.153, average zeta potential ranged from −19.9 to −45.8 mV, and EE ranged between 98 and 99%. The optimized formulation was characterized for in vitro drug release and structural characterization. The mean particle size of this formulation was 233 nm with a PDI of 0.0107. It had a high EE of 98% and an average zeta potential of −35 mV, an indication of particle stability. FTIR showed some noncovalent interactions between the drug and the polymer, and a sustained release was observed in vitro for up to 80 hours.

  20. Design, experimental investigation and multi-objective optimization of a small-scale radial compressor for heat pump applications

    Energy Technology Data Exchange (ETDEWEB)

    Schiffmann, J. [Fischer Engineering Solutions AG, Birkenweg 3, CH-3360 Herzogenbuchsee (Switzerland); Favrat, D. [Ecole Polytechnique Federale de Lausanne, EPFL STI IGM LENI, Station 9, CH-1015 Lausanne (Switzerland)

    2010-01-15

    The main driver for small-scale turbomachinery in domestic heat pumps is the potential for reaching higher efficiencies than the volumetric compressors currently used, and the potential for making the compressor oil-free, a considerable advantage in the design of advanced multi-stage heat pump cycles. An appropriate turbocompressor for driving domestic heat pumps with a high temperature lift requires the ability to operate over a wide range of pressure ratios and mass flows, confronting the designer with the necessity of a compromise between range and efficiency. The present publication shows a possible way to deal with that difficulty, by coupling an appropriate modeling tool to a multi-objective optimizer. The optimizer manages to fit the compressor design to the possible specifications field while maintaining high efficiency over a wide operational range. The 1D tool used for the compressor stage modeling has been validated by experimentally testing an initial impeller design. The excellent experimental results, the agreement with the model and the linking of the model to a multi-objective optimizer will make it possible to design radial compressor stages that cover the wide operational range of domestic heat pumps while keeping a high efficiency level. (author)
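
    The core of such a multi-objective optimizer is the non-dominated (Pareto) filter applied to candidate designs; a minimal sketch is shown below with two objectives to maximize (for example efficiency and operating-range width) scored by random numbers, purely to illustrate the dominance test.

```python
import numpy as np

rng = np.random.default_rng(42)
# Rows: candidate compressor designs; columns: two objectives to maximize
# (random placeholder scores, not model outputs).
objectives = rng.uniform(0.0, 1.0, size=(50, 2))

def pareto_mask(obj):
    """True for designs not dominated by any other design (maximization)."""
    mask = np.ones(len(obj), dtype=bool)
    for i, point in enumerate(obj):
        dominated = np.any(np.all(obj >= point, axis=1) & np.any(obj > point, axis=1))
        mask[i] = not dominated
    return mask

front = objectives[pareto_mask(objectives)]
print(f"{len(front)} non-dominated designs out of {len(objectives)}")
```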

  1. DESIGN OPTIMIZATION AND EXPERIMENTAL STUDY ON THE BLOWER FOR FLUFFS COLLECTION SYSTEM

    Directory of Open Access Journals (Sweden)

    C. N. JAYAPRAGASAN

    2017-05-01

    Centrifugal fans play an important role in the fluffs collection system for industrial cleaners. It has therefore become necessary to study the parameters which influence the performance of the blower. The parameters chosen for optimization are the fan outer diameter, the number of blades and the fan blade angle. Taguchi's orthogonal array method helps to determine the optimum number of cases, and the modelling has been carried out using SOLIDWORKS. ICEM CFD is used for meshing the blowers, which are then analysed using FLUENT. In this study, analytical results are compared with experimental values. ANOVA is used to find the percentage contribution of each parameter to the output. Using Minitab software the optimum combination is identified. The result shows that the optimum combination is a 190 mm outer diameter, an 80° blade angle and 8 blades.
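
    The ANOVA percentage-contribution calculation referred to here is the factor sum of squares divided by the total sum of squares; the sketch below computes it over a coded L9-style array with hypothetical pressure responses, not the CFD results of the study.

```python
import numpy as np

design = np.array([  # three factors coded 0/1/2 (illustrative assignment)
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
y = np.array([310., 335., 355., 330., 360., 325., 345., 320., 370.])  # hypothetical pressure (Pa)

grand_mean = y.mean()
ss_total = np.sum((y - grand_mean) ** 2)
for j, name in enumerate(["outer diameter", "blade angle", "blade count"]):
    # Factor sum of squares: n_level * (level mean - grand mean)^2, summed over levels.
    ss_factor = sum(
        (y[design[:, j] == lvl].mean() - grand_mean) ** 2 * np.sum(design[:, j] == lvl)
        for lvl in range(3)
    )
    print(f"{name:>14s}: {100 * ss_factor / ss_total:.1f}% contribution")
```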

  2. Optimal design of disc-type magneto-rheological brake for mid-sized motorcycle: experimental evaluation

    Science.gov (United States)

    Sohn, Jung Woo; Jeon, Juncheol; Nguyen, Quoc Hung; Choi, Seung-Bok

    2015-08-01

    In this paper, a disc-type magneto-rheological (MR) brake is designed for a mid-sized motorcycle and its performance is experimentally evaluated. The proposed MR brake consists of an outer housing, a rotating disc immersed in MR fluid, and a copper wire coiled around a bobbin to generate a magnetic field. The structural configuration of the MR brake is first presented with consideration of the installation space for the conventional hydraulic brake of a mid-sized motorcycle. The design parameters of the proposed MR brake are optimized to satisfy design requirements such as the braking torque, total mass of the MR brake, and cruising temperature caused by the magnetic-field friction of the MR fluid. In the optimization procedure, the braking torque is calculated based on the Herschel-Bulkley rheological model, which predicts MR fluid behavior well at high shear rate. An optimization tool based on finite element analysis is used to obtain the optimized dimensions of the MR brake. After manufacturing the MR brake, mechanical performances regarding the response time, braking torque and cruising temperature are experimentally evaluated.
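
    For orientation, the Herschel-Bulkley torque calculation mentioned above amounts to integrating the shear stress tau(r) = tau_y + K*(omega*r/gap)**n over the disc faces. A hedged sketch with purely illustrative numbers (none of the values or the exact torque model are the paper's):

        import numpy as np

        def mr_brake_torque(tau_y, K, n, omega, gap, r_i, r_o, faces=2):
            """Disc MR brake torque from the Herschel-Bulkley model:
            T = faces * 2*pi * integral_{r_i}^{r_o} r**2 * (tau_y + K*(omega*r/gap)**n) dr."""
            r = np.linspace(r_i, r_o, 500)
            tau = tau_y + K * (omega * r / gap) ** n
            return faces * 2.0 * np.pi * np.trapz(r ** 2 * tau, r)

        # Illustrative inputs: 40 kPa field-on yield stress, 1 mm gap, 150 rad/s, 50-100 mm radii
        print(mr_brake_torque(tau_y=4e4, K=1.0, n=0.8, omega=150.0,
                              gap=1e-3, r_i=0.05, r_o=0.10))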

  3. Experimental validation of a magnetorheological energy absorber design optimized for shock and impact loads

    International Nuclear Information System (INIS)

    Singh, Harinder J; Hu, Wei; Wereley, Norman M; Glass, William

    2014-01-01

    A linear stroke adaptive magnetorheological energy absorber (MREA) was designed, fabricated and tested for intense impact conditions with piston velocities up to 8 m s⁻¹. The performance of the MREA was characterized using dynamic range, which is defined as the ratio of maximum on-state MREA force to the off-state MREA force. Design optimization techniques were employed in order to maximize the dynamic range at high impact velocities such that MREA maintained good control authority. Geometrical parameters of the MREA were optimized by evaluating MREA performance on the basis of a Bingham-plastic analysis incorporating minor losses (BPM analysis). Computational fluid dynamics and magnetic FE analysis were conducted to verify the performance of passive and controllable MREA force, respectively. Subsequently, high-speed drop testing (0–4.5 m s⁻¹ at 0 A) was conducted for quantitative comparison with the numerical simulations. Refinements to the nonlinear BPM analysis were carried out to improve prediction of MREA performance. (paper)

  4. Experimental validation of a magnetorheological energy absorber design optimized for shock and impact loads

    Science.gov (United States)

    Singh, Harinder J.; Hu, Wei; Wereley, Norman M.; Glass, William

    2014-12-01

    A linear stroke adaptive magnetorheological energy absorber (MREA) was designed, fabricated and tested for intense impact conditions with piston velocities up to 8 m s⁻¹. The performance of the MREA was characterized using dynamic range, which is defined as the ratio of maximum on-state MREA force to the off-state MREA force. Design optimization techniques were employed in order to maximize the dynamic range at high impact velocities such that MREA maintained good control authority. Geometrical parameters of the MREA were optimized by evaluating MREA performance on the basis of a Bingham-plastic analysis incorporating minor losses (BPM analysis). Computational fluid dynamics and magnetic FE analysis were conducted to verify the performance of passive and controllable MREA force, respectively. Subsequently, high-speed drop testing (0-4.5 m s⁻¹ at 0 A) was conducted for quantitative comparison with the numerical simulations. Refinements to the nonlinear BPM analysis were carried out to improve prediction of MREA performance.
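
    The dynamic range defined above can be illustrated with a simplified Bingham-plastic, parallel-plate estimate of the MREA force (viscous plus minor-loss terms off-state, plus a yield-stress term on-state). This is only a sketch of the general idea with invented geometry and fluid properties, not the paper's BPM model:

        def mrea_dynamic_range(eta, tau_y, v_p, A_p, L, w, h, K_minor=2.5, rho=3500.0, c=2.07):
            """Bingham-plastic, parallel-plate estimate of MREA force.
            Off-state: viscous + minor-loss force; on-state adds the yield-stress contribution."""
            Q = A_p * v_p                                  # flow displaced by the piston
            F_visc = 12.0 * eta * L * A_p * Q / (w * h ** 3)
            F_minor = 0.5 * rho * K_minor * (Q / (w * h)) ** 2 * A_p
            F_tau = c * tau_y * L * A_p / h                # controllable (field-on) force
            F_off = F_visc + F_minor
            return (F_off + F_tau) / F_off

        # Invented values: 0.3 Pa.s viscosity, 40 kPa yield stress, 4.5 m/s piston speed
        print(mrea_dynamic_range(eta=0.3, tau_y=4e4, v_p=4.5,
                                 A_p=1.2e-3, L=0.02, w=0.15, h=1e-3))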

  5. Optimization of the representativeness and transposition approach, for the neutronic design of experimental programs in critical mock-up

    International Nuclear Information System (INIS)

    Dos-Santos, N.

    2013-01-01

    The work performed during this thesis focused on the propagation of uncertainties (nuclear data, technological uncertainties, calculation biases, ...) to integral parameters, and on the development of a novel approach enabling this uncertainty to be reduced a priori, directly from the design phase of a new experimental program. This approach is based on a multi-parameter, multi-criteria extension of representativeness and transposition theories. The first part of this PhD work covers an optimization study of sensitivity and uncertainty calculation schemes at different modeling scales (cell, assembly and whole core) for LWRs and FBRs. A degraded scheme, based on standard and generalized perturbation theories, has been validated for the calculation of uncertainty propagation to various integral quantities of interest. It demonstrated the good a posteriori representativeness of the EPICURE experiment for the validation of mixed UOX-MOX loadings, as well as the importance of some nuclear data in the power tilt phenomenon in large LWR cores. The second part of this work was devoted to the development of methods and tools for the optimized design of experimental programs in ZPRs. Those methods are based on multi-parameter representativeness, using several quantities of interest simultaneously. Finally, an original study has been conducted on the rigorous estimation of correlations between experimental programs in the transposition process. The coupling of experimental correlations with the multi-parameter representativeness approach enables the efficient design of new programs able to answer additional qualification requirements on calculation tools. (author) [fr]
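
    The representativeness factor at the heart of this approach is the covariance-weighted correlation between the sensitivity vectors of the experiment and of the application case. A toy sketch (the sensitivities and nuclear-data covariance below are invented):

        import numpy as np

        def representativeness(S_exp, S_app, cov_nd):
            """r = (S_E^T D S_A) / sqrt((S_E^T D S_E)(S_A^T D S_A)), with D the nuclear-data
            covariance matrix and S_* the sensitivity vectors of experiment and application."""
            num = S_exp @ cov_nd @ S_app
            den = np.sqrt((S_exp @ cov_nd @ S_exp) * (S_app @ cov_nd @ S_app))
            return num / den

        # Toy 3-parameter example
        D = np.diag([0.02, 0.05, 0.01]) ** 2
        S_E = np.array([0.8, 0.3, 0.1])
        S_A = np.array([0.7, 0.4, 0.2])
        r = representativeness(S_E, S_A, D)
        print(r, "residual variance fraction after transposition:", 1 - r ** 2)

    In the idealized case of a perfectly measured single experiment, the prior variance on the application quantity is multiplied by (1 - r**2) after transposition, which is why maximizing r during the design phase is attractive.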

  6. A novel experimental design method to optimize hydrophilic matrix formulations with drug release profiles and mechanical properties.

    Science.gov (United States)

    Choi, Du Hyung; Lim, Jun Yeul; Shin, Sangmun; Choi, Won Jun; Jeong, Seong Hoon; Lee, Sangkil

    2014-10-01

    To investigate the effects of hydrophilic polymers on a matrix system, an experimental design method was developed that integrates response surface methodology and time series modeling. Moreover, the relationships among the polymers in the matrix system were studied through evaluation of physical properties including water uptake, mass loss, diffusion, and gelling index. A mixture simplex lattice design was proposed considering eight input control factors: polyethylene glycol 6000 (x1), polyethylene oxide (PEO) N-10 (x2), PEO 301 (x3), PEO coagulant (x4), PEO 303 (x5), hydroxypropyl methylcellulose (HPMC) 100SR (x6), HPMC 4000SR (x7), and HPMC 10⁵ SR (x8). With the modeling, optimal formulations were obtained for four types of targets. Based on the drug release profiles, four factors (x1, x2, x3, and x8) were significant in the optimal formulations, while the other four (x4, x5, x6, and x7) were not. Moreover, the optimization results were analyzed in terms of estimated values, target values, absolute biases, and relative biases based on observed times for the drug release rates with the four different targets. The results showed that the optimal solutions and target values had consistent patterns with small biases. On the basis of the physical properties of the optimal solutions, the type and ratio of the hydrophilic polymer and the relationships between polymers significantly influenced the physical properties of the system and drug release. This experimental design method is very useful in formulating a matrix system with optimal drug release, and it can clearly identify the relationships between excipients and their effects on the system through extensive and intensive evaluation. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
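
    For reference, a {q, m} simplex-lattice candidate set (the basis of such mixture designs) can be enumerated directly; the eight-component design above is built on the same principle, typically with added constraints:

        from itertools import combinations_with_replacement

        def simplex_lattice(q, m):
            """All {q, m} simplex-lattice points: proportions that are multiples of 1/m and sum to 1."""
            points = set()
            for combo in combinations_with_replacement(range(q), m):
                x = [0] * q
                for idx in combo:
                    x[idx] += 1
                points.add(tuple(v / m for v in x))
            return sorted(points)

        # A small {3, 2} lattice: the pure components plus the 50:50 binary blends
        for point in simplex_lattice(3, 2):
            print(point)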

  7. PVA-PEG physically cross-linked hydrogel film as a wound dressing: experimental design and optimization.

    Science.gov (United States)

    Ahmed, Afnan Sh; Mandal, Uttam Kumar; Taher, Muhammad; Susanti, Deny; Jaffri, Juliana Md

    2017-04-05

    The development of hydrogel films as wound dressings is of great interest owing to their biological tissue-like nature. Polyvinyl alcohol/polyethylene glycol (PVA/PEG) hydrogels loaded with asiaticoside, a standardized rich fraction of Centella asiatica, were successfully developed using the freeze-thaw method. Response surface methodology with a Box-Behnken experimental design was employed to optimize the hydrogels. The hydrogels were characterized and optimized in terms of gel fraction, swelling behavior, water vapor transmission rate and mechanical strength. The formulation with 8% PVA, 5% PEG 400 and five consecutive freeze-thaw cycles was selected as the optimized formulation and was further characterized by drug release, rheological study, morphology, cytotoxicity and microbial studies. The optimized formulation showed more than 90% drug release at 12 hours. The rheological study showed that the formulation has viscoelastic behavior and remains stable upon storage. Cell culture studies confirmed the biocompatible nature of the optimized hydrogel formulation, and in the microbial limit tests it showed no microbial growth. The developed optimized PVA/PEG hydrogel prepared by the freeze-thaw method was swellable, elastic and safe, and it can be considered a promising new wound dressing formulation.

  8. MAP: an iterative experimental design methodology for the optimization of catalytic search space structure modeling.

    Science.gov (United States)

    Baumes, Laurent A

    2006-01-01

    One of the main problems in high-throughput research for materials remains the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should create opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies; mostly, traditional designs, homogeneous coverings, or simple random samplings are used. Typical catalytic output distributions exhibit unbalanced datasets on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. A new iterative algorithm is suggested here for characterizing the structure of the search space, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which need more experiments to be modeled well. Benchmarking new algorithms is compulsory given the lack of prior evidence about their efficiency. The method is detailed and thoroughly tested on mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of the sampling on future machine learning performance is also quantified. The minimum sample size required for the algorithm to be statistically distinguishable from simple random sampling is investigated.

  9. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
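
    A deliberately stripped-down sketch of the sequential idea behind ASAP: accumulate the evidence for two competing response models at the stimulus level where their predictions disagree most. A full ASAP implementation would recompute a Bayesian utility from the current posterior on every trial; the models, stimulus grid and simulated subject below are all invented:

        import numpy as np

        rng = np.random.default_rng(0)

        # Two candidate models: probability of a "correct" response vs. stimulus level x
        models = {
            "steep":   lambda x: 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5))),
            "shallow": lambda x: 1.0 / (1.0 + np.exp(-2.0 * (x - 0.5))),
        }
        log_evidence = {name: 0.0 for name in models}    # equal prior model probabilities
        candidates = np.linspace(0.0, 1.0, 21)           # admissible stimulus levels
        truth = models["steep"]                          # ground truth for the simulation

        # Probe the stimulus where the two models disagree most, then update the evidence
        disagreement = np.abs(models["steep"](candidates) - models["shallow"](candidates))
        x_star = candidates[np.argmax(disagreement)]
        for trial in range(40):
            y = rng.random() < truth(x_star)             # simulated binary response
            for name, f in models.items():
                p = f(x_star)
                log_evidence[name] += np.log(p if y else 1.0 - p)

        w = np.exp(np.array(list(log_evidence.values())))
        print(dict(zip(models, (w / w.sum()).round(3))))  # posterior should favor "steep"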

  10. Experimental design approach to the process parameter optimization for laser welding of martensitic stainless steels in a constrained overlap configuration

    Science.gov (United States)

    Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.

    2011-02-01

    This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models was developed relating the welding parameters to each of the weld characteristics; these were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were minimization of the weld width and maximization of the weld penetration depth, resistance length and shearing force. A laser power of 855-930 W and a welding speed of 4.50-4.65 m/min, with a fiber diameter of 300 μm, were identified as the optimal set of process parameters. However, the laser power can be reduced to 800-840 W and the welding speed increased to 4.75-5.37 m/min to obtain stronger and better welds.

  11. An experimental analysis of design choices of multi-objective ant colony optimization algorithms

    OpenAIRE

    Lopez-Ibanez, Manuel; Stutzle, Thomas

    2012-01-01

    There have been several proposals on how to apply the ant colony optimization (ACO) metaheuristic to multi-objective combinatorial optimization problems (MOCOPs). This paper proposes a new formulation of these multi-objective ant colony optimization (MOACO) algorithms. This formulation is based on adding specific algorithm components for tackling multiple objectives to the basic ACO metaheuristic. Examples of these components are how to represent multiple objectives using pheromone and heuris...

  12. Development and Experimental Validation of a TRNSYS Dynamic Tool for Design and Energy Optimization of Ground Source Heat Pump Systems

    Directory of Open Access Journals (Sweden)

    Félix Ruiz-Calvo

    2017-09-01

    Full Text Available Ground source heat pump (GSHP) systems are an efficient technology for renewable heating and cooling in buildings. To optimize not only the design but also the operation of such a system, a complete dynamic model becomes a highly useful tool, since it allows any design modification or optimization strategy to be tested without actually implementing it at the experimental facility. These systems usually operate under strongly dynamic conditions, so the model should predict not only the steady-state behavior of the system but also its short-term response. This paper presents a complete GSHP system model based on an experimental facility located at Universitat Politècnica de València. The installation was constructed in the framework of a European collaborative project entitled GeoCool. The model, developed in TRNSYS, has been validated against experimental data, and it accurately predicts both the short- and long-term behavior of the system.

  13. Improved microbial conversion of de-oiled Jatropha waste into biohydrogen via inoculum pretreatment: process optimization by experimental design approach

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Kumar

    2015-03-01

    Full Text Available In this study, various pretreatment methods for the sewage sludge inoculum and the statistical optimization of de-oiled jatropha waste fermentation are reported. A peak hydrogen production rate (HPR) of 0.36 L H2/L-d and a hydrogen yield (HY) of 20 mL H2/g volatile solid (VS) were obtained when heat-shock pretreatment (95 °C, 30 min) was employed. Afterwards, an experimental design was applied to find the optimal conditions for H2 production using the heat-pretreated seed culture. The optimal substrate concentration, pH and temperature determined by response surface methodology were 205 g/L, 6.53 and 55.1 °C, respectively. Under these conditions, a highest HPR of 1.36 L H2/L-d was predicted. Verification tests proved the reliability of the statistical approach. As a result of the heat pretreatment and fermentation optimization, a significant (~4-fold) increase in HPR was achieved. PCR-DGGE results revealed that Clostridium sp. were predominant under the optimal conditions.

  14. Design, implementation, and experimental validation of optimal power split control for hybrid electric trucks

    NARCIS (Netherlands)

    Keulen, T. van; Mullem, D. van; Jager, B. van; Kessels, J.T.B.A.; Steinbuch, M.

    2012-01-01

    Hybrid electric vehicles require an algorithm that controls the power split between the internal combustion engine and electric machine(s), and the opening and closing of the clutch. Optimal control theory is applied to derive a methodology for a real-time optimal-control-based power split

  15. Molecular identification of potential denitrifying bacteria and use of D-optimal mixture experimental design for the optimization of denitrification process.

    Science.gov (United States)

    Ben Taheur, Fadia; Fdhila, Kais; Elabed, Hamouda; Bouguerra, Amel; Kouidhi, Bochra; Bakhrouf, Amina; Chaieb, Kamel

    2016-04-01

    Three bacterial strains (TE1, TD3 and FB2) were isolated from date palm (degla), pistachio and barley. The presence of the nitrate reductase (narG) and nitrite reductase (nirS and nirK) genes in the selected strains was detected by PCR. Molecular identification based on 16S rDNA sequencing was applied to identify the positive strains. In addition, a D-optimal mixture experimental design was used to determine the optimal formulation of probiotic bacteria for the denitrification process. The strains harboring denitrification genes were identified as TE1, Agrococcus sp. LN828197; TD3, Cronobacter sakazakii LN828198; and FB2, Pediococcus pentosaceus LN828199. PCR results revealed that all strains carried the nirS gene, whereas only C. sakazakii LN828198 and Agrococcus sp. LN828197 harbored the nirK and narG genes, respectively. Moreover, the studied bacteria were able to form biofilms on abiotic surfaces to different degrees. Process optimization showed that the greatest nitrate reduction was 100%, with 14.98% COD consumption and 5.57 mg/L nitrite accumulation. The response values were optimized, and the best combination was 78.79% C. sakazakii LN828198, 21.21% P. pentosaceus LN828199 and 0% (absence of) Agrococcus sp. LN828197 (curve values). Copyright © 2016 Elsevier Ltd. All rights reserved.
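
    The D-optimal selection step behind such a mixture design can be illustrated with a crude random search over candidate blends, maximizing |F'F| for a Scheffé quadratic mixture model (dedicated software uses coordinate-exchange algorithms; the candidate set and run size here are arbitrary):

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(2)

        def scheffe_quadratic(X):
            """Scheffe quadratic mixture model terms: the proportions and their pairwise products."""
            q = X.shape[1]
            cols = [X[:, i] for i in range(q)]
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(q), 2)]
            return np.column_stack(cols)

        # Candidate blends of 3 components (proportions summing to 1)
        candidates = rng.dirichlet(np.ones(3), size=200)
        n_runs, best_design, best_det = 8, None, -np.inf
        for _ in range(5000):                      # random search in place of coordinate exchange
            idx = rng.choice(len(candidates), n_runs, replace=False)
            F = scheffe_quadratic(candidates[idx])
            d = np.linalg.det(F.T @ F)             # D-criterion: maximize the determinant
            if d > best_det:
                best_det, best_design = d, candidates[idx]

        print(best_det)
        print(best_design.round(3))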

  16. Optimal Bayesian experimental design for priors of compact support with application to shock-tube experiments for combustion kinetics

    KAUST Repository

    Bisetti, Fabrizio

    2016-01-12

    The analysis of reactive systems in combustion science and technology relies on detailed models comprising many chemical reactions that describe the conversion of fuel and oxidizer into products and the formation of pollutants. Shock-tube experiments are a convenient setting for measuring the rate parameters of individual reactions. The temperature, pressure, and concentration of reactants are chosen to maximize the sensitivity of the measured quantities to the rate parameter of the target reaction. In this study, we optimize the experimental setup computationally by optimal experimental design (OED) in a Bayesian framework. We approximate the posterior probability density functions (pdf) using truncated Gaussian distributions in order to account for the bounded domain of the uniform prior pdf of the parameters. The underlying Gaussian distribution is obtained in the spirit of the Laplace method, more precisely, the mode is chosen as the maximum a posteriori (MAP) estimate, and the covariance is chosen as the negative inverse of the Hessian of the misfit function at the MAP estimate. The model related entities are obtained from a polynomial surrogate. The optimality, quantified by the information gain measures, can be estimated efficiently by a rejection sampling algorithm against the underlying Gaussian probability distribution, rather than against the true posterior. This approach offers a significant error reduction when the magnitude of the invariants of the posterior covariance are comparable to the size of the bounded domain of the prior. We demonstrate the accuracy and superior computational efficiency of our method for shock-tube experiments aiming to measure the model parameters of a key reaction which is part of the complex kinetic network describing the hydrocarbon oxidation. In the experiments, the initial temperature and fuel concentration are optimized with respect to the expected information gain in the estimation of the parameters of the target
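
    The Laplace idea can be sketched for the simpler case of an unbounded Gaussian prior (i.e. without the truncation refinement developed in this work): the expected information gain is estimated by averaging, over prior samples, the log-determinant change between the prior covariance and a Gauss-Newton posterior covariance. The toy model and numbers below are invented:

        import numpy as np

        rng = np.random.default_rng(1)

        def laplace_eig(design, model_jacobian, prior_cov, noise_var, n_prior_samples=200):
            """Laplace-style estimate of expected information gain for a design:
            average of 0.5*(logdet(prior_cov) - logdet(post_cov)) over prior samples,
            with post_cov^-1 = J^T J / noise_var + prior_cov^-1 (Gauss-Newton Hessian)."""
            d = prior_cov.shape[0]
            L = np.linalg.cholesky(prior_cov)
            prior_prec = np.linalg.inv(prior_cov)
            _, logdet_prior = np.linalg.slogdet(prior_cov)
            gains = []
            for _ in range(n_prior_samples):
                theta = L @ rng.standard_normal(d)          # zero-mean Gaussian prior sample
                J = model_jacobian(design, theta)           # (n_obs, d) sensitivity matrix
                post_prec = J.T @ J / noise_var + prior_prec
                _, logdet_post_prec = np.linalg.slogdet(post_prec)
                gains.append(0.5 * (logdet_prior + logdet_post_prec))
            return float(np.mean(gains))

        # Toy observable y(t) = exp(theta_0*t) + theta_1*t measured at the design times t
        def jac(times, theta):
            t = np.asarray(times, dtype=float)
            return np.column_stack([t * np.exp(theta[0] * t), t])

        prior = np.diag([0.1, 0.1])
        for times in ([0.1, 0.2, 0.3], [1.0, 2.0, 3.0]):
            print(times, laplace_eig(times, jac, prior, noise_var=0.01))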

  17. A Laplace method for under-determined Bayesian optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-12-17

    In Long et al. (2013), a new method based on the Laplace approximation was developed to accelerate the estimation of the post-experimental expected information gains (Kullback–Leibler divergence) in model parameters and predictive quantities of interest in the Bayesian framework. A closed-form asymptotic approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general case where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the Jacobian matrix of the data model with respect to the parameters, so that the information gain can be reduced to an integration against the marginal density of the transformed parameters that are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the posterior covariance matrix projected over the aforementioned orthogonal directions. To deal with the issue of dimensionality in a complex problem, we use either Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under-determined test cases. They include the designs of the scalar parameter in a one dimensional cubic polynomial function with two unidentifiable parameters forming a linear manifold, and the boundary source locations for impedance tomography in a square domain, where the unknown parameter is the conductivity, which is represented as a random field.

  18. Using Central Composite Experimental Design to Optimize the Degradation of Tylosin from Aqueous Solution by Photo-Fenton Reaction

    Directory of Open Access Journals (Sweden)

    Abd Elaziz Sarrai

    2016-05-01

    Full Text Available The feasibility of the application of the Photo-Fenton process in the treatment of aqueous solution contaminated by Tylosin antibiotic was evaluated. The Response Surface Methodology (RSM based on Central Composite Design (CCD was used to evaluate and optimize the effect of hydrogen peroxide, ferrous ion concentration and initial pH as independent variables on the total organic carbon (TOC removal as the response function. The interaction effects and optimal parameters were obtained by using MODDE software. The significance of the independent variables and their interactions was tested by means of analysis of variance (ANOVA with a 95% confidence level. Results show that the concentration of the ferrous ion and pH were the main parameters affecting TOC removal, while peroxide concentration had a slight effect on the reaction. The optimum operating conditions to achieve maximum TOC removal were determined. The model prediction for maximum TOC removal was compared to the experimental result at optimal operating conditions. A good agreement between the model prediction and experimental results confirms the soundness of the developed model.
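
    The run layout of a central composite design like the one used here is easy to generate in coded units (factorial cube, axial points at ±alpha, replicated centers); the mapping to the actual H2O2, Fe2+ and pH ranges is the study's and is not reproduced:

        from itertools import product
        import numpy as np

        def central_composite(k, alpha=None, n_center=6):
            """Coded CCD: 2**k factorial points + 2k axial (star) points + center replicates.
            alpha defaults to the rotatable value (2**k)**0.25."""
            alpha = (2.0 ** k) ** 0.25 if alpha is None else alpha
            factorial = np.array(list(product((-1.0, 1.0), repeat=k)))
            axial = np.zeros((2 * k, k))
            for i in range(k):
                axial[2 * i, i] = -alpha
                axial[2 * i + 1, i] = alpha
            center = np.zeros((n_center, k))
            return np.vstack([factorial, axial, center])

        design = central_composite(3)      # three coded factors
        print(design.shape)                # (20, 3): 8 factorial + 6 axial + 6 center runs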

  19. Design and experimental realization of an optimal scheme for teleportation of an n-qubit quantum state

    Science.gov (United States)

    Sisodia, Mitali; Shukla, Abhishek; Thapliyal, Kishore; Pathak, Anirban

    2017-12-01

    An explicit scheme (quantum circuit) is designed for the teleportation of an n-qubit quantum state. It is established that the proposed scheme requires an optimal amount of quantum resources, whereas larger amount of quantum resources have been used in a large number of recently reported teleportation schemes for the quantum states which can be viewed as special cases of the general n-qubit state considered here. A trade-off between our knowledge about the quantum state to be teleported and the amount of quantum resources required for the same is observed. A proof-of-principle experimental realization of the proposed scheme (for a 2-qubit state) is also performed using 5-qubit superconductivity-based IBM quantum computer. The experimental results show that the state has been teleported with high fidelity. Relevance of the proposed teleportation scheme has also been discussed in the context of controlled, bidirectional, and bidirectional controlled state teleportation.

  20. Optimization Design Method and Experimental Validation of a Solar PVT Cogeneration System Based on Building Energy Demand

    Directory of Open Access Journals (Sweden)

    Chao Zhou

    2017-08-01

    Full Text Available Photovoltaic-thermal (PVT) technology refers to the integration of a photovoltaic (PV) module and a conventional solar thermal collector, representing a deeper exploitation and utilization of solar energy. In this paper, we evaluate the performance of a solar PVT cogeneration system based on a specific building energy demand using theoretical modeling and experimental study. Through calculation and simulation, the dynamic heating load and electricity load are obtained as the basis of the system design. An analytical expression for the connection of the PVT collector array is derived using basic energy balance equations and thermal models, and an optimized design method for the system is developed from these analytical results. In addition, fuzzy control of frequency-converting circulating water pumps and of pipeline switching by electromagnetic valves is introduced to keep the system at an optimal working point. An experimental setup comprising 36 PVT collectors, with every six collectors connected in series, was established. The thermal energy generation, thermal efficiency, power generation and photovoltaic efficiency are reported. The results demonstrate that the demonstration solar PVT cogeneration system can meet the building energy demand during the daytime in the heating season.

  1. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    Science.gov (United States)

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review is presented of the application of response surface methodology (RSM) to the simultaneous optimization of several responses in the field of analytical method development. Several critical issues, such as response transformation, multiple response optimization and modeling with least squares and artificial neural networks, are discussed. The most recent analytical applications are presented in the context of analytical method development, especially multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
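
    The desirability approach discussed in this review combines the individual (Derringer-Suich) desirabilities of each response into their geometric mean. A small sketch with hypothetical chromatographic responses:

        import numpy as np

        def d_maximize(y, low, high, s=1.0):
            """Desirability for a response to be maximized (0 below 'low', 1 above 'high')."""
            return float(np.clip((y - low) / (high - low), 0.0, 1.0) ** s)

        def d_minimize(y, low, high, s=1.0):
            """Desirability for a response to be minimized (1 below 'low', 0 above 'high')."""
            return float(np.clip((high - y) / (high - low), 0.0, 1.0) ** s)

        def overall_desirability(ds):
            """Overall desirability D: geometric mean of the individual desirabilities."""
            ds = np.asarray(ds, dtype=float)
            return float(np.prod(ds) ** (1.0 / len(ds)))

        # Hypothetical responses: peak resolution (maximize) and run time in minutes (minimize)
        d1 = d_maximize(y=1.8, low=1.0, high=2.0)
        d2 = d_minimize(y=12.0, low=8.0, high=20.0)
        print(overall_desirability([d1, d2]))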

  2. Conditions optimization for obtaining biodiesel from soybean oil using the mixture experimental design

    Directory of Open Access Journals (Sweden)

    Kelly Roberta Spacino

    2010-09-01

    Full Text Available The optimization of the yield of the transesterification reaction to obtain B100 biodiesel was studied using sodium hydroxide, potassium hydroxide, sodium methoxide and sodium ethoxide as catalysts. A randomized simplex-centroid mixture design was applied, and the optimization results indicate, when using methanol, a yield of 97.61% with 30.77% NaOH and 69.23% sodium methoxide, and, when using ethanol, a yield of 89.32% with sodium ethoxide alone. Chromatographic analysis indicated that the B100 biodiesel obtained is within the parameters established by Brazilian legislation.

  3. A Laplace method for under-determined Bayesian optimal experimental designs

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2014-01-01

    In Long et al. (2013), a new method based on the Laplace approximation was developed to accelerate the estimation of the post-experimental expected information gains (Kullback–Leibler divergence) in model parameters and predictive quantities

  4. Design and performance characteristics of solar adsorption refrigeration system using parabolic trough collector: Experimental and statistical optimization technique

    International Nuclear Information System (INIS)

    Abu-Hamdeh, Nidal H.; Alnefaie, Khaled A.; Almitani, Khalid H.

    2013-01-01

    Highlights: • Successful use of olive waste/methanol as an adsorbent/adsorbate pair. • The experimental gross cycle coefficient of performance obtained was COPa = 0.75. • Optimization showed that increasing the adsorbent mass within a certain range increases the COP. • The statistical optimization led to an optimum tank volume between 0.2 and 0.3 m³. • Increasing the collector area within a certain range increased the COP. - Abstract: The current work presents a model of a solar adsorption refrigeration system developed to specific requirements and specifications. The scheme can be employed as a refrigerator and cooler unit suitable for remote areas. The unit runs on a parabolic trough solar collector (PTC) and uses olive waste as adsorbent with methanol as adsorbate. Cooling production, COP (coefficient of performance) and COPa (cycle gross coefficient of performance) were used to assess the system performance. The optimum design parameters were arrived at through statistical and experimental methods. The lowest temperature attained in the refrigerated space was 4 °C at an ambient temperature of 27 °C. The temperature decreased steadily from 20:30, when the actual cooling started, until it reached 4 °C at 01:30 the next day, when it rose again. The highest COPa obtained was 0.75.

  5. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. The response surface methodology was employed to study the optimization of sample preparation using supercritical carbon dioxide for wedelolactone from E. alba (L.) Hassk. The optimized sample preparation involves the investigation of quantitative effects of sample preparation parameters viz. operating pressure, temperature, modifier concentration and time on yield of wedelolactone using Box-Behnken design. The wedelolactone content was determined using validated HPLC methodology. The experimental data were fitted to second-order polynomial equation using multiple regression analysis and analyzed using the appropriate statistical method. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44% and extraction time, 60 min. Optimum extraction conditions demonstrated wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed significant effect on the wedelolactone yield. The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
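
    Fitting the second-order polynomial mentioned above reduces to ordinary least squares on an expanded model matrix (intercept, linear, squared and interaction terms). The design matrix and yields below are synthetic stand-ins, not the extraction data:

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)

        def quadratic_features(X):
            """Second-order RSM terms: intercept, linear, pure quadratic and two-factor interactions."""
            n, k = X.shape
            cols = [np.ones(n)] + [X[:, i] for i in range(k)] + [X[:, i] ** 2 for i in range(k)]
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
            return np.column_stack(cols)

        # Synthetic coded settings for 4 factors (pressure, temperature, modifier, time) and yields
        X = rng.uniform(-1, 1, size=(29, 4))
        y = 15 + 2 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.8 * X[:, 0] * X[:, 2] + rng.normal(0, 0.3, 29)

        beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
        print("fitted coefficients:", beta.round(2))

    The stationary point of the fitted quadratic (or a simple grid search over the coded region) then gives the candidate optimum that is verified experimentally.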

  6. Formulation, optimization and characterization of cationic polymeric nanoparticles of mast cell stabilizing agent using the Box-Behnken experimental design.

    Science.gov (United States)

    Gajra, Balaram; Patel, Ravi R; Dalwadi, Chintan

    2016-01-01

    The present research work aimed to develop and optimize sustained-release biodegradable chitosan nanoparticles (CSNPs) as a delivery vehicle for sodium cromoglicate (SCG) using a circumscribed Box-Behnken experimental design (BBD) and to evaluate their potential for oral permeability enhancement. The 3-factor, 3-level BBD was employed to investigate the combined influence of formulation variables on the particle size and entrapment efficiency (%EE) of SCG-CSNPs prepared by the ionic gelation method. The generated polynomial equation was validated, and a desirability function was utilized for optimization. The optimized SCG-CSNPs were evaluated by physicochemical, morphological and in vitro characterization, and their permeability enhancement potential was assessed by an ex vivo study and an uptake study using CLSM. The SCG-CSNPs exhibited a particle size of 200.4 ± 4.06 nm and an %EE of 62.68 ± 2.4% with a unimodal size distribution and a cationic, spherical, smooth surface. Physicochemical and in vitro characterization revealed that SCG exists in amorphous form inside the CSNPs without interaction and shows a sustained release profile. The ex vivo and uptake studies showed the permeability enhancement potential of the CSNPs. The developed SCG-CSNPs can be considered a promising delivery strategy with respect to improved permeability and sustained drug release, demonstrating the importance of CSNPs as a potential oral delivery system for the treatment of allergic rhinitis. Further studies should be performed to establish the pharmacokinetic potential of the CSNPs.

  7. Optimization of a new polymeric chromium (III) membrane electrode based on methyl violet by using experimental design.

    Science.gov (United States)

    Kazemi, Sayed Yahya; Hamidi, Akram sadat; Asanjarani, Neda; Zolgharnein, Javad

    2010-06-15

    Plackett-Burman and Box-Behnken designs were applied as experimental design strategies to screen and optimize the influence of membrane ingredients on the electrode performance. A new poly(vinyl chloride) membrane sensor for Cr(III) based on methyl violet as ionophore was planned. The major variables considered for modeling the Nernstian slope (the response) were: PVC, plasticizers, methyl violet, KpClTPB, pH, conditioning time and internal solution concentration. The Plackett-Burman design was used to screen the main factors, and a Box-Behnken response surface was used to build a model for optimizing the response. The optimized membrane electrode shows a Nernstian response for chromium(III) ions over a wide linear range from 1.99 × 10⁻⁶ to 3.16 × 10⁻² mol L⁻¹ with a slope of 19.5 ± 0.1 mV decade⁻¹ of activity. It can be applied in the pH range from 3.5 to 6.5 with a detection limit of 1.77 × 10⁻⁶ mol L⁻¹ (0.092 mg L⁻¹). The response time of the sensor is about 8 s and the membrane can be used for more than 6 weeks without any deviation. The relative standard deviations (R.S.D.) for six replicate measurements of 1.0 × 10⁻⁴ and 1.0 × 10⁻³ mol L⁻¹ Cr(III) were 3.2 and 3%, respectively. The electrode showed comparatively good selectivity with respect to many cations, including alkaline earth, transition and heavy metal ions. It was successfully used as an indicator electrode in the potentiometric titration of Cr(III) with EDTA and was also applied to the direct determination of the chromium(III) content of spiked water and soil samples.

  8. Ethanol Production from Kitchen Garbage Using Zymomonas mobilis: Optimization of Parameters through Statistical Experimental Designs

    OpenAIRE

    Ma, H.; Wang, Q.; Gong, L.; Wang, X.; Yin, W.

    2008-01-01

    Plackett-Burman design was employed to screen 8 parameters for ethanol production from kitchen garbage by Zymomonas mobilis in simultaneous saccharification and fermentation. The parameters were divided into two parts, four kinds of enzymes and supplementation nutrients. The result indicated that the nutrient inside kitchen garbage could meet the requirement of ethanol production without supplementation, only protease and glucoamylase were needed to accelerate the ethanol production. The opti...

  9. Heterogeneity of the gut microbiome in mice: guidelines for optimizing experimental design

    Science.gov (United States)

    Laukens, Debby; Brinkman, Brigitta M.; Raes, Jeroen; De Vos, Martine; Vandenabeele, Peter

    2015-01-01

    Targeted manipulation of the gut flora is increasingly being recognized as a means to improve human health. Yet, the temporal dynamics and intra- and interindividual heterogeneity of the microbiome represent experimental limitations, especially in human cross-sectional studies. Therefore, rodent models represent an invaluable tool to study the host–microbiota interface. Progress in technical and computational tools to investigate the composition and function of the microbiome has opened a new era of research and we gradually begin to understand the parameters that influence variation of host-associated microbial communities. To isolate true effects from confounding factors, it is essential to include such parameters in model intervention studies. Also, explicit journal instructions to include essential information on animal experiments are mandatory. The purpose of this review is to summarize the factors that influence microbiota composition in mice and to provide guidelines to improve the reproducibility of animal experiments. PMID:26323480

  10. Optimization of critical factors to enhance polyhydroxyalkanoates (PHA) synthesis by mixed culture using Taguchi design of experimental methodology.

    Science.gov (United States)

    Venkata Mohan, S; Venkateswar Reddy, M

    2013-01-01

    Optimizing different factors is crucial for the enhancement of mixed-culture bioplastics (polyhydroxyalkanoates, PHA) production. A design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was applied to evaluate the influence and specific function of eight important factors (iron, glucose concentration, VFA concentration, VFA composition, nitrogen concentration, phosphorus concentration, pH, and microenvironment) on bioplastics production. Three levels of factor variation were considered (2¹ × 3⁷) with a symbolic array experimental matrix [L18: 18 experimental trials]. All the factors were assigned three levels except iron concentration (two levels). Among all the factors, the microenvironment influenced bioplastics production most substantially (contributing 81%), followed by pH (11%) and glucose concentration (2.5%). Validation experiments performed with the obtained optimum conditions resulted in improved PHA production. Good substrate degradation (as COD) of 68% was registered during PHA production. Dehydrogenase and phosphatase enzymatic activities were monitored during process operation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Optimization of β-casein stabilized nanoemulsions using experimental mixture design.

    Science.gov (United States)

    Maher, Patrick G; Fenelon, Mark A; Zhou, Yankun; Kamrul Haque, Md; Roos, Yrjö H

    2011-10-01

    The objective of this study was to determine the effect of changing the viscosity and glass transition temperature of the continuous phase of nanoemulsion systems on their subsequent stability. Formulations comprising β-casein (2.5%, 5%, 7.5%, and 10% w/w), lactose (0% to 20% w/w), and trehalose (0% to 20% w/w) were generated from Design of Experiments (DOE) software and tested for glass transition temperature and onset of ice-melting temperature in the maximally freeze-concentrated state (Tg′ and Tm′), and for viscosity (μ). Increasing β-casein content had a significant (P < 0.05) effect on these properties. The mixture design was used to predict the optimum levels of lactose and trehalose required to attain the minimum and maximum Tg′ and viscosity in solution at fixed protein contents. These mixtures were used to form the continuous phase of β-casein-stabilized nanoemulsions (10% w/w sunflower oil) prepared by microfluidization at 70 MPa. The nanoemulsions were analyzed for Tg′ and Tm′, as well as viscosity, mean particle size, and stability. Increasing levels of β-casein (2.5% to 10% w/w) again had a significant (P < 0.05) effect. The mixture DOE was successfully used to predict glass transition and rheological properties for the development of a continuous phase for use in nanoemulsions. © 2011 Institute of Food Technologists®

  12. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    Science.gov (United States)

    Humans are exposed to mixtures of environmental compounds. A regulatory assumption is that the mixtures of chemicals act in an additive manner. However, this assumption requires experimental validation. Traditional experimental designs (full factorial) require a large number of e...

  13. Optimization of polymeric triiodide membrane electrode based on clozapine-triiodide ion-pair using experimental design.

    Science.gov (United States)

    Farhadi, Khalil; Bahram, Morteza; Shokatynia, Donya; Salehiyan, Floria

    2008-07-15

    Central composite design (CCD) and response surface methodology (RSM) were used as experimental strategies for modeling and optimizing the influence of several variables on the performance of a new PVC membrane triiodide ion-selective electrode. This triiodide sensor is based on triiodide-clozapine ion-pair complexation. PVC, plasticizer, ion-pair amount and pH were investigated as the four variables in building a model to achieve the best Nernstian slope (59.9 mV) as the response. The electrode is prepared by incorporating the ion exchanger in a PVC matrix plasticized with 2-nitrophenyl octyl ether, which is coated directly on the surface of a graphite electrode. The influence of foreign ions on the electrode performance was also investigated. The optimized membranes demonstrate a Nernstian response for triiodide ions over a wide linear range from 5.0 × 10⁻⁶ to 1.0 × 10⁻² mol L⁻¹, with a limit of detection of 2.0 × 10⁻⁶ mol L⁻¹ at 25 °C. The electrodes can be used over a wide pH range of 4-8 and have the advantages of easy preparation, good selectivity, fast response time, long lifetime (over 3 months) and small interference from hydrogen ion. The proposed electrode was successfully used as an indicator electrode in the potentiometric titration of triiodide ions and ascorbic acid.

  14. Factorial experimental design for the optimization of catalytic degradation of malachite green dye in aqueous solution by Fenton process

    Directory of Open Access Journals (Sweden)

    A. Elhalil

    2016-09-01

    Full Text Available This work focuses on the optimization of the catalytic degradation of malachite green dye (MG) by the Fenton process (Fe2+/H2O2). A 2⁴ full factorial experimental design was used to evaluate the effects of the four factors considered in the optimization of the oxidative process: concentration of MG (X1), concentration of Fe2+ (X2), concentration of H2O2 (X3) and temperature (X4). Individual and interaction effects of the factors influencing the percentage of dye degradation were tested. The interaction analysis shows a dependency between the concentration of MG and the concentration of Fe2+, and between the concentration of Fe2+ and the concentration of H2O2, expressed by the large values of the interaction coefficients. The analysis of variance proved that the concentration of MG, the concentration of Fe2+ and the concentration of H2O2 influence the catalytic degradation, while the temperature does not. In the optimization, the close agreement between observed and predicted degradation efficiencies, the correlation coefficient of the model (R² = 0.986) and the large F-ratio proved the validity of the model. The optimum degradation efficiency of malachite green was 93.83%, obtained with a malachite green concentration of 10 mg/L, an Fe2+ concentration of 10 mM, an H2O2 concentration of 25.6 mM and a temperature of 40 °C.
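
    In a 2⁴ full factorial, each main effect and interaction is simply the difference between the mean response at the +1 and -1 levels of the corresponding contrast column. The responses below are synthetic, standing in for the measured degradation percentages:

        import numpy as np
        from itertools import combinations, product

        rng = np.random.default_rng(3)

        # Coded 2^4 full factorial: X1=[MG], X2=[Fe2+], X3=[H2O2], X4=temperature
        levels = np.array(list(product((-1, 1), repeat=4)))
        y = (70 - 3 * levels[:, 0] + 5 * levels[:, 1] + 4 * levels[:, 2]
             + 2 * levels[:, 1] * levels[:, 2] + rng.normal(0, 1, 16))  # synthetic responses

        def effect(contrast):
            """Factorial effect: mean response at +1 minus mean response at -1 of the contrast."""
            return y[contrast == 1].mean() - y[contrast == -1].mean()

        for i in range(4):
            print(f"main effect X{i + 1}: {effect(levels[:, i]):+.2f}")
        for i, j in combinations(range(4), 2):
            print(f"interaction X{i + 1}X{j + 1}: {effect(levels[:, i] * levels[:, j]):+.2f}")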

  15. I-optimal mixture designs

    OpenAIRE

    GOOS, Peter; JONES, Bradley; SYAFITRI, Utami

    2013-01-01

    In mixture experiments, the factors under study are proportions of the ingredients of a mixture. The special nature of the factors in a mixture experiment necessitates specific types of regression models, and specific types of experimental designs. Although mixture experiments usually are intended to predict the response(s) for all possible formulations of the mixture and to identify optimal proportions for each of the ingredients, little research has been done concerning their I-optimal desi...

  16. Optimal Market Design

    NARCIS (Netherlands)

    Boone, J.; Goeree, J.K.

    2010-01-01

    This paper introduces three methodological advances to study the optimal design of static and dynamic markets. First, we apply a mechanism design approach to characterize all incentive-compatible market equilibria. Second, we conduct a normative analysis, i.e. we evaluate alternative competition and

  17. ATHENA optimized coating design

    DEFF Research Database (Denmark)

    Ferreira, Desiree Della Monica; Christensen, Finn Erland; Jakobsen, Anders Clemen

    2012-01-01

    The optimization of the coating design for the ATHENA mission is described and the possibility of increasing the telescope effective area in the range between 0.1 and 10 keV is investigated. An independent computation of the on-axis effective area based on the mirror design of ATHENA is performed...... in order to review the current coating baseline. The performance of several material combinations, considering a simple bi-layer, a simple multilayer and a linearly graded multilayer coating, is tested, and the mirror performance is simulated considering both the optimized coating design and the coating

  18. "Real-time" disintegration analysis and D-optimal experimental design for the optimization of diclofenac sodium fast-dissolving films.

    Science.gov (United States)

    El-Malah, Yasser; Nazzal, Sami

    2013-01-01

    The objective of this work was to study the dissolution and mechanical properties of fast-dissolving films prepared from a ternary mixture of pullulan, polyvinylpyrrolidone and hypromellose. Disintegration studies were performed in real time by probe spectroscopy to detect the onset of film disintegration. The tensile strength and elastic modulus of the films were measured by texture analysis. The disintegration time of the films ranged from 21 to 105 seconds, whereas their mechanical properties ranged from approximately 2 to 49 MPa for tensile strength and 1 to 21 MPa% for Young's modulus. After generating polynomial models correlating the variables using a D-optimal mixture design, an optimal formulation with the desired responses was proposed by the statistical package. For validation, a new film formulation loaded with diclofenac sodium, based on the optimized composition, was prepared and tested for dissolution and tensile strength. Dissolution of the optimized film commenced almost immediately, with 50% of the drug released within one minute. The tensile strength and Young's modulus of the film were 11.21 MPa and 6.78 MPa%, respectively. Real-time spectroscopy in conjunction with statistical design was shown to be very efficient for the optimization and development of non-conventional intraoral delivery systems such as fast-dissolving films.

  19. Wind farm design optimization

    Energy Technology Data Exchange (ETDEWEB)

    Carreau, Michel; Morgenroth, Michael; Belashov, Oleg; Mdimagh, Asma; Hertz, Alain; Marcotte, Odile

    2010-09-15

    Innovative numerical computer tools have been developed to streamline the estimation and design process and to optimize the wind farm design with respect to the overall return on investment. The optimization engine can find the collector system layout automatically, which provides a powerful tool to quickly study various alternatives, taking into account more precisely the constraints or factors that previously would have been too costly to analyze in detail. Our wind farm tools have evolved through numerous projects and created value for our clients, yielding wind farm projects with higher projected returns.

  20. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm

    Science.gov (United States)

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2018-02-01

    A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) with UV-Vis spectrophotometric detection was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as functional monomer in the presence of FLU as template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. Further optimization was performed using central composite design (CCD), an artificial neural network (ANN) and a genetic algorithm (GA). Under optimal conditions the calibration curve was linear over the concentration range of 10⁻⁷-10⁻⁸ M with a correlation coefficient (R²) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10⁻⁹ M, and the repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples.
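
    For reference, the classical 12-run Plackett-Burman screening design used in such studies can be built by cyclically shifting the standard N=12 generator row and appending a row of -1 (factor assignments here are generic, not the study's):

        import numpy as np

        def plackett_burman_12():
            """12-run Plackett-Burman design (up to 11 two-level factors), built from the
            classical N=12 generator row by cyclic shifts plus a final all -1 row."""
            gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
            rows = [np.roll(gen, i) for i in range(11)]
            rows.append(-np.ones(11, dtype=int))
            return np.array(rows)

        design = plackett_burman_12()
        print(design.shape)           # (12, 11)
        print(design.T @ design)      # 12*I: the columns are mutually orthogonal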

  1. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm.

    Science.gov (United States)

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2018-02-05

    A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) with UV-Vis spectrophotometric detection was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as functional monomer in the presence of FLU as template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. Further optimization was performed using central composite design (CCD), an artificial neural network (ANN) and a genetic algorithm (GA). Under optimal conditions the calibration curve was linear over the concentration range of 10⁻⁷-10⁻⁸ M with a correlation coefficient (R²) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10⁻⁹ M, and the repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Optimization of photocatalytic degradation of methyl blue using silver ion doped titanium dioxide by combination of experimental design and response surface approach

    Energy Technology Data Exchange (ETDEWEB)

    Sahoo, C. [Environmental Engineering Division, Department of Civil Engineering, Indian Institute of Technology, Kharagpur, 721302 (India); Gupta, A.K., E-mail: agupta@civil.iitkgp.ernet.in [Environmental Engineering Division, Department of Civil Engineering, Indian Institute of Technology, Kharagpur, 721302 (India)

    2012-05-15

    Highlights: ► Optimization of color removal and COD removal done by response surface approach. ► The experiments were designed using a Box-Behnken spherical design. ► Two quadratic polynomial models were developed for the responses. ► Single-point numerical optimization was done considering three constraints. ► Validation by performing the experiment under optimized conditions. - Abstract: Photocatalytic degradation of methyl blue (MYB) was studied using Ag⁺-doped TiO₂ under UV irradiation in a batch reactor. Catalytic dose, initial concentration of dye and pH of the reaction mixture were found to influence the degradation process most. The degradation was found to be effective in the ranges of catalytic dose (0.5-1.5 g/L), initial dye concentration (25-100 ppm) and pH of the reaction mixture (5-9). Using the three-factor, three-level Box-Behnken design of experiments technique, 15 sets of experiments were designed considering the effective ranges of the influential parameters. The results of the experiments were fitted to two quadratic polynomial models developed using response surface methodology (RSM), representing the functional relationship between the decolorization and mineralization of MYB and the experimental parameters. Design Expert software version 8.0.6.1 was used to optimize the effects of the experimental parameters on the responses. The optimum values of the parameters were: dose of Ag⁺-doped TiO₂, 0.99 g/L; initial concentration of MYB, 57.68 ppm; and pH of the reaction mixture, 7.76. Under the optimal conditions the predicted decolorization and mineralization rates of MYB were 95.97% and 80.33%, respectively. Regression analysis with R² values >0.99 showed the goodness of fit of the experimental results with the predicted values.

  3. Optimization of photocatalytic degradation of methyl blue using silver ion doped titanium dioxide by combination of experimental design and response surface approach

    International Nuclear Information System (INIS)

    Sahoo, C.; Gupta, A.K.

    2012-01-01

    Highlights: ► Optimization of color removal and COD removal done by response surface approach. ► The experiments were designed using Box–Behnken spherical design. ► Two quadratic polynomial models were developed for the responses. ► Single point numerical optimization was done considering three constraints. ► Validation by performing the experiment under optimized conditions. - Abstract: Photocatalytic degradation of methyl blue (MYB) was studied using Ag⁺ doped TiO₂ under UV irradiation in a batch reactor. Catalytic dose, initial concentration of dye and pH of the reaction mixture were found to influence the degradation process most. The degradation was found to be effective in the range catalytic dose (0.5–1.5 g/L), initial dye concentration (25–100 ppm) and pH of reaction mixture (5–9). Using the three factors three levels Box–Behnken design of experiment technique 15 sets of experiments were designed considering the effective ranges of the influential parameters. The results of the experiments were fitted to two quadratic polynomial models developed using response surface methodology (RSM), representing functional relationship between the decolorization and mineralization of MYB and the experimental parameters. Design Expert software version 8.0.6.1 was used to optimize the effects of the experimental parameters on the responses. The optimum values of the parameters were dose of Ag⁺ doped TiO₂ 0.99 g/L, initial concentration of MYB 57.68 ppm and pH of reaction mixture 7.76. Under the optimal condition the predicted decolorization and mineralization rate of MYB were 95.97% and 80.33%, respectively. Regression analysis with R² values >0.99 showed goodness of fit of the experimental results with predicted values.

  4. Optimization of high pressure machine decocting process for Dachengqi Tang using HPLC fingerprints combined with the Box–Behnken experimental design

    Directory of Open Access Journals (Sweden)

    Rui-Fang Xie

    2015-04-01

    Full Text Available Using Dachengqi Tang (DCQT) as a model, high performance liquid chromatography (HPLC) fingerprints were applied to optimize the machine extracting process with the Box–Behnken experimental design. HPLC fingerprints were carried out to investigate the chemical ingredients of DCQT; a synthetic weighing method based on the analytic hierarchy process (AHP) and criteria importance through intercriteria correlation (CRITIC) was performed to calculate synthetic scores of the fingerprints; using the marker ingredient contents and synthetic scores as indicators, the Box–Behnken design was carried out to optimize the process parameters of the machine decocting process under high pressure for DCQT. Results of the optimal process showed that the herb materials were soaked for 45 min and extracted with a 9-fold volume of water in the decocting machine at a temperature of 140 °C until the pressure reached 0.25 MPa; the hot decoction was then drawn off to soak Dahuang and Mangxiao for 5 min. Finally, the obtained solutions were mixed, filtered and packed. It was concluded that HPLC fingerprints combined with the Box–Behnken experimental design could be used to optimize the extracting process of traditional Chinese medicine (TCM). Keywords: Dachengqi Tang, HPLC fingerprints, Box–Behnken design, Synthetic weighing method
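
    The CRITIC half of the synthetic weighing step can be illustrated as below (numpy assumed; the peak-area matrix is purely illustrative). The AHP half, which derives weights from expert pairwise judgments, would be combined with these objective weights separately and is not shown:

```python
import numpy as np

def critic_weights(scores):
    """CRITIC objective weighting for a (samples x criteria) score matrix:
    a criterion gets more weight the larger its spread and the less it is
    correlated with the other criteria."""
    Z = (scores - scores.min(0)) / (scores.max(0) - scores.min(0) + 1e-12)  # min-max normalize columns
    sigma = Z.std(0, ddof=1)
    R = np.corrcoef(Z, rowvar=False)
    C = sigma * (1.0 - R).sum(1)           # information content per criterion
    return C / C.sum()

# Hypothetical fingerprint peak areas (rows = decoction runs, columns = marker peaks)
peaks = np.array([[1.2, 0.8, 3.1],
                  [1.0, 1.1, 2.9],
                  [1.4, 0.9, 3.4]])
w = critic_weights(peaks)
synthetic_scores = peaks @ w               # one synthetic score per decoction run
```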

  5. Airfoil design and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Lutz, T. [Stuttgart Univ. (Germany). Inst. fuer Aerodynamik und Gasdynamik

    2001-07-01

    The aerodynamic efficiency of mildly swept wings is mainly influenced by the characteristics of the airfoil sections. The specific design of airfoils is therefore one of the classical tasks of aerodynamics. Since the airfoil characteristics are directly dependent on the inviscid pressure distribution, the application of inverse calculation methods is obvious. Direct numerical airfoil optimization offers an alternative to manual design and attracts increasing interest. (orig.)

  6. Bioremediation of chlorpyrifos contaminated soil by two phase bioslurry reactor: Process evaluation and optimization by Taguchi's design of experiments (DOE) methodology.

    Science.gov (United States)

    Pant, Apourv; Rai, J P N

    2018-04-15

    A two-phase bioreactor was constructed, designed and developed to evaluate chlorpyrifos remediation. Six biotic and abiotic factors (substrate loading rate, slurry-phase pH, slurry-phase dissolved oxygen (DO), soil-to-water ratio, temperature and soil microflora load) were evaluated by the design of experiments (DOE) methodology employing Taguchi's orthogonal array (OA). The six selected factors were considered at two levels in an L-8 array (2^7, 15 experiments) in the experimental design. The optimum operating conditions obtained from the methodology enhanced chlorpyrifos degradation from 283.86 µg/g to 955.364 µg/g, an overall enhancement of 70.34%. In the present study, with the help of a few well-defined experimental parameters, a mathematical model was constructed to understand the complex bioremediation process and optimize the approximate parameters to a high degree of accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
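
    For orientation, a standard Taguchi L8 (2^7) orthogonal array (eight runs) and the corresponding main-effect calculation can be sketched as follows; this is an illustrative construction assuming numpy, with placeholder responses rather than the study's data:

```python
import itertools
import numpy as np

def l8_array():
    """Taguchi L8 (2^7) orthogonal array built from a 2^3 full factorial:
    columns A, B, AB, C, AC, BC, ABC in +/-1 coding."""
    base = np.array(list(itertools.product((-1, 1), repeat=3)))
    A, B, C = base.T
    return np.column_stack([A, B, A*B, C, A*C, B*C, A*B*C])

def main_effects(design, y):
    """Effect of each column = mean response at +1 minus mean response at -1."""
    return np.array([y[col == 1].mean() - y[col == -1].mean() for col in design.T])

OA = l8_array()                 # assign the six factors to six of the seven columns
y = np.random.default_rng(1).uniform(280.0, 960.0, 8)   # placeholder degradation values (ug/g)
print(main_effects(OA, y))
```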

  7. Chronic behavior evaluation of a micro-machined neural implant with optimized design based on an experimentally derived model.

    Science.gov (United States)

    Andrei, Alexandru; Welkenhuysen, Marleen; Ameye, Lieveke; Nuttin, Bart; Eberle, Wolfgang

    2011-01-01

    Understanding the mechanical interactions between implants and the surrounding tissue is known to play an important role in improving the bio-compatibility of such devices. Using a recently developed model, a particular micro-machined neural implant design aiming at reducing the dependence of insertion forces on insertion speed was optimized. Implantations at 10 and 100 μm/s insertion speeds showed excellent agreement with the predicted behavior. Lesion size, gliosis (GFAP), inflammation (ED1) and neuronal cell density (NeuN) were evaluated after 6 weeks of chronic implantation, showing no insertion-speed dependence.

  8. Optimization of high pressure machine decocting process for Dachengqi Tang using HPLC fingerprints combined with the Box–Behnken experimental design

    OpenAIRE

    Xie, Rui-Fang; Shi, Zhi-Na; Li, Zhi-Cheng; Chen, Pei-Pei; Li, Yi-Min; Zhou, Xin

    2014-01-01

    Using Dachengqi Tang (DCQT) as a model, high performance liquid chromatography (HPLC) fingerprints were applied to optimize machine extracting process with the Box–Behnken experimental design. HPLC fingerprints were carried out to investigate the chemical ingredients of DCQT; synthetic weighing method based on analytic hierarchy process (AHP) and criteria importance through intercriteria correlation (CRITIC) was performed to calculate synthetic scores of fingerprints; using the mark ingredien...

  9. Experimental mixture design as a tool to optimize the growth of various Ganoderma species cultivated on media with different sugars

    Directory of Open Access Journals (Sweden)

    Yit Kheng Goh

    2016-01-01

    Full Text Available The influence of different medium components (glucose, sucrose, and fructose) on the growth of different Ganoderma isolates and species was investigated using mixture design. Ten sugar combinations based on three simple sugars were generated with two different concentrations, namely 3.3% and 16.7%, which represented low and high sugar levels, respectively. The media were adjusted to either pH 5 or 8. Ganoderma isolates (two G. boninense from oil palm, one Ganoderma species from coconut palm, G. lingzhi, and G. australe from tower tree) grew faster at pH 8. Ganoderma lingzhi proliferated at the slowest rate compared to all other tested Ganoderma species in all the media studied. However, G. boninense isolates grew the fastest. Different Ganoderma species were found to have different sugar preferences. This study illustrated that the mixture design can be used to determine the optimal combinations of sugar or other nutrient/chemical components of media for fungal growth.

  10. Optimization of high pressure machine decocting process for Dachengqi Tang using HPLC fingerprints combined with the Box-Behnken experimental design.

    Science.gov (United States)

    Xie, Rui-Fang; Shi, Zhi-Na; Li, Zhi-Cheng; Chen, Pei-Pei; Li, Yi-Min; Zhou, Xin

    2015-04-01

    Using Dachengqi Tang (DCQT) as a model, high performance liquid chromatography (HPLC) fingerprints were applied to optimize the machine extracting process with the Box-Behnken experimental design. HPLC fingerprints were carried out to investigate the chemical ingredients of DCQT; a synthetic weighing method based on the analytic hierarchy process (AHP) and criteria importance through intercriteria correlation (CRITIC) was performed to calculate synthetic scores of the fingerprints; using the marker ingredient contents and synthetic scores as indicators, the Box-Behnken design was carried out to optimize the process parameters of the machine decocting process under high pressure for DCQT. Results of the optimal process showed that the herb materials were soaked for 45 min and extracted with a 9-fold volume of water in the decocting machine at a temperature of 140 °C until the pressure reached 0.25 MPa; the hot decoction was then drawn off to soak Dahuang and Mangxiao for 5 min. Finally, the obtained solutions were mixed, filtered and packed. It was concluded that HPLC fingerprints combined with the Box-Behnken experimental design could be used to optimize the extracting process of traditional Chinese medicine (TCM).

  11. Optimization of microwave-assisted extraction and supercritical fluid extraction of carbamate pesticides in soil by experimental design methodology.

    Science.gov (United States)

    Sun, Lei; Lee, Hian Kee

    2003-10-03

    Orthogonal array design (OAD) was applied for the first time to optimize microwave-assisted extraction (MAE) and supercritical fluid extraction (SFE) conditions for the analysis of four carbamates (propoxur, propham, methiocarb, chlorpropham) from soil. The theory and methodology of a new OA16 (4(4)) matrix derived from an OA16 (2(15)) matrix were developed during the MAE optimization. An analysis of variance technique was employed as the data analysis strategy in this study. Determinations of analytes were completed using high-performance liquid chromatography (HPLC) with UV detection. Four carbamates were successfully extracted from soil with recoveries ranging from 85 to 105% with good reproducibility (approximately 4.9% RSD) under the optimum MAE conditions: 30 ml methanol, 80 degrees C extraction temperature, and 6-min microwave heating. An OA8 (2(7)) matrix was employed for the SFE optimization. The average recoveries and RSD of the analytes from spiked soil by SFE were 92 and 5.5%, respectively, except for propham (66.3±7.9%), under the following conditions: heating for 30 min at 60 degrees C under supercritical CO2 at 300 kg/cm2 modified with 10% (v/v) methanol. The composition of the supercritical fluid was demonstrated to be a crucial factor in the extraction. The addition of a small volume (10%) of methanol to CO2 greatly enhanced the recoveries of carbamates. A comparison of MAE with SFE was also conducted. The results indicated that >85% average recoveries were obtained by both optimized extraction techniques, and slightly higher recoveries of three carbamates (propoxur, propham and methiocarb) were achieved using MAE. SFE showed slightly higher recovery for chlorpropham (93 vs. 87% for MAE). The effects of time-aged soil on the extraction of analytes were examined and the results obtained by both methods were also compared.

  12. Optimization of Xylanase production from Penicillium sp.WX-Z1 by a two-step statistical strategy: Plackett-Burman and Box-Behnken experimental design.

    Science.gov (United States)

    Cui, Fengjie; Zhao, Liming

    2012-01-01

    The objective of the study was to optimize the nutrient sources in a culture medium for the production of xylanase from Penicillium sp. WX-Z1 using Plackett-Burman and Box-Behnken designs. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp. WX-Z1, and xylanase production was then further optimized by response surface methodology (RSM) using a Box-Behnken design. The important nutrient sources in the culture medium, identified by the initial Plackett-Burman screening, were wheat bran, yeast extract, NaNO(3), MgSO(4), and CaCl(2). The optimal amounts (in g/L) for maximum production of xylanase were: wheat bran, 32.8; yeast extract, 1.02; NaNO(3), 12.71; MgSO(4), 0.96; and CaCl(2), 1.04. Using this statistical experimental design, xylanase production under the optimal conditions reached 46.50 U/mL, a 1.34-fold increase in xylanase activity compared with the original medium, for fermentation carried out in a 30-L bioreactor.

  13. Experimental Design Research

    DEFF Research Database (Denmark)

    This book presents a new, multidisciplinary perspective on and paradigm for integrative experimental design research. It addresses various perspectives on methods, analysis and overall research approach, and how they can be synthesized to advance understanding of design. It explores the foundations...... of experimental approaches and their utility in this domain, and brings together analytical approaches to promote an integrated understanding. The book also investigates where these approaches lead to and how they link design research more fully with other disciplines (e.g. psychology, cognition, sociology......, computer science, management). Above all, the book emphasizes the integrative nature of design research in terms of the methods, theories, and units of study—from the individual to the organizational level. Although this approach offers many advantages, it has inherently led to a situation in current...

  14. Mechanical Design Optimization Using Advanced Optimization Techniques

    CERN Document Server

    Rao, R Venkata

    2012-01-01

    Mechanical design includes an optimization process in which designers always consider objectives such as strength, deflection, weight, wear, corrosion, etc. depending on the requirements. However, design optimization for a complete mechanical assembly leads to a complicated objective function with a large number of design variables. It is good practice to apply optimization techniques to individual components or intermediate assemblies rather than to a complete assembly. Analytical or numerical methods for calculating the extreme values of a function may perform well in many practical cases, but may fail in more complex design situations. In real design problems, the number of design parameters can be very large and their influence on the value to be optimized (the goal function) can be very complicated, having nonlinear character. In these complex cases, advanced optimization algorithms offer solutions to the problems, because they find a solution near to the global optimum within reasonable time and computational ...

  15. Optimizing Within-Subject Experimental Designs for jICA of Multi-Channel ERP and fMRI

    Science.gov (United States)

    Mangalathu-Arumana, Jain; Liebenthal, Einat; Beardsley, Scott A.

    2018-01-01

    Joint independent component analysis (jICA) can be applied within subject for fusion of multi-channel event-related potentials (ERP) and functional magnetic resonance imaging (fMRI), to measure brain function at high spatiotemporal resolution (Mangalathu-Arumana et al., 2012). However, the impact of experimental design choices on jICA performance has not been systematically studied. Here, the sensitivity of jICA for recovering neural sources in individual data was evaluated as a function of imaging SNR, number of independent representations of the ERP/fMRI data, relationship between instantiations of the joint ERP/fMRI activity (linear, non-linear, uncoupled), and type of sources (varying parametrically and non-parametrically across representations of the data), using computer simulations. Neural sources were simulated with spatiotemporal and noise attributes derived from experimental data. The best performance, maximizing both cross-modal data fusion and the separation of brain sources, occurred with a moderate number of representations of the ERP/fMRI data (10–30), as in a mixed block/event related experimental design. Importantly, the type of relationship between instantiations of the ERP/fMRI activity, whether linear, non-linear or uncoupled, did not in itself impact jICA performance, and was accurately recovered in the common profiles (i.e., mixing coefficients). Thus, jICA provides an unbiased way to characterize the relationship between ERP and fMRI activity across brain regions, in individual data, rendering it potentially useful for characterizing pathological conditions in which neurovascular coupling is adversely affected. PMID:29410611

  16. Optimization of instrumental neutron activation analysis method by means of the 2^k experimental design technique aiming at the validation of analytical procedures

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2013-01-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). A 2^k experimental design was applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
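
    A minimal sketch of the 2^k design and effect estimation referred to above, assuming numpy; the three coded variables stand in for decay time, counting time and sample-to-detector distance, and the responses are placeholders rather than measured mass fractions:

```python
import itertools
import numpy as np

def two_level_full_factorial(k):
    """All 2^k runs of a two-level full factorial in +/-1 coding."""
    return np.array(list(itertools.product((-1.0, 1.0), repeat=k)))

def factorial_effects(X, y):
    """Main effects and two-factor interaction effects for a 2^k design
    (each effect = mean response at +1 minus mean response at -1 of a contrast)."""
    k = X.shape[1]
    effects = {}
    for i in range(k):
        effects[f"x{i+1}"] = y[X[:, i] > 0].mean() - y[X[:, i] < 0].mean()
    for i, j in itertools.combinations(range(k), 2):
        c = X[:, i] * X[:, j]
        effects[f"x{i+1}*x{j+1}"] = y[c > 0].mean() - y[c < 0].mean()
    return effects

X = two_level_full_factorial(3)     # decay time, counting time, sample-detector distance (coded)
y = np.random.default_rng(2).normal(100.0, 2.0, len(X))   # placeholder mass fraction results
print(factorial_effects(X, y))
```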

  17. Optimization of instrumental neutron activation analysis method by means of the 2^k experimental design technique aiming at the validation of analytical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: rpetroni@ipen.br, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). A 2^k experimental design was applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)

  18. Dynamic modeling, experimental evaluation, optimal design and control of integrated fuel cell system and hybrid energy systems for building demands

    Science.gov (United States)

    Nguyen, Gia Luong Huu

    obtained experimental data, the research studied the control of airflow to regulate the temperature of reactors within the fuel processor. The dynamic model provided a platform to test the dynamic response for different control gains. With sufficient sensing and appropriate control, a rapid response to maintain the temperature of the reactor despite an increase in power was possible. The third part of the research studied the use of a fuel cell in conjunction with photovoltaic panels, and energy storage to provide electricity for buildings. This research developed an optimization framework to determine the size of each device in the hybrid energy system to satisfy the electrical demands of buildings and yield the lowest cost. The advantage of having the fuel cell with photovoltaic and energy storage was the ability to operate the fuel cell at baseload at night, thus reducing the need for large battery systems to shift the solar power produced in the day to the night. In addition, the dispatchability of the fuel cell provided an extra degree of freedom necessary for unforeseen disturbances. An operation framework based on model predictive control showed that the method is suitable for optimizing the dispatch of the hybrid energy system.

  19. Improved Titanium Billet Inspection Sensitivity through Optimized Phased Array Design, Part II: Experimental Validation and Comparative Study with Multizone

    International Nuclear Information System (INIS)

    Hassan, W.; Vensel, F.; Knowles, B.; Lupien, V.

    2006-01-01

    The inspection of critical rotating components of aircraft engines has made important advances over the last decade. The development of Phased Array (PA) inspection capability for billet and forging materials used in the manufacturing of critical engine rotating components has been a priority for Honeywell Aerospace. The demonstration of improved PA inspection system sensitivity over what is currently used at the inspection houses is a critical step in the development of this technology and its introduction to the supply base as a production inspection. As described in Part I (in these proceedings), a new phased array transducer was designed and manufactured for optimal inspection of eight inch diameter Ti-6Al-4V billets. After confirming that the transducer was manufactured in accordance with the design specifications, a validation study was conducted to assess the sensitivity improvement of the PAI over the current capability of Multi-zone (MZ) inspection. The results of this study confirm the significant (≅ 6 dB in FBH # sensitivity) improvement of the PAI sensitivity over that of MZI

  20. An experimental design approach for optimization of spectrophotometric method for estimation of cefixime trihydrate using ninhydrin as derivatizing reagent in bulk and pharmaceutical formulation

    Directory of Open Access Journals (Sweden)

    Yogita B. Wani

    2017-01-01

    Full Text Available The aim of the present work is to use experimental design to screen and optimize experimental variables for developing a spectrophotometric method for determining cefixime trihydrate content using ninhydrin as a derivatizing reagent. The method is based on the reaction of the amino group of cefixime with ninhydrin in an alkaline medium to form a yellow-colored derivative (λmax 436 nm). A two-level full factorial design was utilized to screen the effect of ninhydrin reagent concentration (X1), volume of ninhydrin reagent (X2), heating temperature (X3) and heating time (X4) on the formation of the cefixime–ninhydrin complex Y (absorbance). One way ANOVA and Pareto ranking analyses have shown that the ninhydrin reagent concentration (X1), volume of ninhydrin reagent (X2) and heating temperature (X3) were statistically significant factors (P < 0.05) affecting the formation of the cefixime–ninhydrin complex Y (absorbance). A Box-Behnken experimental design with response surface methodology was then utilized to evaluate the main, interaction and quadratic effects of these three factors on the selected response. With the help of a response surface plot and contour plot the optimum values of the selected factors were determined and used for further experiments. These values were a ninhydrin reagent concentration (X1) of 0.2% w/v, volume of ninhydrin reagent (X2) of 1 mL and heating temperature (X3) of 80 °C. The proposed method was validated according to the ICH Q2 (R1) method validation guidelines. The results of the present study have clearly shown that an experimental design concept may be effectively applied to the optimization of a spectrophotometric method for estimating the cefixime trihydrate content in bulk and pharmaceutical formulation with the least number of experimental runs possible.

  1. Optimization of photocatalytic degradation of methyl blue using silver ion doped titanium dioxide by combination of experimental design and response surface approach.

    Science.gov (United States)

    Sahoo, C; Gupta, A K

    2012-05-15

    Photocatalytic degradation of methyl blue (MYB) was studied using Ag(+) doped TiO(2) under UV irradiation in a batch reactor. Catalytic dose, initial concentration of dye and pH of the reaction mixture were found to influence the degradation process most. The degradation was found to be effective in the range catalytic dose (0.5-1.5 g/L), initial dye concentration (25-100 ppm) and pH of reaction mixture (5-9). Using the three factors three levels Box-Behnken design of experiment technique 15 sets of experiments were designed considering the effective ranges of the influential parameters. The results of the experiments were fitted to two quadratic polynomial models developed using response surface methodology (RSM), representing functional relationship between the decolorization and mineralization of MYB and the experimental parameters. Design Expert software version 8.0.6.1 was used to optimize the effects of the experimental parameters on the responses. The optimum values of the parameters were dose of Ag(+) doped TiO(2) 0.99 g/L, initial concentration of MYB 57.68 ppm and pH of reaction mixture 7.76. Under the optimal condition the predicted decolorization and mineralization rate of MYB were 95.97% and 80.33%, respectively. Regression analysis with R(2) values >0.99 showed goodness of fit of the experimental results with predicted values. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. OPTIMIZATION OF DESIGN FORMULA ...

    African Journals Online (AJOL)

    User

    Keywords: reinforced concrete slabs, flexure. 1. Introduction. An evaluation of the flexural resistance of reinforced concrete solid slabs with the optimum weight required for structural safety and economy, as stated in design codes [5], is presented. It is a well-known fact by many design enginee.

  3. Application of Doehlert experimental design in the optimization of experimental variables for the Pseudozyma sp. (CCMB 306) and Pseudozyma sp. (CCMB 300) cell lysis

    Directory of Open Access Journals (Sweden)

    Amanda Reges de Sena

    2012-12-01

    Full Text Available This study aimed to verify the influence of pH and temperature on the lysis of yeast using experimental design. In this study, the enzymatic extract containing β-1,3-glucanase and chitinase, obtained from the micro-organism Moniliophthora perniciosa, was used. The experiment showed that the best conditions for lysis of Pseudozyma sp. (CCMB 306) and Pseudozyma sp. (CCMB 300) by lytic enzyme were pH 4.9 at 37 ºC and pH 3.9 at 26.7 ºC, respectively. The lytic enzyme may be used for obtaining various biotechnology products from yeast.

  4. Optimization of a pharmaceutical freeze-dried product and its process using an experimental design approach and innovative process analyzers.

    Science.gov (United States)

    De Beer, T R M; Wiggenhorn, M; Hawe, A; Kasper, J C; Almeida, A; Quinten, T; Friess, W; Winter, G; Vervaet, C; Remon, J P

    2011-02-15

    The aim of the present study was to examine the possibilities/advantages of using recently introduced in-line spectroscopic process analyzers (Raman, NIR and plasma emission spectroscopy), within well-designed experiments, for the optimization of a pharmaceutical formulation and its freeze-drying process. The formulation under investigation was a mannitol (crystalline bulking agent)-sucrose (lyo- and cryoprotector) excipient system. The effects of two formulation variables (mannitol/sucrose ratio and amount of NaCl) and three process variables (freezing rate, annealing temperature and secondary drying temperature) upon several critical process and product responses (onset and duration of ice crystallization, onset and duration of mannitol crystallization, duration of primary drying, residual moisture content and amount of mannitol hemi-hydrate in end product) were examined using a design of experiments (DOE) methodology. A 2-level fractional factorial design (2(5-1)=16 experiments+3 center points=19 experiments) was employed. All experiments were monitored in-line using Raman, NIR and plasma emission spectroscopy, which supply continuous process and product information during freeze-drying. Off-line X-ray powder diffraction analysis and Karl Fischer titration were performed to determine the morphology and residual moisture content of the end product, respectively. In the first instance, the results showed that - besides the previously described findings in De Beer et al., Anal. Chem. 81 (2009) 7639-7649 - Raman and NIR spectroscopy are able to monitor the product behavior throughout the complete annealing step during freeze-drying. The DOE approach allowed predicting the optimum combination of process and formulation parameters leading to the desired responses. Applying a mannitol/sucrose ratio of 4, without adding NaCl and processing the formulation without an annealing step, using a freezing rate of 0.9°C/min and a secondary drying temperature of 40°C resulted in

  5. Development, optimization, and in vitro characterization of dasatinib-loaded PEG functionalized chitosan capped gold nanoparticles using Box-Behnken experimental design.

    Science.gov (United States)

    Adena, Sandeep Kumar Reddy; Upadhyay, Mansi; Vardhan, Harsh; Mishra, Brahmeshwar

    2018-03-01

    The purpose of this research study was to develop, optimize, and characterize dasatinib-loaded polyethylene glycol (PEG) stabilized chitosan capped gold nanoparticles (DSB-PEG-Ch-GNPs). Gold (III) chloride hydrate was reduced with chitosan and the resulting nanoparticles were coated with thiol-terminated PEG and loaded with dasatinib (DSB). A Plackett-Burman design (PBD) followed by a Box-Behnken experimental design (BBD) was employed to optimize the process parameters. Polynomial equations, contour, and 3D response surface plots were generated to relate the factors and responses. The optimized DSB-PEG-Ch-GNPs were characterized by FTIR, XRD, HR-SEM, EDX, TEM, SAED, AFM, DLS, and ZP. The results of the optimized DSB-PEG-Ch-GNPs showed particle size (PS) of 24.39 ± 1.82 nm, apparent drug content (ADC) of 72.06 ± 0.86%, and zeta potential (ZP) of -13.91 ± 1.21 mV. The observed responses and the predicted values of the optimized process were found to be close. The shape and surface morphology studies showed that the resulting DSB-PEG-Ch-GNPs were spherical and smooth. The stability and in vitro drug release studies confirmed that the optimized formulation was stable under different storage conditions and exhibited sustained release of the drug of up to 76% in 48 h, following the Korsmeyer-Peppas release kinetic model. A process for preparing gold nanoparticles using chitosan, anchoring PEG to the particle surface, and entrapping dasatinib in the chitosan-PEG surface corona was optimized.

  6. Optimization of the Extraction of the Volatile Fraction from Honey Samples by SPME-GC-MS, Experimental Design, and Multivariate Target Functions

    Directory of Open Access Journals (Sweden)

    Elisa Robotti

    2017-01-01

    Full Text Available Head space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometry detection (GC-MS) is the most widespread technique to study the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposition of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be taken into consideration at the same time, preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations about the choice of the best internal standards were also drawn. The whole optimized procedure was then applied to the analysis of a multiflower honey sample and more than 100 compounds were identified.

  7. OPTIMAL NETWORK TOPOLOGY DESIGN

    Science.gov (United States)

    Yuen, J. H.

    1994-01-01

    This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
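
    The generate-and-check strategy described above can be illustrated with a small brute-force Python sketch (the actual program, written in PASCAL, enumerates candidate subsets in increasing cost order far more efficiently); the station names and link costs below are hypothetical:

```python
import itertools

def connects_all(stations, links):
    """True if the chosen links connect every station (simple union-find)."""
    parent = {s: s for s in stations}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, a, b in links:
        parent[find(a)] = find(b)
    return len({find(s) for s in stations}) == 1

def cheapest_acceptable_topology(stations, candidate_links):
    """Try link subsets in increasing total cost; return the first acceptable one."""
    subsets = []
    for r in range(1, len(candidate_links) + 1):
        for combo in itertools.combinations(candidate_links, r):
            subsets.append((sum(c for c, _, _ in combo), combo))
    for cost, combo in sorted(subsets, key=lambda t: t[0]):
        if connects_all(stations, combo):
            return cost, combo
    return None

stations = ["S1", "S2", "S3"]
links = [(1.0, "S1", "S2"), (2.5, "S2", "S3"), (4.0, "S1", "S3")]  # (cost, end A, end B)
print(cheapest_acceptable_topology(stations, links))
```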

  8. Optimization of the marinating conditions of cassava fish (Pseudotolithus sp.) fillet for Lanhouin production through application of Doehlert experimental design.

    Science.gov (United States)

    Kindossi, Janvier Mêlégnonfan; Anihouvi, Victor Bienvenu; Vieira-Dalodé, Générose; Akissoé, Noël Houédougbé; Hounhouigan, Djidjoho Joseph

    2016-03-01

    Lanhouin is a traditional fermented salted fish made from the spontaneous and uncontrolled fermentation of whole salted cassava fish (Pseudotolithus senegalensis), mainly produced in the coastal regions of West Africa. The combined effects of NaCl, citric acid concentration, and marination time on the physicochemical and microbiological characteristics of the fish fillet used for Lanhouin production were studied using a Doehlert experimental design, with the objective of preserving its quality and safety. The marination time has significant effects on the total viable and lactic acid bacteria counts and the NaCl content of the marinated fish fillet, while the pH was significantly affected by citric acid concentration and marination duration, with a high regression coefficient R(2) of 0.83. The experiment showed that the best conditions for the marination process of the fish fillet were a salt ratio of 10 g/100 g, citric acid concentration of 2.5 g/100 g, and marination time of 6 h. These optimum marinating conditions give the best quality of marinated fish flesh, contributing to the safety of the final fermented product. This pretreatment is necessary in Lanhouin production processes to ensure product safety and quality.

  9. Formulation of cilostazol spherical agglomerates by crystallo-co-agglomeration technique and optimization using design of experimentation.

    Science.gov (United States)

    Deshkar, Sanjeevani Shekhar; Borde, Govind R; Kale, Rupali N; Waghmare, Balasaheb A; Thomas, Asha Biju

    2017-01-01

    Spherical agglomeration is one of the novel techniques for improving the flow and dissolution properties of drugs. Cilostazol is a biopharmaceutics classification system Class II drug with poor solubility, resulting in limited bioavailability. The present study aims at improving the solubility and dissolution of cilostazol by the crystallo-co-agglomeration technique. Cilostazol agglomerates were prepared using various polymers with varying concentrations of hydroxypropyl methylcellulose E50 (HPMC E50), polyvinyl pyrrolidone K30 (PVP K30), and polyethylene glycol 6000. The influence of polymer concentration on spherical agglomerate formation was studied by a 3^2 factorial design. Cilostazol agglomerates were evaluated for percent yield, mean particle size, drug content, aqueous solubility, and in vitro dissolution, and further characterized by Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), differential scanning calorimetry (DSC), and X-ray diffraction (XRD). The agglomeration process resulted in the optimized formulation F3, with a mean agglomerate size of 210.0 ± 0.56 μm, excellent flow properties, an approximately 15-fold increase in solubility over pure cilostazol, and complete drug release in 60 min. Process yield, agglomerate size, and drug release were affected by the amounts of PVP K30 and HPMC E50. The presence of drug microcrystals was confirmed by SEM, whereas the FTIR study indicated no chemical change. The increase in drug solubility was attributed to the change of the crystalline drug to an amorphous form, as evident from DSC and XRD. Crystallo-co-agglomeration can be adopted as an important approach for increasing the solubility and dissolution of poorly soluble drugs.

  10. Application of the statistical experimental design to optimize mine-impacted water (MIW) remediation using shrimp-shell.

    Science.gov (United States)

    Núñez-Gómez, Dámaris; Alves, Alcione Aparecida de Almeida; Lapolli, Flavio Rubens; Lobo-Recio, María A

    2017-01-01

    Mine-impacted water (MIW) is one of the most serious mining problems and has a high negative impact on water resources and aquatic life. The main characteristics of MIW are a low pH (between 2 and 4) and high concentrations of SO4(2-) and metal ions (Cd, Cu, Ni, Pb, Zn, Fe, Al, Cr, Mn, Mg, etc.), many of which are toxic to ecosystems and human life. Shrimp shell was selected as a MIW treatment agent because it is a low-cost metal-sorbent biopolymer with a high chitin content and contains calcium carbonate, an acid-neutralizing agent. To determine the best metal-removal conditions, a statistical study using statistical planning was carried out. Thus, the objective of this work was to identify the degree of influence and dependence of the shrimp-shell content for the removal of Fe, Al, Mn, Co, and Ni from MIW. In this study, a central composite rotational experimental design (CCRD) with a quadruplicate at the midpoint (2^2) was used to evaluate the joint influence of two formulation variables: agitation and the shrimp-shell content. The statistical results showed a significant influence (p < 0.05) of the agitation variable on Fe and Ni removal (linear and quadratic form, respectively) and of the shrimp-shell content variable on Mn (linear form), Al and Co (linear and quadratic form) removal. Analysis of variance (ANOVA) for Al, Co, and Ni removal showed that the model is valid at the 95% confidence interval and that no adjustment was needed within the evaluated ranges of agitation (0-251.5 rpm) and shrimp-shell content (1.2-12.8 g L-1). The model required adjustments to the 90% and 75% confidence intervals for Fe and Mn removal, respectively. In terms of efficiency in removing pollutants, it was possible to determine the best experimental values of the variables as 188 rpm and 9.36 g L-1 of shrimp-shells. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Optimal Design and Related Areas in Optimization and Statistics

    CERN Document Server

    Pronzato, Luc

    2009-01-01

    This edited volume, dedicated to Henry P. Wynn, reflects his broad range of research interests, focusing in particular on the applications of optimal design theory in optimization and statistics. It covers algorithms for constructing optimal experimental designs, general gradient-type algorithms for convex optimization, majorization and stochastic ordering, algebraic statistics, Bayesian networks and nonlinear regression. Written by leading specialists in the field, each chapter contains a survey of the existing literature along with substantial new material. This work will appeal to both the

  12. Experimental site and design

    Energy Technology Data Exchange (ETDEWEB)

    Guenette, C. C. [SINTEF Applied Chemistry, Trondheim (Norway)

    1999-08-01

    Design and site selection criteria for the Svalbard oil spill experiments are described. All three experimental sites have coarse and mixed sediment beaches of sand and pebble; within each site wave exposure is very similar; along-shore and across-shore sediment characteristics are also relatively homogeneous. Tidal range is in the order of 0.6 m at neaps, and 1.8 m at springs. All three sites are open to wave action and are ice-free during the experimental period of mid-July to mid-October. Study plots at each site were selected for different treatments from within the continuous stretch of oiled shoreline, with oiled buffer zones between plots and at either end of the oiled zone. Treatments included mixing (tilling), sediment relocation (surf washing) and bioremediation (nutrient enrichment). Measurements and observations were carried out during the summers of 1997 and 1998. The characteristics measured were: wave and wind conditions; beach topography and elevation; sediment grain size distribution; mineral fines size distribution and mineral composition; background hydrocarbons; concentration of oil within experimental plots and the rate of oil loss over time; depth of oil penetration and thickness of the oiled sediment layer; oil concentration and toxicity of near-shore benthic sediments; mineral composition of suspended particulate material captured in sub-tidal sediment traps; and oil-fines interaction in near-shore water samples. 1 fig.

  13. Experimental site and design

    Energy Technology Data Exchange (ETDEWEB)

    Guenette, C. C. [SINTEF Applied Chemistry, Trondheim (Norway)

    1999-07-01

    Design and site selection criteria for the Svalbard oil spill experiments are described. All three experimental sites have coarse and mixed sediment beaches of sand and pebble; within each site wave exposure is very similar; along-shore and across-shore sediment characteristics are also relatively homogeneous. Tidal range is in the order of 0.6 m at neaps, and 1.8 m at springs. All three sites are open to wave action and are ice-free during the experimental period of mid-July to mid-October. Study plots at each site were selected for different treatments from within the continuous stretch of oiled shoreline, with oiled buffer zones between plots and at either end of the oiled zone. Treatments included mixing (tilling), sediment relocation (surf washing) and bioremediation (nutrient enrichment). Measurements and observations were carried out during the summers of 1997 and 1998. The characteristics measured were: wave and wind conditions; beach topography and elevation; sediment grain size distribution; mineral fines size distribution and mineral composition; background hydrocarbons; concentration of oil within experimental plots and the rate of oil loss over time; depth of oil penetration and thickness of the oiled sediment layer; oil concentration and toxicity of near-shore benthic sediments; mineral composition of suspended particulate material captured in sub-tidal sediment traps; and oil-fines interaction in near-shore water samples. 1 fig.

  14. Removal of Mefenamic acid from aqueous solutions by oxidative process: Optimization through experimental design and HPLC/UV analysis.

    Science.gov (United States)

    Colombo, Renata; Ferreira, Tanare C R; Ferreira, Renato A; Lanza, Marcos R V

    2016-02-01

    Mefenamic acid (MEF) is a non-steroidal anti-inflammatory drug indicated for relief of mild to moderate pain and for the treatment of primary dysmenorrhea. The presence of MEF in raw and sewage waters has been detected worldwide at concentrations exceeding the predicted no-effect concentration. In this study, using experimental designs, different oxidative processes (H2O2, H2O2/UV, Fenton and photo-Fenton) were simultaneously evaluated for MEF degradation efficiency. The influence and interaction effects of the most important variables in the oxidative process (concentration and addition mode of hydrogen peroxide, concentration and type of catalyst, pH, reaction period and presence/absence of light) were investigated. The parameters were determined based on the maximum efficiency, to save time and minimize the consumption of reagents. According to the results, the photo-Fenton process is the best procedure to remove the drug from water. A reaction mixture containing 1.005 mmol L(-1) of ferrioxalate and 17.5 mmol L(-1) of hydrogen peroxide added at the start of the reaction, a pH of 6.1 and 60 min of degradation gave the most efficient degradation, promoting 95% MEF removal. The development and validation of a rapid and efficient qualitative and quantitative HPLC/UV methodology for detecting this pollutant in aqueous solution is also reported. The method can be applied in the quality control of water that is generated and/or treated in municipal or industrial wastewater treatment plants. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Experimental Engineering: Articulating and Valuing Design Experimentation

    DEFF Research Database (Denmark)

    Vallgårda, Anna; Grönvall, Erik; Fritsch, Jonas

    2017-01-01

    In this paper we propose Experimental Engineering as a way to articulate open-ended technological experiments as a legitimate design research practice. Experimental Engineering introduces a move away from an outcome or result driven design process towards an interest in existing technologies and...

  16. Experimental design and optimization of leaching process for recovery of valuable chemical elements (U, La, V, Mo, Yb and Th) from low-grade uranium ore.

    Science.gov (United States)

    Zakrzewska-Koltuniewicz, Grażyna; Herdzik-Koniecko, Irena; Cojocaru, Corneliu; Chajduk, Ewelina

    2014-06-30

    The paper deals with the experimental design and optimization of the leaching process of uranium and associated metals from low-grade Polish ores. The chemical elements of interest for extraction from the ore were U, La, V, Mo, Yb and Th. Sulphuric acid was used as the leaching reagent. Based on the design of experiments, second-order regression models have been constructed to approximate the leaching efficiency of the elements. Graphical illustrations using 3-D surface plots have been employed in order to identify the main, quadratic and interaction effects of the factors. A multi-objective optimization method based on the desirability approach has been applied in this study. The optimum conditions have been determined as P=5 bar, T=120 °C and t=90 min. Under these optimal conditions, the overall extraction performance is 81.43% (for U), 64.24% (for La), 98.38% (for V), 43.69% (for Yb), 76.89% (for Mo) and 97.00% (for Th). Copyright © 2014 Elsevier B.V. All rights reserved.
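
    The desirability-based compromise mentioned above can be sketched as follows (numpy assumed). The recovery figures reused here are the optimum values quoted in the abstract; the (lower bound, target) specifications are illustrative, not those of the paper:

```python
import numpy as np

def d_maximize(y, low, target, s=1.0):
    """Derringer-Suich 'larger is better' desirability, mapped onto [0, 1]."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** s

def overall_desirability(responses, specs):
    """Geometric mean of the individual desirabilities (zero if any response fails)."""
    d = [d_maximize(y, *spec) for y, spec in zip(responses, specs)]
    return float(np.prod(d) ** (1.0 / len(d)))

# Extraction efficiencies (%) for U, La, V, Yb, Mo, Th at one candidate setting
predicted = [81.43, 64.24, 98.38, 43.69, 76.89, 97.00]
specs = [(40.0, 100.0)] * len(predicted)        # illustrative (lower bound, target) pairs
print(overall_desirability(predicted, specs))   # compare settings (P, T, t) and keep the best
```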

  17. Optimization of lipid profile and hardness of low-fat mortadella following a sequential strategy of experimental design.

    Science.gov (United States)

    Saldaña, Erick; Siche, Raúl; da Silva Pinto, Jair Sebastião; de Almeida, Marcio Aurélio; Selani, Miriam Mabel; Rios-Mera, Juan; Contreras-Castillo, Carmen J

    2018-02-01

    This study aims to optimize simultaneously the lipid profile and instrumental hardness of low-fat mortadella. For lipid mixture optimization, the overlapping of surface boundaries was used to select the quantities of canola, olive, and fish oils, in order to maximize PUFAs, specifically the long-chain n-3 fatty acids (eicosapentaenoic-EPA, docosahexaenoic acids-DHA) using the minimum content of fish oil. Increased quantities of canola oil were associated with higher PUFA/SFA ratios. The presence of fish oil, even in small amounts, was effective in improving the nutritional quality of the mixture, showing lower n-6/n-3 ratios and significant levels of EPA and DHA. Thus, the optimal lipid mixture comprised of 20, 30 and 50% fish, olive and canola oils, respectively, which present PUFA/SFA (2.28) and n-6/n-3 (2.30) ratios within the recommendations of a healthy diet. Once the lipid mixture was optimized, components of the pre-emulsion used as fat replacer in the mortadella, such as lipid mixture (LM), sodium alginate (SA), and milk protein concentrate (PC), were studied to optimize hardness and springiness to target ranges of 13-16 N and 0.86-0.87, respectively. Results showed that springiness was not significantly affected by these variables. However, as the concentration of the three components increased, hardness decreased. Through the desirability function, the optimal proportions were 30% LM, 0.5% SA, and 0.5% PC. This study showed that the pre-emulsion decreases hardness of mortadella. In addition, response surface methodology was efficient to model lipid mixture and hardness, resulting in a product with improved texture and lipid quality.

  18. Conceptual optimal design of jackets

    DEFF Research Database (Denmark)

    Sandal, Kasper; Verbart, Alexander; Stolpe, Mathias

    Structural optimization can explore a large design space (400 jackets) in a short time (2 hours), and thus lead to better conceptual jacket designs.

  19. Numerical multi-criteria optimization methods for alloy design. Development of new high strength nickel-based superalloys and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Rettig, Ralf; Mueller, Alexander; Ritter, Nils C.; Singer, Robert F. [Institute of Science and Technology of Metals, Department of Materials Science and Engineering, University of Erlangen (Germany)

    2016-07-01

    A new approach for the design of optimally balanced metallic alloys is presented. It is based on a mathematical multi-criteria optimization method which uses different property models to predict the alloy behavior as a function of composition. These property models are mostly based on computational thermodynamics (CALPHAD method). The full composition range of the alloying elements can be considered using these models. In alloy design, several contradicting goals usually have to be fulfilled. This is handled by the calculation of so-called Pareto fronts. The aim of our approach is to guide experimental research towards new alloy compositions that have a high probability of having very good properties. Consequently, the number of required test alloys can be massively reduced. The approach is demonstrated for the computer-aided design of a new Re-free superalloy with nearly identical creep strength as that of Re-containing superalloys. Our starting point for the design was to maintain the good properties of the gamma prime phase in well-known alloys like CMSX-4 and to maximize the solid solution strengthening by W and Mo. The presented experimental measurements prove the excellent properties.

  20. Optimization of Forced Degradation Using Experimental Design and Development of a Stability-Indicating Liquid Chromatographic Assay Method for Rebamipide in Bulk and Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    Sandeep SONAWANE

    2016-09-01

    Full Text Available A novel stability-indicating RP-HPLC assay method was developed and validated for quantitative determination of rebamipide in bulk and tablet dosage form. Rebamipide (drug) and drug product solutions were exposed to acid and alkali hydrolysis, thermal stress, oxidation by hydrogen peroxide and photodegradation. Experimental design was used during forced degradation to determine the significant factors responsible for degradation and to obtain optimal degradation conditions. In addition, acid and alkali hydrolysis was performed using a microwave oven. The chromatographic method employed a HiQ sil C-18HS (250 × 4.6 mm; 5 μm) column with a mobile phase consisting of 0.02 M potassium phosphate (pH adjusted to 6.8) and methanol (40:60, v/v), and detection was performed at 230 nm. The procedure was validated for specificity, linearity, accuracy, precision and robustness. No interference from excipients or degradation products was observed in the determination of the active pharmaceutical ingredient. The method showed good accuracy and precision (intra- and inter-day) and the response was linear in the range from 0.5 to 5 μg mL−1. The method was found to be simple and fast, with less trial-and-error experimentation, by making use of experimental design. It also proved that microwave energy can be used to expedite the hydrolysis of rebamipide.

  1. Optimization of phase feeding of starter, grower, and finisher diets for male broilers by mixture experimental design: forty-eight-day production period.

    Science.gov (United States)

    Roush, W B; Boykin, D; Branton, S L

    2004-08-01

    A mixture experiment, a variant of response surface methodology, was designed to determine the proportion of time to feed broiler starter (23% protein), grower (20% protein), and finisher (18% protein) diets to optimize production and processing variables based on a total production time of 48 d. Mixture designs are useful for proportion problems where the components of the experiment (i.e., length of time the diets were fed) add up to a unity (48 d). The experiment was conducted with day-old male Ross x Ross broiler chicks. The birds were placed 50 birds per pen in each of 60 pens. The experimental design was a 10-point augmented simplex-centroid (ASC) design with 6 replicates of each point. Each design point represented the portion(s) of the 48 d that each of the diets was fed. Formulation of the diets was based on NRC standards. At 49 d, each pen of birds was evaluated for production data including BW, feed conversion, and cost of feed consumed. Then, 6 birds were randomly selected from each pen for processing data. Processing variables included live weight, hot carcass weight, dressing percentage, fat pad percentage, and breast yield (pectoralis major and pectoralis minor weights). Production and processing data were fit to simplex regression models. Model terms determined not to be significant (P > 0.05) were removed. The models were found to be statistically adequate for analysis of the response surfaces. A compromise solution was calculated based on optimal constraints designated for the production and processing data. The results indicated that broilers fed a starter and finisher diet for 30 and 18 d, respectively, would meet the production and processing constraints. Trace plots showed that the production and processing variables were not very sensitive to the grower diet.
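
    The 10-point augmented simplex-centroid layout mentioned above can be written out explicitly; the sketch below (numpy assumed) also converts the mixture proportions into days of the 48-day period, purely as an illustration of how the design points map to feeding schedules:

```python
import numpy as np

def augmented_simplex_centroid_3():
    """10-point augmented simplex-centroid design for a 3-component mixture:
    vertices, binary blends, overall centroid, and three interior axial blends."""
    pts = [
        (1, 0, 0), (0, 1, 0), (0, 0, 1),                      # pure components
        (1/2, 1/2, 0), (1/2, 0, 1/2), (0, 1/2, 1/2),          # binary blends
        (1/3, 1/3, 1/3),                                      # centroid
        (2/3, 1/6, 1/6), (1/6, 2/3, 1/6), (1/6, 1/6, 2/3),    # axial check blends
    ]
    return np.array(pts)

design = augmented_simplex_centroid_3()     # columns: starter, grower, finisher proportions
days = np.round(design * 48).astype(int)    # days of the 48-day period on each diet
print(days)
```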

  2. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO RAY MIXTURE.

    Science.gov (United States)

    Risk assessors are becoming increasingly aware of the importance of assessing interactions between chemicals in a mixture. Most traditional designs for evaluating interactions are prohibitive when the number of chemicals in the mixture is large. However, evaluation of interacti...

  3. Experimental design based response surface methodology optimization of ultrasonic assisted adsorption of safranin O by tin sulfide nanoparticle loaded on activated carbon

    Science.gov (United States)

    Roosta, M.; Ghaedi, M.; Daneshfar, A.; Sahraei, R.

    2014-03-01

    In this research, the adsorption rate of safranin O (SO) onto tin sulfide nanoparticles loaded on activated carbon (SnS-NP-AC) was accelerated by ultrasound. SnS-NP-AC was characterized by different techniques such as SEM, XRD and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve the adsorption efficiency. The influence of parameters such as sonication time, adsorbent dosage, pH and initial SO concentration was examined and evaluated by central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions (4 min of sonication time, 0.024 g of adsorbent, pH 7 and 18 mg L−1 SO) made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g−1). A good agreement between experimental and predicted data was observed. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models showed that the Langmuir model describes the actual adsorption behavior well. Kinetic evaluation of the experimental data showed that the adsorption process followed pseudo-second-order and intraparticle diffusion models well.
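
    A minimal sketch of the two design tools named above: a rotatable central composite design in coded units for four factors, and a Derringer-type desirability combining removal percentage and adsorption capacity. The desirability bounds and the number of centre runs are assumptions for illustration, not values from the study.

    ```python
    import itertools
    import numpy as np

    k = 4                                                            # sonication time, dose, pH, SO conc. (coded)
    cube = np.array(list(itertools.product([-1, 1], repeat=k)))      # 2^4 factorial points
    alpha = (2 ** k) ** 0.25                                         # rotatable axial distance
    axial = np.vstack([sign * alpha * np.eye(k)[i] for i in range(k) for sign in (-1, 1)])
    center = np.zeros((6, k))                                        # assumed replicated centre runs
    ccd = np.vstack([cube, axial, center])
    print("CCD runs:", len(ccd))                                     # 16 + 8 + 6 = 30

    def d_max(y, lo, hi):
        """Larger-is-better Derringer desirability, clipped to [0, 1]."""
        return float(np.clip((y - lo) / (hi - lo), 0, 1))

    removal, capacity = 98.0, 50.25                                  # responses reported at the optimum
    D = (d_max(removal, 80, 100) * d_max(capacity, 20, 60)) ** 0.5   # geometric mean of desirabilities
    print("overall desirability:", round(D, 3))
    ```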

  4. D-Optimal mixture experimental design for stealth biodegradable crosslinked docetaxel-loaded poly-ε-caprolactone nanoparticles manufactured by dispersion polymerization.

    Science.gov (United States)

    Ogunwuyi, O; Adesina, S; Akala, E O

    2015-03-01

    We report here our efforts on the development of stealth biodegradable crosslinked poly-ε-caprolactone nanoparticles by free radical dispersion polymerization suitable for the delivery of bioactive agents. The uniqueness of the dispersion polymerization technique is that it is surfactant free, thereby obviating the problems known to be associated with the use of surfactants in the fabrication of nanoparticles for biomedical applications. Aided by a statistical software for experimental design and analysis, we used D-optimal mixture statistical experimental design to generate thirty batches of nanoparticles prepared by varying the proportion of the components (poly-ε-caprolactone macromonomer, crosslinker, initiators and stabilizer) in acetone/water system. Morphology of the nanoparticles was examined using scanning electron microscopy (SEM). Particle size and zeta potential were measured by dynamic light scattering (DLS). Scheffe polynomial models were generated to predict particle size (nm) and particle surface zeta potential (mV) as functions of the proportion of the components. Solutions were returned from simultaneous optimization of the response variables for component combinations to (a) minimize nanoparticle size (small nanoparticles are internalized into disease organs easily, avoid reticuloendothelial clearance and lung filtration) and (b) maximization of the negative zeta potential values, as it is known that, following injection into the blood stream, nanoparticles with a positive zeta potential pose a threat of causing transient embolism and rapid clearance compared to negatively charged particles. In vitro availability isotherms show that the nanoparticles sustained the release of docetaxel for 72 to 120 hours depending on the formulation. The data show that nanotechnology platforms for controlled delivery of bioactive agents can be developed based on the nanoparticles.

  5. Pathway Design, Engineering, and Optimization.

    Science.gov (United States)

    Garcia-Ruiz, Eva; HamediRad, Mohammad; Zhao, Huimin

    The microbial metabolic versatility found in nature has inspired scientists to create microorganisms capable of producing value-added compounds. Many endeavors have been made to transfer and/or combine pathways, and existing or engineered enzymes with new functions, into tractable microorganisms to generate new metabolic routes for drug, biofuel, and specialty chemical production. However, the success of these pathways can be impeded by complications ranging from inherent failure of the pathway to cell perturbations. To overcome these shortcomings, a wide variety of strategies have been developed. This chapter reviews the computational algorithms and experimental tools used to design efficient metabolic routes, and to construct and optimize biochemical pathways to produce chemicals of high interest.

  6. Optimal Hospital Layout Design

    DEFF Research Database (Denmark)

    Holst, Malene Kirstine

    foundation. The basis of the present study lies in solving the architectural design problem in order to respond to functionalities and performances. The emphasis is the practical applicability for architects, engineers and hospital planners for assuring usability and a holistic approach of functionalities...... a correlation matrix. The correlation factor defines the framework for conceptual design, whereby the design considers functionalities and their requirements and preferences. It facilitates implementation of evidence-based design as it is prepared for ongoing update and it is based on actual data. Hence......, this contribution is a model for hospital design, where design derives as a response to the defined variables, requirements and preferences....

  7. Optimization of ultrasound-assisted dispersive solid-phase microextraction based on nanoparticles followed by spectrophotometry for the simultaneous determination of dyes using experimental design.

    Science.gov (United States)

    Asfaram, Arash; Ghaedi, Mehrorang; Goudarzi, Alireza

    2016-09-01

    A simple, low-cost and ultrasensitive method is described for the simultaneous preconcentration and determination of trace amounts of auramine-O and malachite green in aqueous media, following accumulation on novel, low-toxicity nanomaterials by an ultrasound-assisted dispersive solid phase micro-extraction (UA-DSPME) procedure combined with spectrophotometric detection. The Mn-doped ZnS nanoparticles loaded on activated carbon were characterized by field emission scanning electron microscopy (FE-SEM), particle size distribution, X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) analyses, and subsequently were used as a green and efficient material for dye accumulation. The contributions of experimental variables such as ultrasonic time, ultrasonic temperature, adsorbent mass, vortex time, ionic strength, pH and elution volume were optimized through experimental design, and the preconcentrated analytes were efficiently eluted with acetone. A preliminary Plackett-Burman design was applied to select the most significant factors; the main and interaction effects of the significant variables (ultrasonic time, adsorbent mass, elution volume and pH) were then quantified by central composite design combined with response surface analysis, and the optimum experimental conditions were set at pH 8.0, 1.2 mg of adsorbent, 150 μL of eluent and 3.7 min of sonication. Under optimized conditions, the average recoveries (five replicates) for the two dyes (spiked at 500.0 ng mL−1) were in the range of 92.80-97.70% with acceptable RSDs of less than 4.0%, over a linear range of 3.0-5000.0 ng mL−1 for AO and MG in water samples, with regression coefficients (R2) of 0.9975 and 0.9977, respectively. Acceptable limits of detection of 0.91 and 0.61 ng mL−1 for AO and MG, respectively, together with high accuracy and repeatability, are unique advantages of the present method to improve the figures of merit for their accurate determination at trace level in complicated matrices.
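
    A minimal sketch of the Plackett-Burman screening step mentioned above: a 12-run two-level matrix built by cyclically shifting the classic generator row, with main effects computed for seven factors. The factor ordering and the recovery values below are placeholders, not the study's data.

    ```python
    import numpy as np

    gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])      # classic PB12 generator row
    design = np.vstack([np.roll(gen, i) for i in range(11)] + [-np.ones(11, dtype=int)])

    X = design[:, :7]                                            # 7 real factors, remaining columns are dummies
    rng = np.random.default_rng(1)
    y = rng.normal(90, 2, size=12)                               # placeholder recoveries (%)
    effects = X.T @ y / 6                                        # contrast divided by N/2
    names = ["sonication time", "temperature", "adsorbent mass",
             "vortex time", "ionic strength", "pH", "elution volume"]
    print(dict(zip(names, effects.round(2))))
    ```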

  8. Experimental validation of a topology optimized acoustic cavity

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Sigmund, Ole; Fernandez Grande, Efren

    2015-01-01

    This paper presents the experimental validation of an acoustic cavity designed using topology optimization with the goal of minimizing the sound pressure locally for monochromatic excitation. The presented results show good agreement between simulations and measurements. The effect of damping...

  9. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design.

    Science.gov (United States)

    Yang, Y U; Bai, Wenkun; Chen, Yini; Lin, Yanduan; Hu, Bing

    2015-11-01

    The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify an optimal combination of the five factors that maximize the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, the specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: Sound intensity; frequency; duty cycle; microbubble volume; and irradiation time. The optimum conditions were found to be as follows: Sound intensity, 1.00 W/cm²; frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately subsequent to treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18(3^6) orthogonal array design was successfully applied for determining the optimal parameter combination of API following treatment. Under the optimum orthogonal design condition, a minimum API of 19.97±2.66 subsequent to low-frequency and low-intensity mediated blood perfusion blockage was obtained.
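
    A minimal sketch of the level-mean (range) analysis typically used to pick optimum factor levels in orthogonal-array experiments such as the one above. The balanced layout and the API values are stand-ins generated for illustration, not the actual L18 array or the measured data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Balanced stand-in layout: each of 5 factors takes each of 3 levels in 6 of the 18 runs.
    levels = np.column_stack([rng.permutation(np.repeat([1, 2, 3], 6)) for _ in range(5)])
    api = rng.normal(30, 5, size=18)                # placeholder average peak intensity (API) values

    factors = ["sound intensity", "frequency", "duty cycle", "microbubble volume", "irradiation time"]
    for j, name in enumerate(factors):
        means = {lv: api[levels[:, j] == lv].mean() for lv in (1, 2, 3)}
        best = min(means, key=means.get)            # lower API = stronger perfusion blockage
        delta = max(means.values()) - min(means.values())
        print(f"{name}: best level {best}, range {delta:.2f}")
    ```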

  10. Spray-drying nanocapsules in presence of colloidal silica as drying auxiliary agent: formulation and process variables optimization using experimental designs.

    Science.gov (United States)

    Tewa-Tagne, Patrice; Degobert, Ghania; Briançon, Stéphanie; Bordes, Claire; Gauvrit, Jean-Yves; Lanteri, Pierre; Fessi, Hatem

    2007-04-01

    Spray-drying process was used for the development of dried polymeric nanocapsules. The purpose of this research was to investigate the effects of formulation and process variables on the resulting powder characteristics in order to optimize them. Experimental designs were used in order to estimate the influence of formulation parameters (nanocapsules and silica concentrations) and process variables (inlet temperature, spray-flow air, feed flow rate and drying air flow rate) on spray-dried nanocapsules when using silica as drying auxiliary agent. The interactions among the formulation parameters and process variables were also studied. Responses analyzed for computing these effects and interactions were outlet temperature, moisture content, operation yield, particles size, and particulate density. Additional qualitative responses (particles morphology, powder behavior) were also considered. Nanocapsules and silica concentrations were the main factors influencing the yield, particulate density and particle size. In addition, they were concerned for the only significant interactions occurring among two different variables. None of the studied variables had major effect on the moisture content while the interaction between nanocapsules and silica in the feed was of first interest and determinant for both the qualitative and quantitative responses. The particles morphology depended on the feed formulation but was unaffected by the process conditions. This study demonstrated that drying nanocapsules using silica as auxiliary agent by spray drying process enables the obtaining of dried micronic particle size. The optimization of the process and the formulation variables resulted in a considerable improvement of product yield while minimizing the moisture content.

  11. Parametric Optimization of Hospital Design

    DEFF Research Database (Denmark)

    Holst, Malene Kirstine; Kirkegaard, Poul Henning; Christoffersen, L.D.

    2013-01-01

    This paper presents a parametric performance-based design model for optimizing hospital design. The design model operates with geometric input parameters defining the functional requirements of the hospital and input parameters in terms of performance objectives defining the design requirements...... and preferences of the hospital with respect to performances. The design model takes point of departure in the hospital functionalities as a set of defined parameters and rules describing the design requirements and preferences....

  12. Experimental determination of the heat transfer coefficient for the optimal design of the cooling system of a PEM fuel cell placed inside the fuselage of an UAV

    International Nuclear Information System (INIS)

    Barroso, Jorge; Renau, Jordi; Lozano, Antonio; Miralles, José; Martín, Jesús; Sánchez, Fernando; Barreras, Félix

    2015-01-01

    The objective of this research is to calculate the heat transfer coefficients needed for the further design of the optimal cooling system of a high-temperature polymer electrolyte membrane fuel cell (HT-PEMFC) stack that will be incorporated to the powerplant of a light unmanned aerial vehicle (UAV) capable of reaching an altitude of 10,000 m. Experiments are performed in two rectangular tunnels, for three different form factors, in experimental conditions as close as possible to the actual ones in the HT-PEMFC stack. For the calculations, all the relevant thermal processes are considered (i.e., convection and radiation). Different parameters are measured, such as air mass flow rate, inlet and outlet air temperatures, and wall temperatures for bipolar plates and endplates. Different numerical models are fitted revealing the influence of the diverse relevant non-dimensional groups on the Nusselt number. Heat transfer coefficients calculated for the air cooling flow vary from 8 to 44 W m−2 K−1. Results obtained at sea level are extrapolated for a flight ceiling of 10 km. The flow section is optimized as a function of the power required to cool the stack down to the temperature recommended by the membrane-electrode assembly (MEA) manufacturer using a numerical code specifically developed for this purpose. - Highlights: • Heat transfer coefficients to refrigerate a HT-PEMFC stack are calculated. • Experiments are performed in 2 wind tunnels, for 3 form factors and real conditions. • The calculated heat transfer coefficient varies from 8 to 44 W m−2 K−1. • Results at sea level are suitably extrapolated for a target altitude of 10 km. • Flow area is optimized as a function of the power required to cool the stack down.

  13. Optimized rectenna design

    NARCIS (Netherlands)

    Visser, H.J.; Keyrouz, S.; Smolders, A.B.

    2015-01-01

    Design steps are outlined for maximizing the RF-to-dc power conversion efficiency (PCE) of a rectenna. It turns out that at a frequency of 868 MHz, a high-ohmic loaded rectifier will lead to a highly sensitive and power conversion efficient rectenna. It is demonstrated that a rectenna thus designed,

  14. Experimental design and process optimization

    CERN Document Server

    Rodrigues, Maria Isabel; Dos Santos, Elian Luiz

    2014-01-01

    Initial Considerations; Topics of Elementary Statistics; Introductory Notions; General Ideas; Variables; Populations and Samples; Importance of the Form of the Population; First Ideas of Inference on a Normal Population; Parameters and Estimates; Notions on Testing Hypotheses; Inference of the Mean of a Normal Population; Inference of the Variance of a Normal Population; Inference of the Means of Two Normal Populations; Independent Samples; Paired Samples; ...

  15. Optimal Network-Topology Design

    Science.gov (United States)

    Li, Victor O. K.; Yuen, Joseph H.; Hou, Ting-Chao; Lam, Yuen Fung

    1987-01-01

    Candidate network designs tested for acceptability and cost. Optimal Network Topology Design computer program developed as part of study on topology design and analysis of performance of Space Station Information System (SSIS) network. Uses efficient algorithm to generate candidate network designs consisting of subsets of set of all network components, in increasing order of total costs and checks each design to see whether it forms acceptable network. Technique gives true cost-optimal network and particularly useful when network has many constraints and not too many components. Program written in PASCAL.

  16. Taguchi Experimental Design for Optimization of Recombinant Human Growth Hormone Production in CHO Cell Lines and Comparing its Biological Activity with Prokaryotic Growth Hormone.

    Science.gov (United States)

    Aghili, Zahra Sadat; Zarkesh-Esfahani, Sayyed Hamid

    2018-02-01

    Growth hormone deficiency results in growth retardation in children and the GH deficiency syndrome in adults, and patients need to receive recombinant GH in order to rectify the GH deficiency symptoms. Mammalian cells have become the favorite system for production of recombinant proteins for clinical application, compared to prokaryotic systems, because of their capability for appropriate protein folding, assembly, post-translational modification and proper signal. However, the production level in mammalian cells is generally low compared to prokaryotic hosts. Taguchi established orthogonal arrays to describe a large number of experimental situations, mainly to reduce experimental errors and to enhance the efficiency and reproducibility of laboratory experiments. In the present study, rhGH was produced in CHO cells and production of rhGH was assessed using dot blotting, western blotting and ELISA. For optimization of rhGH production in CHO cells using the Taguchi method, an M16 orthogonal experimental design was used to investigate four different culture components. The biological activity of rhGH was assessed using the LHRE-TK-luciferase reporter gene system in HEK-293 cells and compared to the biological activity of prokaryotic rhGH. Maximal productivity of rhGH was reached under the conditions of 1% DMSO, 1% glycerol, 25 µM ZnSO4 and 0 mM NaBu. Our findings indicate that control of culture conditions such as the addition of chemical components helps to develop an efficient large-scale and industrial process for the production of rhGH in CHO cells. Results of the bioassay indicated that rhGH produced by CHO cells is able to induce GH-mediated intracellular cell signaling and showed higher bioactivity than prokaryotic GH at the same concentrations. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Divertor design through shape optimization

    International Nuclear Information System (INIS)

    Dekeyser, W.; Baelmans, M.; Reiter, D.

    2012-01-01

    Due to the conflicting requirements, complex physical processes and large number of design variables, divertor design for next step fusion reactors is a challenging problem, often relying on large numbers of computationally expensive numerical simulations. In this paper, we attempt to partially automate the design process by solving an appropriate shape optimization problem. Design requirements are incorporated in a cost functional which measures the performance of a certain design. By means of changes in the divertor shape, which in turn lead to changes in the plasma state, this cost functional can be minimized. Using advanced adjoint methods, optimal solutions are computed very efficiently. The approach is illustrated by designing divertor targets for optimal power load spreading, using a simplified edge plasma model (copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  18. Optimal covariate designs theory and applications

    CERN Document Server

    Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar

    2015-01-01

    This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...

  19. Optimal Design of Porous Materials

    DEFF Research Database (Denmark)

    Andreassen, Erik

    The focus of this thesis is topology optimization of material microstructures. That is, creating new materials, with attractive properties, by combining classic materials in periodic patterns. First, large-scale topology optimization is used to design complicated three-dimensional materials......, throughout the thesis extra attention is given to obtain structures that can be manufactured. That is also the case in the final part, where a simple multiscale method for the optimization of structural damping is presented. The method can be used to obtain an optimized component with structural details...

  20. WE-AB-207A-09: Optimization of the Design of a Moving Blocker for Cone-Beam CT Scatter Correction: Experimental Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X; Ouyang, L; Jia, X; Zhang, Y; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Yan, H [Cyber Medical Corporation, Xi’an (China)

    2016-06-15

    Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different geometry designs and moving speeds of the blocker affect its performance in terms of image reconstruction accuracy. The goal of this work is to optimize the geometric design and moving speed of the moving blocker system through experimental evaluations. Methods: An Elekta Synergy XVI system and an anthropomorphic pelvis phantom CIRS 801-P were used for our experiment. A blocker consisting of lead strips was inserted between the x-ray source and the phantom, moving back and forth along the rotation axis to measure the scatter signal. According to our Monte Carlo simulation results, three blockers were used, which have the same lead strip width (3.2 mm) and different gaps between neighboring lead strips (3.2, 6.4 and 9.6 mm). For each blocker, three moving speeds were evaluated: 10, 20 and 30 pixels per projection (on the detector plane). The scatter signal in the unblocked region was estimated by cubic B-spline based interpolation from the blocked region. The CBCT image was reconstructed by a total variation (TV) based algebraic iterative reconstruction (ART) algorithm from the partially blocked projection data. Reconstruction accuracy in each condition was quantified as the CT number error of regions of interest (ROIs) by comparison with a CBCT image reconstructed from analytically simulated unblocked and scatter-free projection data. Results: The highest reconstruction accuracy is achieved when the blocker strip width is 3.2 mm, the gap between neighboring lead strips is 9.6 mm and the moving speed is 20 pixels per projection. The RMSE of the CT number of the ROIs can be reduced from 436 to 27. Conclusions: Image reconstruction accuracy is greatly affected by the geometry design of the blocker. The moving speed does not have a very strong effect on the reconstruction result if it is over 20 pixels per projection.
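
    A minimal 1-D sketch of the interpolation step described above: scatter sampled behind the lead strips is fitted with a cubic B-spline and evaluated across the unblocked detector columns. The detector size, strip spacing and scatter values are placeholders, not the experimental data or the authors' implementation.

    ```python
    import numpy as np
    from scipy.interpolate import make_interp_spline

    u = np.arange(400)                                        # detector column index (assumed size)
    strip_centres = np.arange(10, 400, 40)                    # columns behind the lead strips (assumed spacing)
    scatter_meas = 100 + 30 * np.sin(strip_centres / 120)     # placeholder measured scatter values

    spline = make_interp_spline(strip_centres, scatter_meas, k=3)             # cubic B-spline fit
    scatter_est = spline(np.clip(u, strip_centres[0], strip_centres[-1]))     # estimate over all columns
    print(scatter_est[:5].round(1))
    ```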

  1. Optimization of ejector design and operation

    Directory of Open Access Journals (Sweden)

    Kuzmenko Konstantin

    2016-01-01

    Full Text Available The investigation aims at optimization of gas ejector operation. The goal consists in improving the inflator design so as to enable inflation of 50 liters of gas within ~30 milliseconds. For that purpose, an experimental facility was developed and fabricated, together with a measurement system to study pressure patterns in the inflator path.

  2. Optimization of Experimental Conditions of the Pulsed Current GTAW Parameters for Mechanical Properties of SDSS UNS S32760 Welds Based on the Taguchi Design Method

    Science.gov (United States)

    Yousefieh, M.; Shamanian, M.; Saatchi, A.

    2012-09-01

    The Taguchi design method with an L9 orthogonal array was implemented to optimize the pulsed current gas tungsten arc welding parameters for the hardness and the toughness of super duplex stainless steel (SDSS, UNS S32760) welds. In this regard, the hardness and the toughness were considered as performance characteristics. Pulse current, background current, % on time, and pulse frequency were chosen as the main parameters. Each parameter was varied at three different levels. As a result of the pooled analysis of variance, the pulse current was found to be the most significant factor for both the hardness and the toughness of SDSS welds, with percentage contributions of 71.81% for hardness and 78.18% for toughness. The % on time (21.99%) and the background current (17.81%) had the next most significant effects on the hardness and the toughness, respectively. The optimum conditions within the selected parameter values for hardness were found to be the first level of pulse current (100 A), third level of background current (70 A), first level of % on time (40%), and first level of pulse frequency (1 Hz), while for toughness they were the second level of pulse current (120 A), second level of background current (60 A), second level of % on time (60%), and third level of pulse frequency (5 Hz). The Taguchi method was found to be a promising tool for obtaining the optimum conditions for such studies. Finally, in order to verify the experimental results, confirmation tests were carried out at the optimum working conditions. Under these conditions, there was good agreement between the predicted and the experimental results for both the hardness and the toughness.
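
    A minimal sketch of the percentage-contribution calculation behind the pooled ANOVA reported above, using the standard L9(3^4) array and placeholder hardness values rather than the reported measurements.

    ```python
    import numpy as np

    # Standard L9(3^4) orthogonal array: 4 factors at 3 levels in 9 runs.
    L9 = np.array([
        [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
        [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
        [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
    ])
    hardness = np.array([270, 265, 268, 262, 258, 260, 255, 250, 252.0])   # placeholder data

    total_ss = ((hardness - hardness.mean()) ** 2).sum()
    factors = ["pulse current", "background current", "% on time", "pulse frequency"]
    for j, name in enumerate(factors):
        level_means = np.array([hardness[L9[:, j] == lv].mean() for lv in (1, 2, 3)])
        ss = 3 * ((level_means - hardness.mean()) ** 2).sum()   # 3 runs per level
        print(f"{name}: contribution {100 * ss / total_ss:.1f}%")
    ```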

  3. Experimental design of a waste glass study

    International Nuclear Information System (INIS)

    Piepel, G.F.; Redgate, P.E.; Hrma, P.

    1995-04-01

    A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150 degrees C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases.

  4. Optimality-theoretic pragmatics meets experimental pragmatics

    NARCIS (Netherlands)

    Blutner, R.; Benz, A.; Blutner, R.

    2009-01-01

    The main concern of this article is to discuss some recent findings concerning the psychological reality of optimality-theoretic pragmatics and its central part - bidirectional optimization. A present challenge is to close the gap between experimental pragmatics and neo-Gricean theories of

  5. Optimality models in the age of experimental evolution and genomics

    OpenAIRE

    Bull, J. J.; Wang, I.-N.

    2010-01-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimen...

  6. Cost Optimal System Identification Experiment Design

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    A structural system identification experiment design method is formulated in the light of decision theory, structural reliability theory and optimization theory. The experiment design is based on a preposterior analysis, well-known from the classical decision theory. I.e. the decisions concerning...... reflecting the cost of the experiment and the value of obtained additional information. An example concerning design of an experiment for parametric identification of a single degree of freedom structural system shows the applicability of the experiment design method....... the experiment design are not based on obtained experimental data. Instead the decisions are based on the expected experimental data assumed to be obtained from the measurements, estimated based on prior information and engineering judgement. The design method provides a system identification experiment design...

  7. Evolutionary optimization of an experimental apparatus

    DEFF Research Database (Denmark)

    Geisel, Ilka; Cordes, Kai; Mahnke, Jan

    2013-01-01

    algorithm based on differential evolution. We demonstrate that this algorithm optimizes 21 correlated parameters and that it is robust against local maxima and experimental noise. The algorithm is flexible and easy to implement. Thus, the presented scheme can be applied to a wide range of experimental...

  8. Acoustic design by topology optimization

    DEFF Research Database (Denmark)

    Dühring, Maria Bayard; Jensen, Jakob Søndergaard; Sigmund, Ole

    2008-01-01

    To bring down noise levels in human surroundings is an important issue and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by Helmholtz equation and the topology optimization method is based on continuous material interpolation functions...... in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling...

  9. Experimental design a chemometric approach

    CERN Document Server

    Deming, SN

    1987-01-01

    Now available in a paperback edition is a book which has been described as ``...an exceptionally lucid, easy-to-read presentation... would be an excellent addition to the collection of every analytical chemist. I recommend it with great enthusiasm.'' (Analytical Chemistry). Unlike most current textbooks, it approaches experimental design from the point of view of the experimenter, rather than that of the statistician. As the reviewer in `Analytical Chemistry' went on to say: ``Deming and Morgan should be given high praise for bringing the principles of experimental design to the level of the p

  10. Experimental Design: Review and Comment.

    Science.gov (United States)

    1984-02-01

    creativity. Innovative modifications and extensions of classical experimental designs were developed and many useful articles were published in a short...Pyrazolone Industrielle," Bulletin de la Société Chimique de France, 11-12, 1171-1174. LI, K. C. (1983), "Minimaxity for Randomized Designs: Some

  11. Optimal design of condenser weight

    International Nuclear Information System (INIS)

    Zheng Jing; Yan Changqi; Wang Jianjun

    2011-01-01

    The condenser is an important component in nuclear power plants, whose dimensions and weight affect the economic performance and the layout of the plant. In this paper, a calculation model is established according to design experience. The corresponding codes are also developed, and the sensitivity of the design parameters that influence the condenser weight is analyzed. The design optimization of the condenser, taking weight minimization as the objective, is carried out with a self-developed complex-genetic algorithm. The results show that the reference condenser design is far from the best scheme, and also verify the feasibility of the complex-genetic algorithm. (authors)

  12. Optimization methods in structural design

    CERN Document Server

    Rothwell, Alan

    2017-01-01

    This book offers an introduction to numerical optimization methods in structural design. Employing a readily accessible and compact format, the book presents an overview of optimization methods, and equips readers to properly set up optimization problems and interpret the results. A ‘how-to-do-it’ approach is followed throughout, with less emphasis at this stage on mathematical derivations. The book features spreadsheet programs provided in Microsoft Excel, which allow readers to experience optimization ‘hands-on.’ Examples covered include truss structures, columns, beams, reinforced shell structures, stiffened panels and composite laminates. For the last three, a review of relevant analysis methods is included. Exercises, with solutions where appropriate, are also included with each chapter. The book offers a valuable resource for engineering students at the upper undergraduate and postgraduate level, as well as others in the industry and elsewhere who are new to these highly practical techniques.Whi...

  13. Telemanipulator design and optimization software

    Science.gov (United States)

    Cote, Jean; Pelletier, Michel

    1995-12-01

    For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator only has to be found once. The optimal configuration or position was often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base could be required to move during the execution of a task. At present, the choice of the initial position of the teleoperator is usually found empirically, which can be sufficient in the case of an easy or repetitive task. Otherwise, the amount of time wasted moving the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform or a better designed robot configuration could minimize these movements and save time. This paper presents two algorithms. The first algorithm is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment in which a task has to be executed. The second algorithm is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.

  14. Experimental validation of additively manufactured optimized shapes for passive cooling

    DEFF Research Database (Denmark)

    Lazarov, Boyan S.; Sigmund, Ole; Meyer, Knud E.

    2018-01-01

    This article confirms the superior performance of topology optimized heat sinks compared to lattice designs and suggests simpler manufacturable pin-fin design interpretations. The development is driven by the wide adoption of light-emitting-diode (LED) lamps for industrial and residential lighting....... The presented heat sink solutions are generated by topology optimization, a computational morphogenesis approach with ultimate design freedom, relying on high-performance computing and simulation. Optimized devices exhibit complex and organic-looking topologies which are realized with the help of additive...... manufacturing. To reduce manufacturing cost, a simplified interpretation of the optimized design is produced and validated as well. Numerical and experimental results agree well and indicate that the obtained designs outperform lattice geometries by more than 21%, resulting in a doubling of life expectancy and...

  15. Parameters optimization using experimental design for headspace solid phase micro-extraction analysis of short-chain chlorinated paraffins in waters under the European water framework directive.

    Science.gov (United States)

    Gandolfi, F; Malleret, L; Sergent, M; Doumenq, P

    2015-08-07

    The water framework directives (WFD 2000/60/EC and 2013/39/EU) require European countries to monitor the quality of their aquatic environment. Among the priority hazardous substances targeted by the WFD, short chain chlorinated paraffins C10-C13 (SCCPs) still represent an analytical challenge, because few laboratories are currently able to analyze them. Moreover, an annual average quality standard as low as 0.4 μg L−1 was set for SCCPs in surface water. Therefore, to test for compliance, sensitive and reliable methods for the analysis of SCCPs in water are required. The aim of this work was to address this issue by evaluating automated solid phase micro-extraction (SPME) combined on line with gas chromatography-electron capture negative ionization mass spectrometry (GC/ECNI-MS). Fiber polymer, extraction mode, ionic strength, extraction temperature and time were the most significant thermodynamic and kinetic parameters studied. To determine suitable working ranges for the factors, the extraction conditions were first studied using a classical one-factor-at-a-time approach. Then a mixed-level factorial 3×2^3 design was performed, in order to identify the most influential parameters and to estimate potential interaction effects between them. The most influential factors, i.e. extraction temperature and duration, were optimized by using a second experimental design, in order to maximize the chromatographic response. At the close of the study, a method involving headspace SPME (HS-SPME) coupled to GC/ECNI-MS is proposed. The optimum extraction conditions were a sample temperature of 90 °C, an extraction time of 80 min, the 100 μm PDMS fiber and desorption at 250 °C for 2 min. A linear response from 0.2 ng mL−1 to 10 ng mL−1 with r² = 0.99 and limits of detection and quantification of 4 pg mL−1 and 120 pg mL−1, respectively, in MilliQ water were achieved. The method proved to be applicable to different types of waters and shows key advantages, such
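
    A minimal sketch of enumerating a mixed-level 3×2^3 factorial such as the screening design above. Which factor carries three levels, and the level labels themselves, are assumptions made for illustration only.

    ```python
    import itertools

    # Assumed assignment of factors to levels, for illustration only.
    temperature = [40, 60, 80]                 # hypothetical three-level factor (°C)
    mode = ["direct immersion", "headspace"]
    ionic_strength = ["no salt", "salt added"]
    fiber = ["PDMS 100 um", "PA 85 um"]

    runs = list(itertools.product(temperature, mode, ionic_strength, fiber))
    print(len(runs), "runs")                   # 3 * 2 * 2 * 2 = 24
    for i, run in enumerate(runs[:3], 1):
        print(f"run {i}: {run}")
    ```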

  16. Experimental design in chemistry: A tutorial.

    Science.gov (United States)

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as well known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and increased quality of information, that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].

  17. Experimental design for the optimization of the extraction conditions of polycyclic aromatic hydrocarbons in milk with a novel diethoxydiphenylsilane solid-phase microextraction fiber.

    Science.gov (United States)

    Bianchi, F; Careri, M; Mangia, A; Mattarozzi, M; Musci, M

    2008-07-04

    An innovative solid-phase microextraction coating based on the use of diethoxydiphenylsilane synthesized by sol-gel technology was used for the determination of polycyclic aromatic hydrocarbons at trace levels in milk. The effects of extraction time and temperature and of acetone addition were investigated by experimental design. Regression models and desirability functions were applied to find the experimental conditions providing the highest global extraction response. The capabilities of the developed fiber were demonstrated by limit of quantitation values in the low μg/l range, enabling the direct analysis of complex matrices like milk and a complete desorption of high-boiling compounds without carryover effects.

  18. Formulation optimization of transdermal meloxicam potassium-loaded mesomorphic phases containing ethanol, oleic acid and mixture surfactant using the statistical experimental design methodology.

    Science.gov (United States)

    Huang, Chi-Te; Tsai, Chia-Hsun; Tsou, Hsin-Yeh; Huang, Yaw-Bin; Tsai, Yi-Hung; Wu, Pao-Chu

    2011-01-01

    Response surface methodology (RSM) was used to develop and optimize the mesomorphic phase formulation for a meloxicam transdermal dosage form. A mixture design was applied to prepare formulations consisting of three independent variables: oleic acid (X(1)), distilled water (X(2)) and ethanol (X(3)). The flux and lag time (LT) were selected as dependent variables. The results showed that using mesomorphic phases as vehicles can significantly increase the flux and shorten the LT of the drug. The analysis of variance showed that the permeation parameters of meloxicam from the formulations were significantly influenced by the independent variables and their interactions. X(3) (ethanol) had the greatest potential influence on the flux and LT, followed by X(1) and X(2). A new formulation was prepared according to the independent levels provided by RSM. The observed responses were in close agreement with the predicted values, demonstrating that RSM could be successfully used to optimize mesomorphic phase formulations.

  19. Application of experimental design in examination of the dissolution rate of carbamazepine from formulations: Characterization of the optimal formulation by DSC, TGA, FT-IR and PXRD analysis

    OpenAIRE

    Krstić Marko; Ražić Slavica; Vasiljević Dragana; Spasojević Đurđija; Ibrić Svetlana

    2015-01-01

    Poor solubility is one of the key reasons for the poor bioavailability of many drugs. This paper presents the formulation of a solid surfactant system with carbamazepine in order to increase its dissolution rate. Solid-state surfactant systems were formed by application of fractal experimental design. Poloxamer 237 and Poloxamer 338 were used as surfactants and Brij® 35 was used as the co-surfactant. The ratios of the excipients and carbamazepine were varied...

  20. Development of a novel pH sensor based upon Janus Green B immobilized on triacetyl cellulose membrane: Experimental design and optimization

    Science.gov (United States)

    Chamkouri, Narges; Niazi, Ali; Zare-Shahabadi, Vali

    2016-03-01

    A novel pH optical sensor was prepared by immobilizing an azo dye called Janus Green B on a triacetylcellulose membrane. The conditions of the dye solution used in the immobilization step, including dye concentration, pH, and immobilization time, were considered and optimized using a Box-Behnken design. The proposed sensor showed good behavior and precision (RSD < 5%) in the pH range of 2.0-10.0. Advantages of this optical sensor include on-line applicability, no leakage, long-term stability (more than 6 months), fast response time (less than 1 min), high selectivity and sensitivity as well as good reversibility and reproducibility.
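
    A minimal sketch of the standard three-factor Box-Behnken construction referred to above (edge midpoints of the coded cube plus centre runs). The mapping of coded factors to dye concentration, pH and immobilization time follows the record, while the number of centre points is an assumption.

    ```python
    import itertools
    import numpy as np

    k = 3                                                  # dye concentration, pH, time (coded units)
    runs = []
    for i, j in itertools.combinations(range(k), 2):       # vary two factors, hold the third at 0
        for a, b in itertools.product([-1, 1], repeat=2):
            row = np.zeros(k)
            row[i], row[j] = a, b
            runs.append(row)
    runs += [np.zeros(k)] * 3                              # assumed replicated centre points
    design = np.vstack(runs)
    print(design.shape)                                    # (15, 3)
    print(design[:4])
    ```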

  1. Computational Optimization of a Natural Laminar Flow Experimental Wing Glove

    Science.gov (United States)

    Hartshorn, Fletcher

    2012-01-01

    Computational optimization of a natural laminar flow experimental wing glove that is mounted on a business jet is presented and discussed. The process of designing a laminar flow wing glove starts with creating a two-dimensional optimized airfoil and then lofting it into a three-dimensional wing glove section. The airfoil design process does not consider three-dimensional flow effects such as cross flow due to wing sweep, or engine and body interference. Therefore, once an initial glove geometry is created from the airfoil, the three-dimensional wing glove has to be optimized to ensure that the desired extent of laminar flow is maintained over the entire glove. TRANAIR, a non-linear full potential solver with a coupled boundary layer code, was used as the main tool in the design and optimization process of the three-dimensional glove shape. The optimization process uses the Class-Shape-Transformation method to perturb the geometry with geometric constraints that allow for a 2-in clearance from the main wing. The three-dimensional glove shape was optimized with the objective of having a spanwise uniform pressure distribution that matches the optimized two-dimensional pressure distribution as closely as possible. Results show that with the appropriate inputs, the optimizer is able to match the two-dimensional pressure distributions practically across the entire span of the wing glove. This allows the experiment to have a much higher probability of achieving a large extent of natural laminar flow in flight.
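
    A minimal sketch of the Class-Shape-Transformation (CST) parameterization named above, using the common class exponents N1 = 0.5, N2 = 1.0 and hypothetical Bernstein coefficients. It illustrates the general formula only, not the glove geometry or the TRANAIR workflow.

    ```python
    import numpy as np
    from math import comb

    def cst_surface(x, coeffs, n1=0.5, n2=1.0):
        """CST surface: class function x^n1 (1-x)^n2 times a Bernstein shape function."""
        n = len(coeffs) - 1
        class_fn = x ** n1 * (1 - x) ** n2
        shape_fn = sum(a * comb(n, i) * x ** i * (1 - x) ** (n - i)
                       for i, a in enumerate(coeffs))
        return class_fn * shape_fn

    x = np.linspace(0.0, 1.0, 101)
    upper = cst_surface(x, [0.17, 0.16, 0.15, 0.14])   # hypothetical upper-surface coefficients
    print(round(float(upper.max()), 4))                # rough maximum of the parameterized surface
    ```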

  2. Design optimization applied in structural dynamics

    NARCIS (Netherlands)

    Akcay-Perdahcioglu, Didem; de Boer, Andries; van der Hoogt, Peter; Tiskarna, T

    2007-01-01

    This paper introduces the design optimization strategies, especially for structures which have dynamic constraints. Design optimization involves first the modeling and then the optimization of the problem. Utilizing the Finite Element (FE) model of a structure directly in an optimization process

  3. Design of microfluidic bioreactors using topology optimization

    DEFF Research Database (Denmark)

    Okkels, Fridolin; Bruus, Henrik

    2007-01-01

    We address the design of optimal reactors for supporting biological cultures using the method of topology optimization. For some years this method has been used to design various optimal microfluidic devices.1-4 We apply this method to optimally distribute biological cultures within a flow

  4. Optimal Design of Shock Tube Experiments for Parameter Inference

    KAUST Repository

    Bisetti, Fabrizio; Knio, Omar

    2014-01-01

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation

  5. Topology Optimization for Wave Propagation Problems with Experimental Validation

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk

    designed using the proposed method is provided. A novel approach for designing meta material slabs with selectively tuned negative refractive behavior is outlined. Numerical examples demonstrating the behavior of a slab under different conditions is provided. Results from an experimental studydemonstrating...... agreement with numerical predictions are presented. Finally an approach for designing acoustic wave shaping devices is treated. Three examples of applications are presented, a directional sound emission device, a wave splitting device and a flat focusing lens. Experimental results for the first two devices......This Thesis treats the development and experimental validation of density-based topology optimization methods for wave propagation problems. Problems in the frequency regime where design dimensions are between approximately one fourth and ten wavelengths are considered. All examples treat problems...

  6. Development of a novel pH sensor based upon Janus Green B immobilized on triacetyl cellulose membrane: Experimental design and optimization.

    Science.gov (United States)

    Chamkouri, Narges; Niazi, Ali; Zare-Shahabadi, Vali

    2016-03-05

    A novel pH optical sensor was prepared by immobilizing an azo dye called Janus Green B on the triacetylcellulose membrane. The conditions of the dye solution used in the immobilization step, including dye concentration, pH, and duration, were considered and optimized using the Box-Behnken design. The proposed sensor showed good behavior and precision (RSD < 5%) in the pH range of 2.0-10.0. Advantages of this optical sensor include on-line applicability, no leakage, long-term stability (more than 6 months), fast response time (less than 1 min), high selectivity and sensitivity as well as good reversibility and reproducibility. Copyright © 2015. Published by Elsevier B.V.

  7. Design of acoustic devices by topology optimization

    DEFF Research Database (Denmark)

    Sigmund, Ole; Jensen, Jakob Søndergaard

    2003-01-01

    The goal of this study is to design and optimize structures and devices that are subjected to acoustic waves. Examples are acoustic lenses, sound walls, waveguides and loud speakers. We formulate the design problem as a topology optimization problem, i.e. distribute material in a design domain...... such that the acoustic response is optimized....

  8. A hydrometallurgical process for the recovery of terbium from fluorescent lamps: Experimental design, optimization of acid leaching process and process analysis.

    Science.gov (United States)

    Innocenzi, Valentina; Ippolito, Nicolò Maria; De Michelis, Ida; Medici, Franco; Vegliò, Francesco

    2016-12-15

    The objective of this study was the recovery of terbium and rare earths from fluorescent powders of exhausted lamps by acid leaching with hydrochloric acid. In order to investigate the factors affecting leaching, a series of experiments was performed according to a full factorial plan with four variables at two levels (2^4). The factors studied were temperature, concentration of acid, pulp density and leaching time. The experimental conditions for terbium dissolution were optimized by statistical analysis. The results showed that temperature and pulp density were significant, with a positive and a negative effect, respectively. The empirical mathematical model deduced from the experimental data demonstrated that the terbium content was completely dissolved under the following conditions: 90 °C, 2 M hydrochloric acid and 5% pulp density; when the pulp density was 15%, an extraction of 83% could be obtained at 90 °C and 5 M hydrochloric acid. Finally, a flow sheet for the recovery of rare earth elements was proposed. The process was tested and simulated with commercial software for chemical processes. The mass balance of the process was calculated: from 1 ton of initial powder it was possible to obtain around 160 kg of a rare earth concentrate with a purity of 99%. The main rare earth element in the final product was yttrium oxide (86.43%), followed by cerium oxide (4.11%), lanthanum oxide (3.18%), europium oxide (3.08%) and terbium oxide (2.20%). The estimated total recovery of the rare earth elements was around 70% for yttrium and europium and 80% for the other rare earths. Copyright © 2016 Elsevier Ltd. All rights reserved.
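
    A minimal sketch of how main effects are estimated from a coded 2^4 full factorial such as the leaching plan above. The yields are synthetic placeholders chosen to mimic a positive temperature effect and a negative pulp-density effect, not the measured data.

    ```python
    import itertools
    import numpy as np

    design = np.array(list(itertools.product([-1, 1], repeat=4)))   # 16 coded runs
    rng = np.random.default_rng(3)
    # Placeholder terbium yields with built-in temperature (+) and pulp-density (-) effects.
    yield_tb = 70 + 8 * design[:, 0] - 6 * design[:, 2] + rng.normal(0, 1, size=16)

    names = ["temperature", "acid concentration", "pulp density", "leaching time"]
    effects = design.T @ yield_tb / 8                                # contrast divided by N/2
    for name, eff in zip(names, effects):
        print(f"{name}: main effect {eff:+.2f}")
    ```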

  9. Statistical experimental design for refractory coatings

    International Nuclear Information System (INIS)

    McKinnon, J.A.; Standard, O.C.

    2000-01-01

    The production of refractory coatings on metal casting moulds is critically dependent on the development of suitable rheological characteristics, such as viscosity and thixotropy, in the initial coating slurry. In this paper, the basic concepts of mixture design and analysis are applied to the formulation of a refractory coating, with illustration by a worked example. Experimental data of coating viscosity versus composition are fitted to a statistical model to obtain a reliable method of predicting the optimal formulation of the coating. Copyright (2000) The Australian Ceramic Society

  10. A novel homocystine-agarose adsorbent for separation and preconcentration of nickel in table salt and baking soda using factorial design optimization of the experimental conditions.

    Science.gov (United States)

    Hashemi, Payman; Rahmani, Zohreh

    2006-02-28

    Homocystine was, for the first time, chemically linked to a highly cross-linked agarose support (Novarose) to be employed as a chelating adsorbent for the preconcentration and AAS determination of nickel in table salt and baking soda. Nickel is quantitatively adsorbed on a small column packed with 0.25 ml of the adsorbent, in a pH range of 5.5-6.5, and simply eluted with 5 ml of a 1 mol l−1 hydrochloric acid solution. A factorial design was used for optimization of the effects of five different variables on the recovery of nickel. The results indicated that the factors of flow rate and column length, and the interaction between pH and sample volume, are significant. Under the optimized conditions, the column could tolerate salt concentrations up to 0.5 mol l−1 and sample volumes beyond 500 ml. Matrix ions of Mg(2+) and Ca(2+), at a concentration of 200 mg l−1, and potentially interfering ions of Cd(2+), Cu(2+), Zn(2+) and Mn(2+), at a concentration of 10 mg l−1, did not have a significant effect on the analyte's signal. Preconcentration factors up to 100 and a detection limit of 0.49 μg l−1, corresponding to an enrichment volume of 500 ml, were obtained for the determination of the analyte by flame AAS. Application of the method to the determination of natural and spiked nickel in table salt and baking soda solutions resulted in quantitative recoveries. Direct ETAAS determination of nickel in the same samples was not possible because of a high background observed.

  11. A novel magnetic metal organic framework nanocomposite for extraction and preconcentration of heavy metal ions, and its optimization via experimental design methodology

    International Nuclear Information System (INIS)

    Taghizadeh, Mohsen; Asgharinezhad, Ali Akbar; Pooladi, Mohsen; Barzin, Mahnaz; Abbaszadeh, Abolfazl; Tadjarodi, Azadeh

    2013-01-01

    We describe a novel magnetic metal-organic framework (MOF) prepared from dithizone-modified Fe3O4 nanoparticles and a copper-(benzene-1,3,5-tricarboxylate) MOF, and its use in the preconcentration of Cd(II), Pb(II), Ni(II), and Zn(II) ions. The parameters affecting preconcentration were optimized by a Box-Behnken design through response surface methodology. Three variables (extraction time, amount of the magnetic sorbent, and pH value) were selected as the main factors affecting adsorption, while four variables (type, volume and concentration of the eluent; desorption time) were selected for desorption in the optimization study. Following preconcentration and elution, the ions were quantified by FAAS. The limits of detection are 0.12, 0.39, 0.98, and 1.2 ng mL−1 for Cd(II), Zn(II), Ni(II), and Pb(II) ions, respectively. The relative standard deviations were −1 of Cd(II), Zn(II), Ni(II), and Pb(II) ions. The adsorption capacities (in mg g−1) of this new MOF are 188 for Cd(II), 104 for Pb(II), 98 for Ni(II), and 206 for Zn(II). The magnetic MOF nanocomposite has a higher capacity than the Fe3O4/dithizone conjugate. This magnetic MOF nanocomposite was successfully applied to the rapid extraction of trace quantities of heavy metal ions in fish, sediment, soil, and water samples. (author)

  12. Experimental Methods for the Analysis of Optimization Algorithms

    DEFF Research Database (Denmark)

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different...... in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies. This book is written for researchers and practitioners in operations research and computer science who wish to improve the experimental assessment......

  13. Quasi-Experimental Designs for Causal Inference

    Science.gov (United States)

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  14. Optimality models in the age of experimental evolution and genomics.

    Science.gov (United States)

    Bull, J J; Wang, I-N

    2010-09-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimental context with a well-researched organism allows dissection of the evolutionary process to identify causes of model failure--whether the model is wrong about genetics or selection. Second, optimality models provide a meaningful context for the process and mechanics of evolution, and thus may be used to elicit realistic genetic bases of adaptation--an especially useful augmentation to well-researched genetic systems. A few studies of microbes have begun to pioneer this new direction. Incompatibility between the assumed and actual genetics has been demonstrated to be the cause of model failure in some cases. More interestingly, evolution at the phenotypic level has sometimes matched prediction even though the adaptive mutations defy mechanisms established by decades of classic genetic studies. Integration of experimental evolutionary tests with genetics heralds a new wave for optimality models and their extensions that does not merely emphasize the forces driving evolution.

  15. Experimental and numerical comparison of absorption optimization in small rooms

    DEFF Research Database (Denmark)

    Wincentz, Jakob Nygård; Garcia, Julian Martinez-Villalba; Jeong, Cheol-Ho

    2016-01-01

    ......the Schroeder frequency. This project investigates experimentally changes in the room acoustic parameters by altering the positioning and orientation of porous materials in a small room, which are compared with finite element method (FEM) simulations. FEM is able to take into account the exact room geometry......, boundary conditions, and phase information, providing accuracy at low frequencies. Good agreements are found between measurements and simulations, confirming that FEM can be used as a design tool for optimizing absorption and acoustic parameters in small rooms...

  16. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  17. Optimal Design of Stiffeners for Bucket Foundations

    DEFF Research Database (Denmark)

    Courtney, William Tucker; Stolpe, Mathias; Buhl, Thomas

    2015-01-01

    ......Tosca Structure coupled with the finite element software Abaqus. The solutions to these optimization problems are then manually interpreted as a new design concept. Results show that shape optimization of the initial design can reduce stress concentrations by 38%. Additionally, topology optimization has...

  18. Network inference via adaptive optimal design

    Directory of Open Access Journals (Sweden)

    Stigter Johannes D

    2012-09-01

    Full Text Available Abstract Background Current research in network reverse engineering for genetic or metabolic networks very often does not include a proper experimental and/or input design. In this paper we address this issue in more detail and suggest a method that includes an iterative design of experiments based on the most recent data that become available. The presented approach allows a reliable reconstruction of the network and addresses an important issue, i.e., the analysis and the propagation of uncertainties as they exist in both the data and in our own knowledge. These two types of uncertainties have their immediate ramifications for the uncertainties in the parameter estimates and, hence, are taken into account from the very beginning of our experimental design. Findings The method is demonstrated for two small networks that include a genetic network for mRNA synthesis and degradation and an oscillatory network describing a molecular network underlying adenosine 3'-5' cyclic monophosphate (cAMP) as observed in populations of Dictyostelium cells. In both cases a substantial reduction in parameter uncertainty was observed. Extension to larger scale networks is possible but needs a more rigorous parameter estimation algorithm that includes sparsity as a constraint in the optimization procedure. Conclusion We conclude that a careful experimental design very often (but not always) pays off in terms of reliability in the inferred network topology. For large scale networks a better parameter estimation algorithm is required that includes sparsity as an additional constraint. These algorithms are available in the literature and can also be used in an adaptive optimal design setting as demonstrated in this paper.

  19. Application of experimental design and derivative spectrophotometry methods in optimization and analysis of biosorption of binary mixtures of basic dyes from aqueous solutions.

    Science.gov (United States)

    Asfaram, Arash; Ghaedi, Mehrorang; Ghezelbash, Gholam Reza; Pepe, Francesco

    2017-05-01

    Simultaneous biosorption of malachite green (MG) and crystal violet (CV) on the biosorbent Yarrowia lipolytica ISF7 was studied. An appropriate derivative spectrophotometry technique was used to evaluate the concentration of each dye in binary solutions, despite significant interferences in the visible light absorbances. The effects of pH, temperature, growth time and initial MG and CV concentrations in batch experiments were assessed using Design of Experiments (DOE) according to a central composite second-order response surface methodology (RSM). The analysis showed that the greatest biosorption efficiency (>99% for both dyes) can be obtained at pH 7.0, T = 28 °C, 24 h mixing and 20 mg L-1 initial concentrations of both the MG and CV dyes. The ability of the fitted quadratic equation to describe the experimental data was judged on criteria such as the R2 value, the significance of the p-values and the lack-of-fit test, which strongly confirm its adequacy and applicability for predicting the behaviour of the system under study. The proposed model showed very high correlation coefficients (R2 = 0.9997 for CV and R2 = 0.9989 for MG), supported by the closeness of predicted and experimental values. A kinetic analysis was carried out, showing that for both dyes a pseudo-second-order kinetic model adequately describes the available data. The Langmuir isotherm model in single and binary components has better performance for the description of dye biosorption, with maximum monolayer biosorption capacities of 59.4 and 62.7 mg g-1 in single-component systems and 46.4 and 50.0 mg g-1 in binary systems for CV and MG, respectively. The surface structure of the biosorbent and the possible biosorbent-dye interactions were also evaluated by Fourier transform infrared (FT-IR) spectroscopy and scanning electron microscopy (SEM). The values of the thermodynamic parameters, including ΔG° and ΔH°, strongly confirm that the process is spontaneous and endothermic. Copyright © 2017. Published by Elsevier Inc.
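    The kinetic and isotherm analysis mentioned above reduces to two small curve fits. The sketch below uses synthetic data, not the authors' measurements, to fit a pseudo-second-order kinetic model and a Langmuir isotherm with SciPy.

```python
# Sketch: pseudo-second-order kinetics and Langmuir isotherm fits (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    """Pseudo-second-order uptake q(t) = k*qe^2*t / (1 + k*qe*t)."""
    return k * qe**2 * t / (1.0 + k * qe * t)

def langmuir(ce, qmax, KL):
    """Langmuir isotherm qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * ce / (1.0 + KL * ce)

rng = np.random.default_rng(1)

# Hypothetical kinetic data (uptake in mg/g vs. time in min).
t = np.linspace(1, 60, 12)
q_t = pso(t, qe=58.0, k=0.01) + rng.normal(0, 0.5, t.size)
(qe_fit, k_fit), _ = curve_fit(pso, t, q_t, p0=[50, 0.01])

# Hypothetical equilibrium data (uptake in mg/g vs. concentration in mg/L).
ce = np.linspace(1, 40, 10)
q_e = langmuir(ce, qmax=60.0, KL=0.3) + rng.normal(0, 0.8, ce.size)
(qmax_fit, KL_fit), _ = curve_fit(langmuir, ce, q_e, p0=[50, 0.1])

print(f"pseudo-second-order: qe = {qe_fit:.1f} mg/g, k = {k_fit:.4f} g/(mg*min)")
print(f"Langmuir: qmax = {qmax_fit:.1f} mg/g, KL = {KL_fit:.3f} L/mg")
```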

  20. Experimental designs for autoregressive models applied to industrial maintenance

    International Nuclear Information System (INIS)

    Amo-Salas, M.; López-Fidalgo, J.; Pedregal, D.J.

    2015-01-01

    Some time series applications require data which are either expensive or technically difficult to obtain. In such cases, scheduling the points in time at which the information should be collected is of paramount importance in order to optimize the available resources. In this paper, time series models are studied from a new perspective, consisting in the use of the Optimal Experimental Design setup to obtain the best times at which to take measurements, with the principal aim of saving costs or discarding useless information. The model and the covariance function are expressed in an explicit form so that the usual techniques of Optimal Experimental Design can be applied. Optimal designs for various approaches are computed and their efficiencies are compared. The methods are demonstrated in an application to the industrial maintenance of a critical piece of equipment at a petrochemical plant. This simple model allows explicit calculations that openly show the procedure for finding the correlation structure needed to compute the optimal experimental design. The techniques used in this paper to compute optimal designs may be transferred to other situations following the same ideas, taking into account the increasing difficulty of the procedure for more complex models. - Highlights: • Optimal experimental design theory is applied to AR models to reduce costs. • The first observation has an important impact on any optimal design. • Either a lack of precision or small starting observations calls for large times. • Reasonable optimal times were obtained by relaxing the efficiency slightly. • Optimal designs were computed in a predictive maintenance context
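    For a simple illustrative case, a linear trend observed with AR(1)-correlated errors, the design comparison described above amounts to evaluating the D-criterion det(X' Sigma^-1 X) for candidate sets of observation times. The sketch below assumes that model and invented parameter values; it is not the paper's exact covariance structure.

```python
# Sketch: compare candidate measurement schedules for a linear trend
# y(t) = b0 + b1*t observed with AR(1)-correlated errors,
# using the D-criterion det(X' Sigma^-1 X).  Illustrative model only.
import numpy as np

def d_criterion(times, rho=0.7, sigma2=1.0):
    t = np.asarray(times, dtype=float)
    X = np.column_stack([np.ones_like(t), t])                  # trend regressors
    Sigma = sigma2 * rho ** np.abs(t[:, None] - t[None, :])    # AR(1) covariance
    M = X.T @ np.linalg.solve(Sigma, X)                        # information matrix
    return np.linalg.det(M)

candidates = {
    "equally spaced": np.linspace(0, 10, 6),
    "end loaded":     np.array([0, 0.5, 1.0, 9.0, 9.5, 10.0]),
    "front loaded":   np.array([0, 0.3, 0.6, 0.9, 1.2, 10.0]),
}
for name, times in candidates.items():
    print(f"{name:>15s}: det(M) = {d_criterion(times):.2f}")
```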

  1. Vehicle systems design optimization study

    Energy Technology Data Exchange (ETDEWEB)

    Gilmour, J. L.

    1980-04-01

    The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current production internal combustion engine vehicles. It is possible to achieve this goal and also provide passenger and cargo space comparable to a selected current production sub-compact car, either in a unique new design or by utilizing the production vehicle as a base. Necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages - one at the front under the hood and a second at the rear under the cargo area - in order to achieve the desired weight distribution. The weight distribution criteria require the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.

  2. Application of experimental design in examination of the dissolution rate of carbamazepine from formulations: Characterization of the optimal formulation by DSC, TGA, FT-IR and PXRD analysis

    Directory of Open Access Journals (Sweden)

    Krstić Marko

    2015-01-01

    Full Text Available Poor solubility is one of the key reasons for the poor bioavailability of many drugs, including carbamazepine. This paper presents the formulation of a solid surfactant system with carbamazepine in order to increase its dissolution rate. The solid-state surfactant systems were formed by application of a factorial experimental design. Poloxamer 237 and Poloxamer 338 were used as surfactants and Brij® 35 was used as the co-surfactant. The ratios of the excipients and carbamazepine were varied and their effects on the dissolution rate of carbamazepine were examined. Moreover, the effects of the addition of a natural (diatomite) and a synthetic (Neusiline UFL2) adsorbent carrier on the dissolution rate of carbamazepine were also tested. The prepared surfactant systems were characterized and the influence of the excipients on possible changes of the polymorphous form of carbamazepine was examined by application of analytical techniques (DSC, TGA, FT-IR, PXRD). It was determined that an appropriate selection of the excipient type and ratio could provide a significant increase in the carbamazepine dissolution rate. By application of the analytical techniques, it was found that the employed excipients induce a transition of carbamazepine into the amorphous form and that the selected sample was stable for three months when kept under ambient conditions. [Project of the Ministry of Science of the Republic of Serbia, No. TR34007]

  3. Design Optimization of Internal Flow Devices

    DEFF Research Database (Denmark)

    Madsen, Jens Ingemann

    The power of computational fluid dynamics is boosted through the use of automated design optimization methodologies. The thesis considers both derivative-based search optimization and the use of response surface methodologies.

  4. WE-AB-BRB-01: Development of a Probe-Format Graphite Calorimeter for Practical Clinical Dosimetry: Numerical Design Optimization, Prototyping, and Experimental Proof-Of-Concept

    International Nuclear Information System (INIS)

    Renaud, J; Seuntjens, J; Sarfehnia, A

    2015-01-01

    Purpose: In this work, the feasibility of performing absolute dose to water measurements using a constant temperature graphite probe calorimeter (GPC) in a clinical environment is established. Methods: A numerical design optimization study was conducted by simulating the heat transfer in the GPC resulting from irradiation using a finite element method software package. The choice of device shape, dimensions, and materials was made to minimize the heat loss in the sensitive volume of the GPC. The resulting design, which incorporates a novel aerogel-based thermal insulator and 15 temperature-sensitive resistors capable of both Joule heating and measuring temperature, was constructed in house. A software-based process controller was developed to stabilize the temperatures of the GPC's constituent graphite components to within a few tens of µK. This control system enables the GPC to operate in either the quasi-adiabatic or isothermal mode, two well-known and independent calorimetry techniques. Absorbed dose to water measurements were made using these two methods under standard conditions in a 6 MV 1000 MU/min photon beam and subsequently compared against TG-51 derived values. Results: Compared to an expected dose to water of 76.9 cGy/100 MU, the average GPC-measured doses were 76.5 ± 0.5 and 76.9 ± 0.5 cGy/100 MU for the adiabatic and isothermal modes, respectively. The Monte Carlo calculated graphite-to-water dose conversion was 1.013, and the adiabatic heat loss correction was 1.003. With an overall uncertainty of about 1%, the most significant contributions were the specific heat capacity (type B, 0.8%) and the repeatability (type A, 0.6%). Conclusion: While the quasi-adiabatic mode of operation had been validated in previous work, this is the first time that the GPC has been successfully used isothermally. This proof-of-concept will serve as the basis for further study into the GPC's application to small fields and MRI-linac dosimetry. This work has been
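    In quasi-adiabatic graphite calorimetry the measured dose reduces to a short calculation: the graphite dose is the specific heat capacity times the radiation-induced temperature rise, corrected for heat loss and converted to dose to water. The sketch below uses a nominal graphite specific heat and an assumed temperature rise purely for illustration; only the two correction factors (1.013 and 1.003) come from the abstract.

```python
# Sketch of the quasi-adiabatic dose calculation (illustrative numbers).
c_graphite = 710.0           # J/(kg K), nominal specific heat of graphite (assumed)
delta_T = 1.07e-3            # K, assumed radiation-induced temperature rise
k_heat_loss = 1.003          # adiabatic heat-loss correction (from the abstract)
k_graphite_to_water = 1.013  # Monte Carlo dose conversion (from the abstract)

dose_graphite = c_graphite * delta_T * k_heat_loss        # Gy (J/kg)
dose_water = dose_graphite * k_graphite_to_water

print(f"Dose to graphite: {dose_graphite * 100:.1f} cGy")
print(f"Dose to water:    {dose_water * 100:.1f} cGy")
```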

  5. Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort.

    Science.gov (United States)

    Jeschek, Markus; Gerngross, Daniel; Panke, Sven

    2016-03-31

    Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways.

  6. Solar-Diesel Hybrid Power System Optimization and Experimental Validation

    Science.gov (United States)

    Jacobus, Headley Stewart

    As of 2008 1.46 billion people, or 22 percent of the World's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but new hybrid power systems are becoming a reliable method to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable Operational Costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system is used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies that frequently lack subsequent validation and experimental hybrid system performance studies.

  7. Design Optimization Toolkit: Users' Manual

    Energy Technology Data Exchange (ETDEWEB)

    Aguilo Valentin, Miguel Alejandro [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computational Solid Mechanics and Structural Dynamics

    2014-07-01

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  8. Flat-plate photovoltaic array design optimization

    Science.gov (United States)

    Ross, R. G., Jr.

    1980-01-01

    An analysis is presented which integrates the results of specific studies in the areas of photovoltaic structural design optimization, optimization of array series/parallel circuit design, thermal design optimization, and optimization of environmental protection features. The analysis is based on minimizing the total photovoltaic system life-cycle energy cost including repair and replacement of failed cells and modules. This approach is shown to be a useful technique for array optimization, particularly when time-dependent parameters such as array degradation and maintenance are involved.

  9. Comparison of optimal design methods in inverse problems

    International Nuclear Information System (INIS)

    Banks, H T; Holm, K; Kappel, F

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst–Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667–77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136–68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979–90)
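    The Fisher-information machinery behind such criteria is easy to prototype: for the Verhulst-Pearl logistic model one can compute parameter sensitivities at candidate sampling times and greedily pick the times that maximize det(F), i.e. a D-type criterion. The rough sketch below is only an illustration of that idea, not the authors' Prohorov-metric framework; the nominal parameter values and noise model are invented.

```python
# Sketch: greedy D-optimal choice of sampling times for the logistic model
# x(t) = K*x0*exp(r t) / (K + x0*(exp(r t) - 1)), sensitivities by finite differences.
import numpy as np

def logistic(t, theta):
    K, r, x0 = theta
    e = np.exp(r * t)
    return K * x0 * e / (K + x0 * (e - 1.0))

def sensitivities(t, theta, h=1e-6):
    """d x(t)/d theta_j via central differences; shape (len(t), 3)."""
    t = np.asarray(t, dtype=float)
    S = np.zeros((t.size, len(theta)))
    for j in range(len(theta)):
        up, dn = np.array(theta, float), np.array(theta, float)
        up[j] += h * max(1.0, abs(theta[j]))
        dn[j] -= h * max(1.0, abs(theta[j]))
        S[:, j] = (logistic(t, up) - logistic(t, dn)) / (up[j] - dn[j])
    return S

theta = (17.5, 0.7, 0.1)              # nominal K, r, x0 (illustrative values)
grid = np.linspace(0.1, 25.0, 250)    # candidate sampling times
chosen = []
for _ in range(8):                    # pick 8 sampling times greedily
    best_t, best_det = None, -np.inf
    for t in grid:
        S = sensitivities(np.array(chosen + [t]), theta)
        F = S.T @ S                   # Fisher information (unit noise variance)
        d = np.linalg.det(F + 1e-12 * np.eye(3))  # regularize early iterations
        if d > best_det:
            best_t, best_det = t, d
    chosen.append(best_t)

print("greedy D-optimal sampling times:", np.round(sorted(chosen), 2))
```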

  10. Comparison of optimal design methods in inverse problems

    Science.gov (United States)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).

  11. Design of an Optimal Biorefinery

    DEFF Research Database (Denmark)

    Nawaz, Muhammad; Zondervan, Edwin; Woodley, John

    2011-01-01

    In this paper we propose a biorefinery optimization model that can be used to find the optimal processing route for the production of ethanol, butanol, succinic acid and blends of these chemicals with fossil fuel based gasoline. The approach unites transshipment models with a superstructure...

  12. Optimal design criteria - prediction vs. parameter estimation

    Science.gov (United States)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice we cannot really find the G-optimal design with the computer equipment available today. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region; in doing so, the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
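    The G-criterion itself is easy to evaluate on a small example: for a simple squared-exponential covariance, the simple-kriging variance at x is sigma^2 - k(x)' K^-1 k(x), and a design's G-score is its maximum over a prediction grid. The 1-D sketch below uses assumed covariance parameters and toy candidate designs, only to make the comparison concrete.

```python
# Sketch: compare designs by maximum (simple) kriging variance on [0, 1]
# with a squared-exponential covariance (assumed parameters).
import numpy as np

def cov(a, b, sigma2=1.0, ell=0.15):
    d = np.subtract.outer(a, b)
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

def max_kriging_variance(design, grid, sigma2=1.0, ell=0.15, nugget=1e-8):
    K = cov(design, design, sigma2, ell) + nugget * np.eye(len(design))
    k = cov(grid, design, sigma2, ell)                      # (n_grid, n_design)
    # kriging variance at each grid point: sigma^2 - k' K^-1 k
    var = sigma2 - np.einsum("ij,ij->i", k, np.linalg.solve(K, k.T).T)
    return var.max()

grid = np.linspace(0, 1, 401)
designs = {
    "space filling": np.linspace(0.05, 0.95, 7),
    "clustered":     np.array([0.1, 0.15, 0.2, 0.5, 0.8, 0.85, 0.9]),
    "random":        np.sort(np.random.default_rng(3).uniform(0, 1, 7)),
}
for name, d in designs.items():
    print(f"{name:>14s}: max kriging variance = {max_kriging_variance(d, grid):.4f}")
```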

  13. Collaborative Systems Driven Aircraft Configuration Design Optimization

    OpenAIRE

    Shiva Prakasha, Prajwal; Ciampa, Pier Davide; Nagel, Björn

    2016-01-01

    A Collaborative, Inside-Out Aircraft Design approach is presented in this paper. The approach uses physics-based analysis to evaluate the correlations between the airframe design and sub-systems integration from the early design process, and to exploit the synergies within a simultaneous optimization process. Further, the disciplinary analysis modules involved in the optimization task are located in different organizations. Hence, the Airframe and Subsystem design tools are integrated ...

  14. Transportation package design using numerical optimization

    International Nuclear Information System (INIS)

    Harding, D.C.; Witkowski, W.R.

    1991-01-01

    The purpose of this overview is twofold: first, to outline the theory and basic elements of numerical optimization; and second, to show how numerical optimization can be applied to the transportation packaging industry and used to increase efficiency and safety of radioactive and hazardous material transportation packages. A more extensive review of numerical optimization and its applications to radioactive material transportation package design was performed previously by the authors (Witkowski and Harding 1992). A proof-of-concept Type B package design is also presented as a simplified example of potential improvements achievable using numerical optimization in the design process

  15. Optimization of fruit punch using mixture design.

    Science.gov (United States)

    Kumar, S Bharath; Ravi, R; Saraswathi, G

    2010-01-01

    A highly acceptable dehydrated fruit punch was developed with selected fruits, namely lemon, orange, and mango, using a mixture design and optimization technique. The fruit juices were freeze dried, powdered, and used in the reconstitution studies. Fruit punches were prepared according to the experimental design combinations (total 10) based on a mixture design and then subjected to sensory evaluation for acceptability. Response surfaces of sensory attributes were also generated as a function of fruit juices. Analysis of data revealed that the fruit punch prepared using 66% of mango, 33% of orange, and 1% of lemon had highly desirable sensory scores for color (6.00), body (5.92), sweetness (5.68), and pleasantness (5.94). The aroma pattern of individual as well as combinations of fruit juices were also analyzed by electronic nose. The electronic nose could discriminate the aroma patterns of individual as well as fruit juice combinations by mixture design. The results provide information on the sensory quality of best fruit punch formulations liked by the consumer panel based on lemon, orange, and mango.
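    The mixture-design workflow above can be prototyped quickly: generate a simplex design over the three juice proportions (which must sum to one) and fit a Scheffe quadratic model to the sensory scores. The sketch below uses made-up scores, not the study's panel data.

```python
# Sketch: simplex-centroid design for a 3-component mixture (lemon, orange, mango)
# and a Scheffe quadratic model fit to hypothetical sensory scores.
import itertools
import numpy as np

components = ["lemon", "orange", "mango"]

# Simplex-centroid points: pure blends, binary 50/50 blends, overall centroid.
points = []
for k in range(1, 4):
    for subset in itertools.combinations(range(3), k):
        row = np.zeros(3)
        row[list(subset)] = 1.0 / k
        points.append(row)
X = np.array(points)                      # 7 mixtures, each row sums to 1

rng = np.random.default_rng(4)
score = 3 + 4 * X[:, 2] + 2 * X[:, 1] + 3 * X[:, 1] * X[:, 2] \
        + rng.normal(0, 0.1, len(X))      # hypothetical overall liking

# Scheffe quadratic model: sum_i b_i x_i + sum_{i<j} b_ij x_i x_j (no intercept).
Z = np.column_stack([X] + [X[:, i] * X[:, j]
                           for i, j in itertools.combinations(range(3), 2)])
coef, *_ = np.linalg.lstsq(Z, score, rcond=None)
print(dict(zip(components + ["lemon*orange", "lemon*mango", "orange*mango"],
               np.round(coef, 2))))
```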

  16. Optimal design of lossy bandgap structures

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard

    2004-01-01

    The method of topology optimization is used to design structures for wave propagation with one lossy material component. Optimized designs for scalar elastic waves are presented for minimum wave transmission as well as for maximum wave energy dissipation. The structures that are obtained...... are of the 1D or 2D bandgap type depending on the objective and the material parameters....

  17. Design of Thermal Systems Using Topology Optimization

    DEFF Research Database (Denmark)

    Haertel, Jan Hendrik Klaas

    The goal of this thesis is to apply topology optimization to the design of different thermal systems, such as heat sinks and heat exchangers, in order to improve the thermal performance of these systems compared to conventional designs. The design of thermal systems is a complex task that has...... of optimized designs are presented within this thesis. The main contribution of the thesis is the development of several numerical optimization models that are applied to different design challenges within thermal engineering. Topology optimization is applied in an industrial project to design the heat rejection...... printed dry-cooled power plant condensers using a simplified thermofluid topology optimization model is presented in another study. A benchmarking of the optimized geometries against a conventional heat exchanger design is conducted and the topology-optimized designs show a superior performance. A thermofluid......

  18. Design and fabrication of topologically optimized structures;

    DEFF Research Database (Denmark)

    Feringa, Jelle; Søndergaard, Asbjørn

    2012-01-01

    Integral structural optimization and fabrication seeks the synthesis of two original approaches: that of topology optimization (TO) and robotic hotwire cutting (HWC) (Mcgee 2011). TO allows for the reduction of up to 70% of the volume of concrete needed to support a given structure (Sondergaard...... & Dombernowsky 2011). A strength of the method is that it makes it possible to come up with structural designs that lie beyond the grasp of traditional means of design. A design space is a discretized volume delimiting where the optimization will take place. The number of cells used to discretize the design space thus...

  19. Experimental Designs Exercises and Solutions

    CERN Document Server

    Kabe, DG

    2007-01-01

    This volume provides a collection of exercises, together with their solutions, in the design and analysis of experiments. The theoretical results essential for understanding are given first. The exercises have been collected during the authors' teaching of courses over a long period of time. They are particularly helpful to students studying the design of experiments, as well as to instructors and researchers engaged in teaching and research on experimental design.

  20. EBTS: DESIGN AND EXPERIMENTAL STUDY

    International Nuclear Information System (INIS)

    PIKIN, A.; ALESSI, J.; BEEBE, E.; KPONOU, A.; PRELEC, K.; KUZNETSOV, G.; TIUNOV, M.

    2000-01-01

    Experimental study of the BNL Electron Beam Test Stand (EBTS), which is a prototype of the Relativistic Heavy Ion Collider (RHIC) Electron Beam Ion Source (EBIS), is currently underway. The basic physics and engineering aspects of a high current EBIS implemented in EBTS are outlined and construction of its main systems is presented. Efficient transmission of a 10 A electron beam through the ion trap has been achieved. Experimental results on generation of multiply charged ions with both continuous gas and external ion injection confirm stable operation of the ion trap

  1. From Cookbook to Experimental Design

    Science.gov (United States)

    Flannagan, Jenny Sue; McMillan, Rachel

    2009-01-01

    Developing expertise, whether from cook to chef or from student to scientist, occurs over time and requires encouragement, guidance, and support. One key goal of an elementary science program should be to move students toward expertise in their ability to design investigative questions. The ability to design a testable question is difficult for…

  2. Experimental broadband absorption enhancement in silicon nanohole structures with optimized complex unit cells.

    Science.gov (United States)

    Lin, Chenxi; Martínez, Luis Javier; Povinelli, Michelle L

    2013-09-09

    We design silicon membranes patterned with nanohole structures whose optimized complex unit cells maximize broadband absorption. We fabricate the optimized design and measure the optical absorption. We demonstrate an experimental broadband absorption about 3.5 times higher than that of an equally thick thin film.

  3. Enhancing product robustness in reliability-based design optimization

    International Nuclear Information System (INIS)

    Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping

    2015-01-01

    Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. The reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model that is derived from numerical computation methods, such as finite element analysis, are addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of the product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • An implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • A comparison with traditional RBDO is provided

  4. Optimized design of low energy buildings

    DEFF Research Database (Denmark)

    Rudbeck, Claus Christian; Esbensen, Peter Kjær; Svendsen, Sv Aa Højgaard

    1999-01-01

    ......concern which can be seen during the construction of new buildings. People want energy-friendly solutions, but they should be economically optimized. An economically optimized building design with respect to energy consumption is the design with the lowest total cost (investment plus operational cost over its...... to evaluate different separate solutions when they interact in the building. When trying to optimize several parameters there is a need for a method which will show the correct price-performance of each part of a building under design. The problem of not having such a method will first be shown...

  5. Optimization methods applied to hybrid vehicle design

    Science.gov (United States)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
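    The optimization step described above, choosing battery weight, engine rating and power split to minimize a cost measure subject to performance constraints, maps onto a small constrained nonlinear program. The sketch below uses an entirely invented cost model and constraint, purely to show the structure of such a formulation.

```python
# Sketch: constrained optimization over three hybrid-vehicle design variables
# (battery weight, engine rating, power split) with an invented cost model.
import numpy as np
from scipy.optimize import minimize

def life_cycle_cost(x):
    battery_kg, engine_kw, split = x
    acquisition = 0.02 * battery_kg + 0.5 * engine_kw
    fuel        = 40.0 * (1.0 - split) / (1.0 + 0.002 * engine_kw)
    electricity = 15.0 * split * (200.0 / battery_kg)
    return acquisition + fuel + electricity           # arbitrary cost units

def peak_power_margin(x):
    battery_kg, engine_kw, split = x
    return 0.15 * battery_kg + engine_kw - 80.0       # >= 0: meets power demand

x0 = np.array([300.0, 40.0, 0.5])
res = minimize(
    life_cycle_cost, x0, method="SLSQP",
    bounds=[(100, 600), (20, 100), (0.1, 0.9)],
    constraints=[{"type": "ineq", "fun": peak_power_margin}],
)
print("design:", np.round(res.x, 2), "cost:", round(res.fun, 2))
```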

  6. Optimization of 3D Field Design

    Science.gov (United States)

    Logan, Nikolas; Zhu, Caoxiang

    2017-10-01

    Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal ``arrays'' containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors or on any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.

  7. Quasi experimental designs in pharmacist intervention research.

    Science.gov (United States)

    Krass, Ines

    2016-06-01

    Background In the field of pharmacist intervention research it is often difficult to conform to the rigorous requirements of the "true experimental" models, especially the requirement of randomization. When randomization is not feasible, a practice-based researcher can choose from a range of "quasi-experimental designs", i.e., non-randomised and at times non-controlled designs. Objective The aim of this article was to provide an overview of quasi-experimental designs, discuss their strengths and weaknesses, and investigate their application in pharmacist intervention research over the previous decade. Results In the literature, quasi-experimental studies may be classified into five broad categories: quasi-experimental designs without control groups; quasi-experimental designs that use control groups with no pre-test; quasi-experimental designs that use control groups and pre-tests; interrupted time series; and stepped wedge designs. Quasi-experimental study design has consistently featured in the evolution of pharmacist intervention research. The most commonly applied of all quasi-experimental designs in the practice-based research literature are the one-group pre-post-test design and the non-equivalent control group design (i.e., untreated control group with dependent pre-tests and post-tests); these have been used to test the impact of pharmacist interventions in general medications management as well as in specific disease states. Conclusion Quasi-experimental studies have a role to play as proof of concept, in the pilot phases of interventions, and when testing different intervention components, especially in complex interventions. They serve to develop an understanding of possible intervention effects: while in isolation they yield weak evidence of clinical efficacy, taken collectively, they help build a body of evidence in support of the value of pharmacist interventions across different practice settings and countries. However, when a traditional RCT is not feasible for

  8. Application of machine/statistical learning, artificial intelligence and statistical experimental design for the modeling and optimization of methylene blue and Cd(ii) removal from a binary aqueous solution by natural walnut carbon.

    Science.gov (United States)

    Mazaheri, H; Ghaedi, M; Ahmadi Azqhandi, M H; Asfaram, A

    2017-05-10

    Analytical chemists apply statistical methods for both the validation and prediction of proposed models. Methods are required that are adequate for finding the typical features of a dataset, such as nonlinearities and interactions. Boosted regression trees (BRTs), as an ensemble technique, are fundamentally different from other conventional techniques, whose aim is to fit a single parsimonious model. In this work, BRT, artificial neural network (ANN) and response surface methodology (RSM) models have been used for the optimization and/or modeling of the stirring time (min), pH, adsorbent mass (mg) and concentrations of MB and Cd2+ ions (mg L-1) in order to develop respective predictive equations for simulation of the efficiency of MB and Cd2+ adsorption based on the experimental data set. Activated carbon, as an adsorbent, was synthesized from walnut wood waste, which is abundant, non-toxic, cheap and locally available. This adsorbent was characterized using different techniques such as FT-IR, BET, SEM, point of zero charge (pHpzc) and also the determination of oxygen-containing functional groups. The influence of the various parameters (i.e. pH, stirring time, adsorbent mass and concentrations of MB and Cd2+ ions) on the percentage removal was calculated by investigation of the sensitivity function and variable importance rankings (BRT) and by analysis of variance (RSM). Furthermore, a central composite design (CCD) combined with a desirability function approach (DFA) as a global optimization technique was used for the simultaneous optimization of the effective parameters. The applicability of the BRT, ANN and RSM models for the description of the experimental data was examined using four statistical criteria (absolute average deviation (AAD), mean absolute error (MAE), root mean square error (RMSE) and coefficient of determination (R2)). All three models demonstrated good predictions in this study. The BRT model was more precise compared to the other models and this showed
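    The model comparison above, boosted regression trees against a quadratic response surface scored by error metrics and R2, is straightforward to reproduce in outline with scikit-learn. The data below are synthetic placeholders for the adsorption measurements, and the variable ranges are invented.

```python
# Sketch: boosted regression trees vs. a quadratic response-surface polynomial
# on synthetic "removal efficiency" data, scored by MAE, RMSE and R2.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(5)
# Columns: pH, time (min), adsorbent mass (mg), dye conc., Cd conc. (mg/L)
X = rng.uniform([3, 5, 10, 5, 5], [9, 60, 60, 40, 40], size=(200, 5))
y = (60 + 4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     - 0.4 * X[:, 3] - 0.3 * X[:, 4] - 0.25 * X[:, 0] ** 2
     + rng.normal(0, 1.5, 200))          # hypothetical % removal

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "BRT": GradientBoostingRegressor(random_state=0),
    "RSM (quadratic)": make_pipeline(PolynomialFeatures(2), LinearRegression()),
}
for name, model in models.items():
    pred = model.fit(Xtr, ytr).predict(Xte)
    rmse = mean_squared_error(yte, pred) ** 0.5
    print(f"{name:>16s}: MAE={mean_absolute_error(yte, pred):.2f} "
          f"RMSE={rmse:.2f} R2={r2_score(yte, pred):.3f}")
```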

  9. Transportation package design using numerical optimization

    International Nuclear Information System (INIS)

    Harding, D.C.; Witkowski, W.R.

    1992-01-01

    The design of structures and engineering systems has always been an iterative process whose complexity was dependent upon the boundary conditions, constraints and available analytical tools. Transportation packaging design is no exception with structural, thermal and radiation shielding constraints based on regulatory hypothetical accident conditions. Transportation packaging design is often accomplished by a group of specialists, each designing a single component based on one or more simple criteria, pooling results with the group, evaluating the "pooled" design, and then reiterating the entire process until a satisfactory design is reached. The manual iterative methods used by the designer/analyst can be summarized in the following steps: design the part, analyze the part, interpret the analysis results, modify the part, and re-analyze the part. The inefficiency of this design practice and the frequently conservative result suggests the need for a more structured design methodology, which can simultaneously consider all of the design constraints. Numerical optimization is a structured design methodology whose maturity in development has allowed it to become a primary design tool in many industries. The purpose of this overview is twofold: first, to outline the theory and basic elements of numerical optimization; and second, to show how numerical optimization can be applied to the transportation packaging industry and used to increase efficiency and safety of radioactive and hazardous material transportation packages. A more extensive review of numerical optimization and its applications to radioactive material transportation package design was performed previously by the authors (Witkowski and Harding 1992). A proof-of-concept Type B package design is also presented as a simplified example of potential improvements achievable using numerical optimization in the design process

  10. Optimal Control Design for a Solar Greenhouse

    NARCIS (Netherlands)

    Ooteghem, van R.J.C.

    2010-01-01

    Abstract: An optimal climate control has been designed for a solar greenhouse to achieve optimal crop production with sustainable instead of fossil energy. The solar greenhouse extends a conventional greenhouse with an improved roof cover, ventilation with heat recovery, a heat pump, a heat

  11. Strategies for Optimal Design of Structural Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1992-01-01

    Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  12. OSHA and Experimental Safety Design.

    Science.gov (United States)

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely Occupational Safety and Health Administration (OSHA) be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and role/regulations of other agencies.…

  13. Minimum scale controlled topology optimization and experimental test of a micro thermal actuator

    DEFF Research Database (Denmark)

    Heo, S.; Yoon, Gil Ho; Kim, Y.Y.

    2008-01-01

    This paper is concerned with the optimal topology design, fabrication and test of a micro thermal actuator. Because the minimum scale was controlled during the design optimization process, the production yield rate of the actuator was improved considerably; alternatively, the optimization design without scale control resulted in a very low yield rate. Using the minimum scale controlling topology design method developed earlier by the authors, micro thermal actuators were designed and fabricated through a MEMS process. Moreover, both their performance and production yield were experimentally tested. The test showed that control over the minimum length scale in the design process greatly improves the yield rate and reduces the performance deviation.

  14. A new efficient mixture screening design for optimization of media.

    Science.gov (United States)

    Rispoli, Fred; Shah, Vishal

    2009-01-01

    Screening ingredients for the optimization of media is an important first step to reduce the many potential ingredients down to the vital few components. In this study, we propose a new method of screening for mixture experiments called the centroid screening design. Comparison of the proposed design with Plackett-Burman, fractional factorial, simplex lattice design, and modified mixture design shows that the centroid screening design is the most efficient of all the designs in terms of the small number of experimental runs needed and for detecting high-order interaction among ingredients. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.

  15. Vehicle systems design optimization study

    Science.gov (United States)

    Gilmour, J. L.

    1980-01-01

    The optimum vehicle configuration and component locations are determined for an electric drive vehicle based on using the basic structure of a current production subcompact vehicle. The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current internal combustion engine vehicles. Necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criteria requires the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.

  16. Systematic design of microstructures by topology optimization

    DEFF Research Database (Denmark)

    Sigmund, Ole

    2003-01-01

    The topology optimization method can be used to determine the material distribution in a design domain such that an objective function is maximized and constraints are fulfilled. The method which is based on Finite Element Analysis may be applied to all kinds of material distribution problems like...... extremal material design, sensor and actuator design and MEMS synthesis. The state-of-the-art in topology optimization will be reviewed and older as well as new applications in phononic and photonic crystals design will be presented....

  17. Optimal design of marine steam turbine

    International Nuclear Information System (INIS)

    Liu Chengyang; Yan Changqi; Wang Jianjun

    2012-01-01

    The marine steam turbine is one of the key pieces of equipment in a marine power plant. The trend towards higher-power steam turbines makes the turbine heavier and larger, which complicates its design and arrangement and seriously affects the ship's maneuverability. Therefore, it is necessary to apply optimization techniques to the design of the steam turbine in order to achieve the minimum weight or volume by finding the optimum combination of design parameters. The mathematical model of the marine steam turbine design calculation was established. The sensitivities of the condenser pressure, the power ratio of the HP turbine to the LP turbine, and the diameter-to-height ratio at the last stage of the LP turbine, all of which influence the weight of the marine steam turbine, were analyzed. The optimal design of the marine steam turbine, aiming at weight minimization while satisfying the structural and performance constraints, was carried out with a hybrid particle swarm optimization algorithm. The results show that the steam turbine weight is reduced by 3.13% with the optimization scheme. Finally, the optimization results were analyzed and the direction for further steam turbine design optimization was indicated. (authors)

  18. A design approach for integrating thermoelectric devices using topology optimization

    International Nuclear Information System (INIS)

    Soprani, S.; Haertel, J.H.K.; Lazarov, B.S.; Sigmund, O.; Engelbrecht, K.

    2016-01-01

    Highlights: • The integration of a thermoelectric (TE) cooler into a robotic tool is optimized. • Topology optimization is suggested as design tool for TE integrated systems. • A 3D optimization technique using temperature dependent TE properties is presented. • The sensitivity of the optimization process to the boundary conditions is studied. • A working prototype is constructed and compared to the model results. - Abstract: Efficient operation of thermoelectric devices strongly relies on the thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing the temperature of the cooled electronics. Several challenges are addressed: ensuring effective heat transfer from the load, minimizing the thermal resistances within the integrated system, maximizing the thermal protection of the cooled zone, and enhancing the conduction of the rejected heat to the oil well. The design method incorporates temperature dependent properties of the thermoelectric device and other materials. The 3D topology optimization model developed in this work was used to design a thermoelectric system, complete with insulation and heat sink, that was produced and tested. Good agreement between experimental results and

  19. Interactive Reliability-Based Optimal Design

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Siemaszko, A.

    1994-01-01

    Interactive design/optimization of large, complex structural systems is considered. The objective function is assumed to model the expected costs. The constraints are reliability-based and/or related to deterministic code requirements. Solution of this optimization problem is divided in four main...... tasks, namely finite element analyses, sensitivity analyses, reliability analyses and application of an optimization algorithm. In the paper it is shown how these four tasks can be linked effectively and how existing information on design variables, Lagrange multipliers and the Hessian matrix can...

  20. Performative Computation-aided Design Optimization

    Directory of Open Access Journals (Sweden)

    Ming Tang

    2012-12-01

    Full Text Available This article discusses a collaborative research and teaching project between the University of Cincinnati, Perkins+Will’s Tech Lab, and the University of North Carolina Greensboro. The primary investigation focuses on the simulation, optimization, and generation of architectural designs using performance-based computational design approaches. The projects examine various design methods, including relationships between building form, performance and the use of proprietary software tools for parametric design.

  1. Data-driven design optimization for composite material characterization

    Science.gov (United States)

    John G. Michopoulos; John C. Hermanson; Athanasios Iliopoulos; Samuel G. Lambrakos; Tomonari Furukawa

    2011-06-01

    The main goal of the present paper is to demonstrate the value of design optimization beyond its use for structural shape determination in the realm of the constitutive characterization of anisotropic material systems such as polymer matrix composites with or without damage. The approaches discussed are based on the availability of massive experimental data...

  2. Chemicals-Based Formulation Design: Virtual Experimentations

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul

    2011-01-01

    This paper presents a systematic procedure for virtual experimentations related to the design of liquid formulated products. All the experiments that need to be performed when designing a liquid formulated product (lotion), such as ingredients selection and testing, solubility tests, property mea...... on the design of an insect repellent lotion will show that the software is an essential instrument in decision making, and that it reduces time and resources since experimental efforts can be focused on one or few product alternatives....

  3. DESIGN OPTIMIZATION METHOD USED IN MECHANICAL ENGINEERING

    Directory of Open Access Journals (Sweden)

    SCURTU Iacob Liviu

    2016-11-01

    Full Text Available This paper presents an optimization study in mechanical engineering. The first part of the research describes the structural optimization method used, followed by a presentation of several optimization studies conducted in recent years. The second part of the paper presents the CAD modelling of an agricultural plough component. The beam of the plough is analysed using the finite element method. The plough component is meshed with solid elements, and a load case that mimics the working conditions of this type of agricultural equipment is created. After the FEA study of the model is done, the model is prepared for finding the optimal structural design. Mass reduction of the part is the criterion applied in this optimization study. The research concludes with the final results and the optimized shape of the model.

  4. Design optimization for active twist rotor blades

    Science.gov (United States)

    Mok, Ji Won

    This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient-based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst loading conditions. Through pre-processing, limitations of the proposed process have been studied. When limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation of the mesh generator, and the initial-point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design had been built and had shown a significant impact on vibration reduction, the proposed optimization process showed that the design could be improved significantly. The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to

  5. Antimicrobial peptides design by evolutionary multiobjective optimization.

    Directory of Open Access Journals (Sweden)

    Giuseppe Maccari

    Full Text Available Antimicrobial peptides (AMPs) are an abundant and wide class of molecules produced by many tissues and cell types in a variety of mammals, plant and animal species. Linear alpha-helical antimicrobial peptides are among the most widespread membrane-disruptive AMPs in nature, representing a particularly successful structural arrangement in innate defense. Recently, AMPs have received increasing attention as potential therapeutic agents, owing to their broad activity spectrum and their reduced tendency to induce resistance. The introduction of non-natural amino acids will be a key requisite in order to counter host resistance and increase the compounds' lifetime. In this work, the possibility to design novel AMP sequences with non-natural amino acids was achieved through a flexible computational approach, based on chemophysical profiles of peptide sequences. Quantitative structure-activity relationship (QSAR) descriptors were employed to code each peptide and train two statistical models in order to account for structural and functional properties of alpha-helical amphipathic AMPs. These models were then used as fitness functions for a multi-objective evolutionary algorithm, together with a set of constraints for the design of a series of candidate AMPs. Two ab initio designed peptides were synthesized and experimentally validated for antimicrobial activity, together with a series of control peptides. Furthermore, a well-known Cecropin-Mellitin alpha helical antimicrobial hybrid (CM18) was optimized by shortening its amino acid sequence while maintaining its activity, and a peptide with non-natural amino acids was designed and tested, demonstrating the higher activity achievable with artificial residues.
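
    A minimal sketch of the Pareto-dominance selection step that a multi-objective evolutionary peptide-design loop of this kind relies on. The two scoring functions below are hypothetical stand-ins for the trained QSAR models, and the descriptor encoding is an assumption for illustration, not the representation used in the record above.

        import random

        # Hypothetical stand-ins for two trained QSAR models (both maximized).
        def predicted_activity(descriptors):
            return sum(descriptors) / len(descriptors)

        def predicted_helicity(descriptors):
            return max(descriptors) - min(descriptors)

        def dominates(a, b):
            """True if score vector a is at least as good as b everywhere and strictly better somewhere."""
            return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

        def pareto_front(population):
            scored = [(p, (predicted_activity(p), predicted_helicity(p))) for p in population]
            front = []
            for p, sp in scored:
                if not any(dominates(sq, sp) for q, sq in scored if q is not p):
                    front.append(p)
            return front

        if __name__ == "__main__":
            random.seed(0)
            # Each candidate peptide is encoded here as a vector of descriptor values.
            population = [[random.random() for _ in range(5)] for _ in range(50)]
            print(f"{len(pareto_front(population))} non-dominated candidate peptides")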

  6. Two polynomial representations of experimental design

    OpenAIRE

    Notari, Roberto; Riccomagno, Eva; Rogantin, Maria-Piera

    2007-01-01

    In the context of algebraic statistics an experimental design is described by a set of polynomials called the design ideal. This, in turn, is generated by finite sets of polynomials. Two types of generating sets are mostly used in the literature: Groebner bases and indicator functions. We briefly describe them both, how they are used in the analysis and planning of a design and how to switch between them. Examples include fractions of full factorial designs and designs for mixture experiments.
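
    As a concrete, standard textbook illustration (not taken from the record above), both representations can be written out for the 2^2 full factorial with levels ±1 and its half-fraction; a sketch in LaTeX notation:

        For the full factorial design $\mathcal{D} = \{-1,+1\}^2$ the design ideal is
        \[
          I(\mathcal{D}) = \langle x_1^2 - 1,\; x_2^2 - 1 \rangle ,
        \]
        and the half-fraction $\mathcal{F} = \{(x_1,x_2) \in \mathcal{D} : x_1 x_2 = 1\}$
        is described either by adding a generator to the ideal,
        \[
          I(\mathcal{F}) = \langle x_1^2 - 1,\; x_2^2 - 1,\; x_1 x_2 - 1 \rangle ,
        \]
        or by its indicator function on $\mathcal{D}$,
        \[
          f(x_1,x_2) = \tfrac{1}{2}\,(1 + x_1 x_2),
        \]
        which equals $1$ on the points of the fraction and $0$ on the remaining points.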

  7. Fusion blanket design and optimization techniques

    International Nuclear Information System (INIS)

    Gohar, Y.

    2005-01-01

    In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, the dimensions of the different blanket zones, and the different requirements of the selected materials for a satisfactory performance are the main parameters that define the blanket performance. These parameters translate into a large number of variables and design constraints, which need to be considered simultaneously in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design while satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design techniques of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide state-of-the-art techniques and tools for performing blanket design and analysis. This report describes some of the BSDOS techniques and demonstrates their use. In addition, the use of the optimization technique of the BSDOS can result in a significant blanket performance enhancement and cost saving for the reactor design under consideration. In this report, examples are presented that utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design for demonstrating some of the BSDOS blanket design techniques.

  8. Controller Design Automation for Aeroservoelastic Design Optimization of Wind Turbines

    NARCIS (Netherlands)

    Ashuri, T.; Van Bussel, G.J.W.; Zaayer, M.B.; Van Kuik, G.A.M.

    2010-01-01

    The purpose of this paper is to integrate the controller design of wind turbines with structure and aerodynamic analysis and use the final product in the design optimization process (DOP) of wind turbines. To do that, the controller design is automated and integrated with an aeroelastic simulation

  9. Design Buildings Optimally: A Lifecycle Assessment Approach

    KAUST Repository

    Hosny, Ossama

    2013-01-01

    This paper structures a generic framework to support optimum design for multiple buildings in a desert environment. The framework targets an environmentally friendly design with minimum lifecycle cost, using Genetic Algorithms (GAs). GAs function through a set of success measures which evaluate the design, formulate a proper objective, and reflect possible tangible/intangible constraints. The framework optimizes the design and categorizes it under a certain environmental category at minimum Life Cycle Cost (LCC). It consists of three main modules: (1) a custom Building Information Model (BIM) for desert buildings with a compatibility checker as a central interactive database; (2) a system evaluator module to evaluate the proposed success measures for the design; and (3) a GA optimization module to ensure optimum design. The framework functions through three levels: the building component, integrated building, and multi-building levels. At the component level the design team should be able to select components in a designed sequence to ensure compatibility among the various components, while at the building level the team can relatively locate and orient each individual building. Finally, at the multi-building (compound) level the whole design can be evaluated using success measures of natural light, site capacity, shading impact on natural lighting, thermal change, visual access and energy saving. The framework, through genetic algorithms, optimizes the design by determining proper types of building components and relative building locations and orientations, which ensure categorizing the design under a specific category or meeting certain preferences at minimum lifecycle cost.

  10. Optimal design of robust piezoelectric unimorph microgrippers

    DEFF Research Database (Denmark)

    Ruiz, David; Díaz-Molina, Alex; Sigmund, Ole

    2018-01-01

    Topology optimization can be used to design piezoelectric actuators by simultaneous design of host structure and polarization profile. Subsequent micro-scale fabrication leads us to overcome important manufacturing limitations: difficulties in placing a piezoelectric layer on both top and bottom...

  11. Optimization of straight-sided spline design

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using different design modifications, that do not change the spline load carrying capacity, it is shown...

  12. Optimal control design for a solar greenhouse

    NARCIS (Netherlands)

    Ooteghem, van R.J.C.

    2007-01-01

    The research of this thesis was part of a larger project aiming at the design of a greenhouse and an associated climate control that achieves optimal crop production with sustainable instead of fossil energy. This so called solar greenhouse design extends a conventional greenhouse with an improved

  13. Performance-based Pareto optimal design

    NARCIS (Netherlands)

    Sariyildiz, I.S.; Bittermann, M.S.; Ciftcioglu, O.

    2008-01-01

    A novel approach for performance-based design is presented, where Pareto optimality is pursued. Design requirements may contain linguistic information, which is difficult to bring into computation or to estimate impartially and consistently from case to case. Fuzzy logic and soft computing are

  14. Autonomous entropy-based intelligent experimental design

    Science.gov (United States)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same
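
    A minimal numerical sketch of the selection rule described above: for each candidate experiment, estimate the entropy of the distribution of predicted outcomes (sampled from the current posterior over model parameters) and pick the candidate with the largest entropy. The measurement model, noise level and candidate grid below are illustrative assumptions, not the robotic-arm setup of the thesis.

        import numpy as np

        rng = np.random.default_rng(0)

        def predict(theta, x):
            """Hypothetical measurement model: outcome depends on an unknown amplitude theta."""
            return theta * np.sin(x)

        def outcome_entropy(x, theta_samples, noise_sd=0.05, bins=30):
            """Crude histogram estimate of the differential entropy of the predicted outcome at design x."""
            y = predict(theta_samples, x) + rng.normal(0.0, noise_sd, size=theta_samples.size)
            counts, edges = np.histogram(y, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log(p)) + np.log(edges[1] - edges[0])

        # Posterior samples of the unknown parameter, standing in for previously collected data
        theta_samples = rng.normal(1.0, 0.3, size=5000)

        # Candidate experiments: measurement locations on a grid
        candidates = np.linspace(0.0, np.pi, 50)
        entropies = [outcome_entropy(x, theta_samples) for x in candidates]
        best = candidates[int(np.argmax(entropies))]
        print(f"most informative candidate experiment: x = {best:.3f}")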

  15. Some experimental aspects of optimality theoretic pragmatics

    NARCIS (Netherlands)

    Blutner, R.; Németh T., E.; Bibok, K.

    2010-01-01

    The article has three main concerns: (i) it gives a concise introduction into optimality-theoretic pragmatics; (ii) it discusses the relation to alternative accounts (relevance theory and Levinson's theory of presumptive meanings); (iii) it reviews recent findings concerning the psychological

  16. Design optimization of shell-and-tube heat exchangers using single objective and multiobjective particle swarm optimization

    International Nuclear Information System (INIS)

    Elsays, Mostafa A.; Naguib Aly, M; Badawi, Alya A.

    2010-01-01

    The Particle Swarm Optimization (PSO) algorithm is used to optimize the design of shell-and-tube heat exchangers and determine the optimal feasible solutions so as to eliminate trial-and-error during the design process. The design formulation takes into account the area and the total annual cost of the heat exchangers as two objective functions, together with operating as well as geometrical constraints. The Nonlinear Constrained Single Objective Particle Swarm Optimization (NCSOPSO) algorithm is used to minimize and find the optimal feasible solution for each of the nonlinear constrained objective functions separately. Then, a novel Nonlinear Constrained Multi-objective Particle Swarm Optimization (NCMOPSO) algorithm is used to minimize and find the Pareto optimal solutions for both of the nonlinear constrained objective functions together. The experimental results show that the two algorithms are very efficient and fast and can accurately find the optimal feasible solutions of the shell-and-tube heat exchanger design optimization problem. (orig.)
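
    A minimal sketch of a penalty-based, nonlinearly constrained PSO of the general kind described above, applied to a toy two-variable problem. The objective, constraint and parameter values are illustrative assumptions and not the heat-exchanger cost model or the NCSOPSO/NCMOPSO formulations of the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def objective(x):
            # Toy stand-in for a cost function to be minimized.
            return (x[..., 0] - 3.0) ** 2 + (x[..., 1] - 2.0) ** 2

        def constraint_violation(x):
            # Toy nonlinear constraint g(x) = x0 * x1 - 4 >= 0, penalized when violated.
            return np.maximum(0.0, 4.0 - x[..., 0] * x[..., 1])

        def penalized(x, weight=1e3):
            return objective(x) + weight * constraint_violation(x) ** 2

        n_particles, n_dims, n_iter = 30, 2, 200
        lo, hi = 0.0, 10.0
        pos = rng.uniform(lo, hi, size=(n_particles, n_dims))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), penalized(pos)
        gbest = pbest[np.argmin(pbest_val)].copy()

        w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
        for _ in range(n_iter):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            val = penalized(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("best design found:", gbest, "objective:", objective(gbest))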

  17. HAMMLAB 1999 experimental control room: design - design rationale - experiences

    International Nuclear Information System (INIS)

    Foerdestroemmen, N. T.; Meyer, B. D.; Saarni, R.

    1999-01-01

    A presentation of the HAMMLAB 1999 experimental control room and the accumulated experience gathered in the areas of design and design rationale, as well as user experiences. It is concluded that the HAMMLAB 1999 experimental control room is a realistic, compact and efficient control room, well suited as an Advanced NPP Control Room (ml)

  18. Instrument design optimization with computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)

    2017-08-01

    Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Qweak experiment (2011-2012) is studied with Computational Fluid Dynamics (CFD) and the simulation results are compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.

  19. Design and optimization of thermoacoustic devices

    International Nuclear Information System (INIS)

    Babaei, Hadi; Siddiqui, Kamran

    2008-01-01

    Thermoacoustics deals with the conversion of heat energy into sound energy and vice versa. It is a new and emerging technology with strong potential for the development of sustainable and renewable energy systems by utilizing waste heat or solar energy. Although thermoacoustic devices are simple to fabricate, their design is very challenging. In the present study, a comprehensive design and optimization algorithm is developed for designing thermoacoustic devices. The unique feature of the present algorithm is its ability to design thermoacoustically-driven thermoacoustic refrigerators that can serve as sustainable refrigeration systems. In addition, new features based on the energy balance are also included to design individual thermoacoustic engines and acoustically-driven thermoacoustic refrigerators. As a case study, a thermoacoustically-driven thermoacoustic refrigerator has been designed and optimized based on the developed algorithm. The results from the algorithm are in good agreement with those obtained from the computer code DeltaE

  20. Chemical-Based Formulation Design: Virtual Experimentation

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul

    This paper presents a software tool, the virtual Product-Process Design laboratory (virtual PPD-lab), and the virtual experimental scenarios for the design/verification of consumer-oriented liquid formulated products where the software can be used. For example, the software can be employed for the design......, the additives and/or their mixtures (formulations). Therefore, the experimental resources can focus on a few candidate product formulations to find the best product. The virtual PPD-lab allows various options for experimentation related to the design and/or verification of the product. For example, the selection...... design, model adaptation). All of the above helps to perform virtual experiments by blending chemicals together and observing their predicted behaviour. The paper will highlight the application of the virtual PPD-lab in the design and/or verification of different consumer products (paint formulation...

  1. A Statistical Approach to Optimizing Concrete Mixture Design

    OpenAIRE

    Ahmad, Shamsad; Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3^3). A total of 27 concrete mixtures with three replicate...
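
    A small sketch of how a 3^3 full factorial plan like the one referenced above can be enumerated. The factor names and levels below are hypothetical placeholders, not the mixture proportions used in the study.

        from itertools import product

        # Hypothetical factors and levels for a 3^3 full factorial plan
        factors = {
            "water_cement_ratio": [0.40, 0.45, 0.50],
            "cementitious_content": [350, 400, 450],      # kg/m^3
            "fine_to_total_aggregate": [0.35, 0.40, 0.45],
        }

        # One run per combination of factor levels
        runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
        print(f"{len(runs)} trial mixtures")               # 3**3 = 27
        for i, run in enumerate(runs[:3], start=1):
            print(i, run)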

  2. Optimization and Design of Experimental Bipedal Robot

    Czech Academy of Sciences Publication Activity Database

    Zezula, P.; Grepl, Robert

    -, A1 (2005), s. 293-300 ISSN 1210-2717. [Mechatronics, Robotics and Biomechanics 2005. Třešť, 26.09.2005-29.09.2005] Institutional research plan: CEZ:AV0Z20760514 Keywords : walking machine * biped robot * computational modelling Subject RIV: JD - Computer Applications, Robotics

  3. The optimal design of UAV wing structure

    Science.gov (United States)

    Długosz, Adam; Klimek, Wiktor

    2018-01-01

    The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with a reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass, are defined. The paper presents an application of an in-house implementation of an evolutionary multi-objective algorithm to the optimization of the UAV wing structure. The values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model consisting of different composite materials is created. The adequacy of the numerical model is verified by results obtained from an experiment performed on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.

  4. Optimization of reload core design for PWR

    International Nuclear Information System (INIS)

    Shen Wei; Xie Zhongsheng; Yin Banghua

    1995-01-01

    A direct and efficient optimization technique has been developed for automatically optimizing the reloading of a PWR. The objective functions include maximization of the end-of-cycle (EOC) reactivity and maximization of the average discharge burnup. The fuel loading optimization and the burnable poison (BP) optimization are separated into two stages by using the Haling principle. In the first stage, the optimum fuel reloading pattern without BP is determined by the linear programming method using enrichments as the control variable, while in the second stage the optimum BP allocation is determined by the flexible tolerance method using the number of BP rods as the control variable. A practical and efficient PWR reloading optimization program based on the above theory has been encoded and successfully applied to the Qinshan Nuclear Power Plant (QNP) cycle 2 reloading design.

  5. Regression analysis as a design optimization tool

    Science.gov (United States)

    Perley, R.

    1984-01-01

    The optimization concepts are described in relation to an overall design process as opposed to a detailed, part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.
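
    A minimal sketch of the kind of loop sketched above: fit a regression-based response surface to a handful of analysed designs, then minimize the fitted surrogate within the design bounds to suggest the next design to analyse. The single design variable, the response values and the bounds are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Design-variable values already analysed and the corresponding objective values
        # (hypothetical data standing in for the results of the analysis step).
        x_eval = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
        f_eval = np.array([4.2, 2.9, 2.5, 2.8, 4.0])

        # Fit a quadratic response surface by least-squares regression.
        coeffs = np.polyfit(x_eval, f_eval, deg=2)
        surrogate = np.poly1d(coeffs)

        # Minimize the fitted surrogate within the design bounds; the true model
        # would then be re-analysed at the suggested point and the fit updated.
        res = minimize_scalar(surrogate, bounds=(0.1, 0.9), method="bounded")
        print(f"suggested next design point: x = {res.x:.3f}, predicted objective {res.fun:.3f}")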

  6. Design and optimization of food processing conditions

    OpenAIRE

    Silva, C. L. M.

    1996-01-01

    The main research objectives of the group are the design and optimization of food processing conditions. Most of the work already developed is on the use of mathematical modeling of transport phenomena and quantification of degradation kinetics as two tools to optimize the final quality of thermally processed food products. Recently, we initiated a project with the main goal of studying the effects of freezing and frozen storage on orange and melon juice pectinesterase activity and q...

  7. The optimization design of nuclear measurement teaching equipment

    International Nuclear Information System (INIS)

    Tang Rulong; Qiu Xiaoping

    2008-01-01

    So far, domestic student-oriented experimental nuclear measuring instruments are used only to measure object density, thickness or material level, and the source activity chosen is mostly about 10 mCi. This design proposes an optimization program adapted to the domestic situation. It discusses the radioactive source activity, the structural design of the sealed source, and the choice of the tested material in order to obtain an optimized program. The program uses a 1 mCi 137 Cs radioactive source to reduce the radiation dose, and the measurement function is improved so that the apparatus can measure density, thickness and material level. (authors)

  8. Optimal Design of Shock Tube Experiments for Parameter Inference

    KAUST Repository

    Bisetti, Fabrizio

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP based approach is used to estimate the expected information gain in the proposed experiments, and to select the best experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and extending experimental design methodology to the cases where the control parameters are noisy.
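
    A minimal nested Monte Carlo sketch of the expected-information-gain criterion underlying such designs, using a toy one-parameter, Arrhenius-like observable with Gaussian noise. The model, prior, noise level and candidate temperatures are illustrative assumptions; the polynomial chaos surrogates and MAP-based estimator of the paper are not reproduced.

        import numpy as np

        rng = np.random.default_rng(2)

        def observable(theta, T):
            """Toy ignition-delay-like observable: log tau = theta * 1000 / T."""
            return theta * 1000.0 / T

        def expected_information_gain(T, n_outer=400, n_inner=400, noise_sd=0.1):
            """Nested Monte Carlo estimate of the expected KL divergence from prior to posterior."""
            theta_outer = rng.normal(1.0, 0.2, size=n_outer)   # prior samples
            y = observable(theta_outer, T) + rng.normal(0.0, noise_sd, size=n_outer)
            theta_inner = rng.normal(1.0, 0.2, size=n_inner)
            log_lik = -0.5 * ((y - observable(theta_outer, T)) / noise_sd) ** 2
            # log marginal likelihood (evidence) for each simulated observation;
            # the Gaussian normalizing constant cancels between the two terms.
            diff = y[:, None] - observable(theta_inner[None, :], T)
            log_evid = np.log(np.mean(np.exp(-0.5 * (diff / noise_sd) ** 2), axis=1))
            return float(np.mean(log_lik - log_evid))

        candidates = [900.0, 1100.0, 1300.0, 1500.0]           # candidate temperatures [K]
        gains = {T: expected_information_gain(T) for T in candidates}
        best = max(gains, key=gains.get)
        print("estimated EIG per temperature:", gains, "-> choose T =", best)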

  9. Experimental toxicology: Issues of statistics, experimental design, and replication.

    Science.gov (United States)

    Briner, Wayne; Kirwan, Jeral

    2017-01-01

    The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis driven experiments with adequate sample sizes, randomization, and blind data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Dynamic optimization and adaptive controller design

    Science.gov (United States)

    Inamdar, S. R.

    2010-10-01

    In this work, a new type of controller is presented: an adaptive tracking controller that employs dynamic optimization to compute the current value of the controller action for the temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, consisting of the mass and heat balance equations, and then add the cooling system dynamics to eliminate input multiplicity. The initial design value is obtained using the local stability of the steady states, where the approach temperature for the cooling action is specified as both a steady state and a design specification. Later, a correction is made in the dynamics: the material balance is manipulated so that the feed concentration is used as a system parameter and an adaptive control measure, in order to avoid actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter-adaptive controller is presented. An important component of this mathematical framework is the generation of a reference trajectory, which forms an adaptive control measure.
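
    A minimal sketch of a two-state nonisothermal CSTR model of the kind referred to above: a mass balance on the reactant concentration and an energy balance on the reactor temperature with an Arrhenius rate. All parameter values are illustrative textbook-style assumptions, and the cooling-system dynamics mentioned in the abstract are not included.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative parameter values (not taken from the paper)
        q, V = 100.0, 100.0            # feed flow [L/min], reactor volume [L]
        Cf, Tf = 1.0, 350.0            # feed concentration [mol/L], feed temperature [K]
        k0, E_over_R = 7.2e10, 8750.0  # Arrhenius pre-exponential [1/min], E/R [K]
        dH, rho_cp = -5.0e4, 239.0     # heat of reaction [J/mol], rho*cp [J/(L K)]
        UA, Tc = 5.0e4, 300.0          # heat transfer coefficient [J/(min K)], coolant temperature [K]

        def cstr(t, x):
            C, T = x
            r = k0 * np.exp(-E_over_R / T) * C                 # reaction rate [mol/(L min)]
            dC = q / V * (Cf - C) - r                          # mass balance
            dT = (q / V * (Tf - T)
                  + (-dH) / rho_cp * r
                  + UA / (V * rho_cp) * (Tc - T))              # energy balance
            return [dC, dT]

        sol = solve_ivp(cstr, (0.0, 10.0), [0.5, 350.0], max_step=0.01)
        print(f"C = {sol.y[0, -1]:.3f} mol/L, T = {sol.y[1, -1]:.1f} K after 10 min")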

  11. Solid Rocket Motor Design Using Hybrid Optimization

    Directory of Open Access Journals (Sweden)

    Kevin Albarado

    2012-01-01

    Full Text Available A particle swarm/pattern search hybrid optimizer was used to drive a solid rocket motor modeling code to an optimal solution. The solid motor code models tapered motor geometries using analytical burn back methods by slicing the grain into thin sections along the axial direction. Grains with circular perforated stars, wagon wheels, and dog bones can be considered and multiple tapered sections can be constructed. The hybrid approach to optimization is capable of exploring large areas of the solution space through particle swarming, but is also able to climb “hills” of optimality through gradient-based pattern searching. A preliminary method for designing tapered internal geometry as well as tapered outer mold-line geometry is presented. A total of four optimization cases were performed. The first two case studies examine the design of motors to match a given regressive-progressive-regressive burn profile. The third case study examines the design of a neutrally burning right circular perforated grain (utilizing inner and external geometry tapering). The final case study examines the design of a linearly regressive burning profile for right circular perforated (tapered) grains.

  12. RO-75, Reverse Osmosis Plant Design Optimization and Cost Optimization

    International Nuclear Information System (INIS)

    Glueckstern, P.; Reed, S.A.; Wilson, J.V.

    1999-01-01

    1 - Description of problem or function: RO75 is a program for the optimization of the design and economics of one- or two-stage seawater reverse osmosis plants. 2 - Method of solution: RO75 evaluates the performance of the applied membrane module (productivity and salt rejection) at assumed operating conditions. These conditions include the site parameters - seawater salinity and temperature, the membrane module operating parameters - pressure and product recovery, and the membrane module predicted long-term performance parameters - lifetime and long-term flux decline. RO75 calculates the number of first and second stage (if applied) membrane modules needed to obtain the required product capacity and quality and evaluates the required pumping units and the power recovery turbine (if applied). 3 - Restrictions on the complexity of the problem: The program does not optimize or design the membrane properties and the internal structure and flow characteristics of the membrane modules; it assumes operating characteristics defined by the membrane manufacturers

  13. Three-dimensional shape optimization of a cemented hip stem and experimental validations.

    Science.gov (United States)

    Higa, Masaru; Tanino, Hiromasa; Nishimura, Ikuya; Mitamura, Yoshinori; Matsuno, Takeo; Ito, Hiroshi

    2015-03-01

    This study proposes novel optimized stem geometry with low stress values in the cement using a finite element (FE) analysis combined with an optimization procedure and experimental measurements of cement stress in vitro. We first optimized an existing stem geometry using a three-dimensional FE analysis combined with a shape optimization technique. One of the most important factors in the cemented stem design is to reduce stress in the cement. Hence, in the optimization study, we minimized the largest tensile principal stress in the cement mantle under a physiological loading condition by changing the stem geometry. As the next step, the optimized stem and the existing stem were manufactured to validate the usefulness of the numerical models and the results of the optimization in vitro. In the experimental study, strain gauges were embedded in the cement mantle to measure the strain in the cement mantle adjacent to the stems. The overall trend of the experimental study was in good agreement with the results of the numerical study, and we were able to reduce the largest stress by more than 50% in both shape optimization and strain gauge measurements. Thus, we could validate the usefulness of the numerical models and the results of the optimization using the experimental models. The optimization employed in this study is a useful approach for developing new stem designs.

  14. Evaluation of Frameworks for HSCT Design Optimization

    Science.gov (United States)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  15. Microstrip Antenna Design for Femtocell Coverage Optimization

    Directory of Open Access Journals (Sweden)

    Afaz Uddin Ahmed

    2014-01-01

    Full Text Available A microstrip antenna is designed for multielement antenna coverage optimization in a femtocell network. Interference is the foremost concern for cellular operators in vast commercial deployments of femtocells. Many techniques at the physical, data link and network layers have been analysed and developed to settle the interference issues. A multielement technique with self-configuration features is analysed here for coverage optimization of the femtocell. The paper also focuses on the implementation of the microstrip antenna for the multielement configuration. The antenna is designed for LTE Band 7 using a standard FR4 dielectric substrate. The performance of the proposed antenna in the femtocell application is discussed along with the results.

  16. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
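
    A minimal sketch of the two tools named above: a standard L4(2^3) orthogonal array and the larger-the-better signal-to-noise ratio used to rank factor settings. The replicated response values are hypothetical, and the array is the generic textbook L4, not a design from the report.

        import numpy as np

        # Standard L4(2^3) orthogonal array: 4 runs, 3 two-level factors (levels 1 and 2)
        L4 = np.array([
            [1, 1, 1],
            [1, 2, 2],
            [2, 1, 2],
            [2, 2, 1],
        ])

        # Hypothetical replicated responses for each run (larger is better)
        responses = np.array([
            [20.1, 19.8, 20.5],
            [24.0, 23.6, 24.3],
            [18.9, 19.2, 18.5],
            [22.7, 23.1, 22.4],
        ])

        def sn_larger_is_better(y):
            """Taguchi larger-the-better signal-to-noise ratio in dB, one value per run."""
            return -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

        sn = sn_larger_is_better(responses)
        for factor in range(L4.shape[1]):
            effect = [sn[L4[:, factor] == level].mean() for level in (1, 2)]
            print(f"factor {factor + 1}: mean S/N at level 1 = {effect[0]:.2f} dB, level 2 = {effect[1]:.2f} dB")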

  17. Considering RNAi experimental design in parasitic helminths.

    Science.gov (United States)

    Dalzell, Johnathan J; Warnock, Neil D; McVeigh, Paul; Marks, Nikki J; Mousley, Angela; Atkinson, Louise; Maule, Aaron G

    2012-04-01

    Almost a decade has passed since the first report of RNA interference (RNAi) in a parasitic helminth. Whilst much progress has been made with RNAi informing gene function studies in disparate nematode and flatworm parasites, substantial and seemingly prohibitive difficulties have been encountered in some species, hindering progress. An appraisal of current practices, trends and ideals of RNAi experimental design in parasitic helminths is both timely and necessary for a number of reasons: firstly, the increasing availability of parasitic helminth genome/transcriptome resources means there is a growing need for gene function tools such as RNAi; secondly, fundamental differences and unique challenges exist for parasite species which do not apply to model organisms; thirdly, the inherent variation in experimental design, and reported difficulties with reproducibility undermine confidence. Ideally, RNAi studies of gene function should adopt standardised experimental design to aid reproducibility, interpretation and comparative analyses. Although the huge variations in parasite biology and experimental endpoints make RNAi experimental design standardization difficult or impractical, we must strive to validate RNAi experimentation in helminth parasites. To aid this process we identify multiple approaches to RNAi experimental validation and highlight those which we deem to be critical for gene function studies in helminth parasites.

  18. Evolutionary optimization methods for accelerator design

    Science.gov (United States)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed optimization methods family. They possess many attractive features such as: ease of the implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe the GATool, evolutionary algorithm and the software package, used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on the problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained

  19. Topology Optimization - Engineering Contribution to Architectural Design

    Science.gov (United States)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2017-10-01

    The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed and parts that are not necessary from the objective point of view are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain, defined by the shape of a structure, is divided into sub-regions, to which different materials are assigned. During the design process material is relocated, but only within the selected region. The proposed idea has been inspired by architectural designs such as multi-material facades of buildings. The effectiveness of topology optimization is determined by the proper choice of numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are discrete mathematical idealizations of physical systems. The engineering implementation of Cellular Automata requires decomposition of the design domain into a uniform lattice of cells. It is assumed that the interaction between cells takes place only within the neighbouring cells. The interaction is governed by simple, local update rules, which are based on heuristics or physical laws. The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated on selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrated a significant influence of the material sub-region locations on the final topologies. The influence of the assumed volume fraction on the final topologies for multi-material structures is also observed and discussed. The results of the numerical calculations show that this approach produces different results as compared with the classical one
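
    A minimal sketch of a Cellular-Automaton-style local update on a density lattice: each cell adjusts its material density based only on a neighbourhood average of a sensitivity field, followed by a rescaling toward a prescribed volume fraction. The random sensitivity field and the update rule are illustrative placeholders, not the rule used in the paper; a real implementation would obtain the sensitivities from a finite element analysis.

        import numpy as np

        rng = np.random.default_rng(3)

        nx, ny, vol_frac = 40, 20, 0.5
        density = np.full((ny, nx), vol_frac)

        def neighbourhood_mean(field):
            """Mean over the von Neumann neighbourhood (cell + 4 neighbours), edges padded."""
            padded = np.pad(field, 1, mode="edge")
            return (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
                    + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0

        for it in range(50):
            # Placeholder sensitivity field; in practice this would come from FEA results.
            sensitivity = rng.random(density.shape) * density
            smoothed = neighbourhood_mean(sensitivity)
            # Local rule: move density toward cells with above-average smoothed sensitivity.
            density += 0.1 * np.sign(smoothed - smoothed.mean())
            density = np.clip(density, 0.0, 1.0)
            # Rescale to approximately maintain the prescribed volume fraction.
            density *= vol_frac * density.size / density.sum()
            density = np.clip(density, 0.0, 1.0)

        print(f"final volume fraction: {density.mean():.3f}")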

  20. Instrument design and optimization using genetic algorithms

    International Nuclear Information System (INIS)

    Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-01-01

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods
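
    A minimal sketch of a simple genetic algorithm (tournament selection, one-point crossover, uniform mutation) maximizing a toy figure of merit, standing in for the canonical GA described above. The figure of merit, the real-coded chromosome and all GA settings are illustrative assumptions, not the WASP performance model or the algorithm used at the ILL.

        import random

        random.seed(4)

        N_GENES, POP_SIZE, N_GEN = 6, 40, 100
        MUT_RATE, CX_RATE = 0.1, 0.9

        def figure_of_merit(genes):
            # Toy performance measure standing in for an instrument simulation; maximal at genes = 0.5.
            return sum(g * (1.0 - g) for g in genes)

        def tournament(pop):
            a, b = random.sample(pop, 2)
            return a if figure_of_merit(a) >= figure_of_merit(b) else b

        def crossover(p1, p2):
            if random.random() > CX_RATE:
                return p1[:]
            cut = random.randint(1, N_GENES - 1)
            return p1[:cut] + p2[cut:]

        def mutate(genes):
            return [random.random() if random.random() < MUT_RATE else g for g in genes]

        pop = [[random.random() for _ in range(N_GENES)] for _ in range(POP_SIZE)]
        for _ in range(N_GEN):
            pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP_SIZE)]

        best = max(pop, key=figure_of_merit)
        print(f"best figure of merit: {figure_of_merit(best):.4f}")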

  1. Instrument design and optimization using genetic algorithms

    Science.gov (United States)

    Hölzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-10-01

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of "nonstandard" magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.

  2. Design and volume optimization of space structures

    KAUST Repository

    Jiang, Caigui

    2017-07-21

    We study the design and optimization of statically sound and materially efficient space structures constructed by connected beams. We propose a systematic computational framework for the design of space structures that incorporates static soundness, approximation of reference surfaces, boundary alignment, and geometric regularity. To tackle this challenging problem, we first jointly optimize node positions and connectivity through a nonlinear continuous optimization algorithm. Next, with fixed nodes and connectivity, we formulate the assignment of beam cross sections as a mixed-integer programming problem with a bilinear objective function and quadratic constraints. We solve this problem with a novel and practical alternating direction method based on linear programming relaxation. The capability and efficiency of the algorithms and the computational framework are validated by a variety of examples and comparisons.

  3. Design and volume optimization of space structures

    KAUST Repository

    Jiang, Caigui; Tang, Chengcheng; Seidel, Hans-Peter; Wonka, Peter

    2017-01-01

    We study the design and optimization of statically sound and materially efficient space structures constructed by connected beams. We propose a systematic computational framework for the design of space structures that incorporates static soundness, approximation of reference surfaces, boundary alignment, and geometric regularity. To tackle this challenging problem, we first jointly optimize node positions and connectivity through a nonlinear continuous optimization algorithm. Next, with fixed nodes and connectivity, we formulate the assignment of beam cross sections as a mixed-integer programming problem with a bilinear objective function and quadratic constraints. We solve this problem with a novel and practical alternating direction method based on linear programming relaxation. The capability and efficiency of the algorithms and the computational framework are validated by a variety of examples and comparisons.

  4. Particle Swarm Optimization for Outdoor Lighting Design

    Directory of Open Access Journals (Sweden)

    Ana Castillo-Martinez

    2017-01-01

    Full Text Available Outdoor lighting is an essential service for modern life. However, the high influence of this type of facility on energy consumption makes it necessary to take extra care in the design phase. Therefore, this manuscript describes an algorithm to help lighting designers obtain, in an easy way, the best configuration parameters and improve energy efficiency, while ensuring a minimum level of overall uniformity. To make this possible, we used a particle swarm optimization (PSO) algorithm. These algorithms are well established, and are simple and effective for solving optimization problems. To take into account the parameters most influential on lighting and energy efficiency, 500 simulations were performed using DIALux software (4.10.0.2, DIAL, Ludenscheid, Germany). Next, the relation between these parameters was studied using data mining software. Subsequently, we conducted two experiments to set the parameters that enable the best algorithm configuration, in order to improve the efficiency of the proposed optimization process.

  5. Design activities of a fusion experimental breeder

    International Nuclear Information System (INIS)

    Huang, J.; Feng, K.; Sheng, G.

    1999-01-01

    The fusion reactor design studies in China are supported by a fusion-fission hybrid reactor research program. The purpose of this program is to explore the potential near-term application of fusion energy to support long-term fusion energy development on the one hand and fission energy development on the other. During 1992-1996, a detailed, consistent and integral conceptual design of a Fusion Experimental Breeder (FEB) was completed. Beginning in 1996, a further design study towards an Engineering Outline Design of the FEB, the FEB-E, was started. The design activities are briefly described. (author)

  6. Design activities of a fusion experimental breeder

    International Nuclear Information System (INIS)

    Huang, J.; Feng, K.; Sheng, G.

    2001-01-01

    The fusion reactor design studies in China are supported by a fusion-fission hybrid reactor research program. The purpose of this program is to explore the potential near-term application of fusion energy to support long-term fusion energy development on the one hand and fission energy development on the other. During 1992-1996, a detailed, consistent and integral conceptual design of a Fusion Experimental Breeder (FEB) was completed. Beginning in 1996, a further design study towards an Engineering Outline Design of the FEB, the FEB-E, was started. The design activities are briefly described. (author)

  7. Phononic band gap structures as optimal designs

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Sigmund, Ole

    2003-01-01

    In this paper we use topology optimization to design phononic band gap structures. We consider 2D structures subjected to periodic loading and obtain the distribution of two materials with high contrast in material properties that gives the minimal vibrational response of the structure. Both in...

  8. Strength optimized designs of thermoelastic structures

    DEFF Research Database (Denmark)

    Pedersen, Pauli; Pedersen, Niels Leergaard

    2010-01-01

    For thermoelastic structures the same optimal design does not simultaneously lead to minimum compliance and maximum strength. Compliance may be a questionable objective and focus for the present paper is on the important aspect of strength, quantified as minimization of the maximum von Mises stre...... loads are appended....

  9. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    Science.gov (United States)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min programming problem is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties in calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed due to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in the future observation data, as well as of the uncertainty sources, on potential pumping and observation locations.
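
    A minimal sketch of the Gauss-Hermite step mentioned above: approximating the expectation of a nonlinear function of a Gaussian-distributed future observation by quadrature. The integrand and the Gaussian parameters are illustrative; the full Box-Hill EED and BMA machinery of the abstract is not reproduced.

        import numpy as np

        def gauss_hermite_expectation(func, mean, std, order=20):
            """Approximate E[func(Y)] for Y ~ N(mean, std^2) with Gauss-Hermite quadrature."""
            nodes, weights = np.polynomial.hermite.hermgauss(order)
            y = mean + np.sqrt(2.0) * std * nodes          # change of variables to the predictive Gaussian
            return float(np.sum(weights * func(y)) / np.sqrt(np.pi))

        # Example: expectation of a nonlinear integrand under a hypothetical predictive distribution
        integrand = lambda y: np.log1p(y**2)
        value = gauss_hermite_expectation(integrand, mean=1.0, std=0.5)
        print(f"Gauss-Hermite estimate: {value:.4f}")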

  10. Optimization and Inverse Design of Pump Impeller

    International Nuclear Information System (INIS)

    Miyauchi, S; Matsumoto, H; Sano, M; Kassai, N; Zhu, B; Luo, X; Piao, B

    2012-01-01

    As for pump impellers, the meridional flow channel and the blade-to-blade flow channel, which are relatively independent of each other but both greatly affect performance, are designed in parallel, with optimization design used for the former and inverse design for the latter. To verify this new design method, a mixed-flow impeller was made. Next, we use Tani's inverse design method for the blade loading in the inverse design. It is flexible enough to change the deceleration rate freely and over a wide range, and it can express in a unified way the rear blade loading of various methods by NACA, Zangeneh and Stratford. We controlled the deceleration rate by the shape parameter m, and its value became almost the same as Tani's recommended value for the laminar airfoil.

  11. Transportation package design using numerical optimization

    International Nuclear Information System (INIS)

    Harding, D.C.; Witkowski, W.R.

    1993-01-01

    Since the design of transportation packages involves a complex coupling of structural, thermal and radiation shielding analyses and must follow very strict design constraints, numerical optimization provides the potential for more efficient container designs. In numerical optimization, the requirements of the design problem are mathematically formulated through the use of an objective function and constraints. The objective function(s), e.g., package weight, cost, volume, or combination thereof, is the function to be minimized or maximized by altering a set of design variables that define the package's shape and dimensions. Constraints are limitations on the performance of the system, such as resisting structural and thermal accident environments. Two constraints defined for an example wire mesh composite Type B package are: 1) deformation in the containment vessel seal region remains small enough throughout the 10 CFR-71 accident conditions to meet containment criteria, and 2) the elastomeric seal region remains below its operational temperature limit to guarantee seal integrity in the fire environment. The first constraint of a minimum energy absorbing layer thickness is evaluated with finite element analyses of the proposed dynamic crush accident criteria. The second constraint is evaluated with a 1-D transient thermal finite difference code parametrized for variable composite layer thicknesses, and is integrated with the optimization process. (J.P.N.)

  12. Optimal Design of Laminated Composite Beams

    DEFF Research Database (Denmark)

    Blasques, José Pedro Albergaria Amaral

    model for the analysis of laminated composite beams is proposed. The structural analysis is performed in a beam finite element context. The development of a finite element based tool for the analysis of the cross section stiffness properties is described. The resulting beam finite element formulation...... is able to account for the effects of material anisotropy and inhomogeneity in the global response of the beam. Beam finite element models allow for a significant reduction in problem size and are therefore an efficient alternative in computationally intensive applications like optimization frameworks...... design of laminated composite beams. The devised framework is applied in the optimal design of laminated composite beams with different cross section geometries and subjected to different load cases. Design criteria such as beam stiffness, weight, magnitude of the natural frequencies of vibration...

  13. Structural Design Optimization On Thermally Induced Vibration

    International Nuclear Information System (INIS)

    Gu, Yuanxian; Chen, Biaosong; Zhang, Hongwu; Zhao, Guozhong

    2002-01-01

    A numerical method of design optimization for structural thermally induced vibration is studied in this paper and implemented in the application software JIFEX. Direct and adjoint methods of sensitivity analysis for thermally induced vibration coupled with both linear and nonlinear transient heat conduction are proposed. Based on the finite element method, the structural linear dynamics is treated simultaneously with the coupled linear and nonlinear transient heat conduction. In the thermal analysis model, the nonlinear heat conduction considered results from radiation and temperature-dependent materials. The sensitivity analysis of transient linear and nonlinear heat conduction is performed with the precise time integration method, and the sensitivity analysis of structural transient dynamics is then performed by the Newmark method. Both the direct method and the adjoint method are employed to derive the sensitivity equations of thermal vibration, with two adjoint vectors corresponding to the structure and to heat conduction, respectively. The coupling effect of heat conduction on thermal vibration in the sensitivity analysis is investigated in particular. With the coupled sensitivity analysis, the optimization model is constructed and solved by a sequential linear programming or sequential quadratic programming algorithm. The proposed methods have been implemented in the structural design optimization software JIFEX, and numerical examples are given to illustrate the methods and their use for design optimization on thermally induced vibration.
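
    For reference, a minimal sketch of the Newmark time integration step mentioned above (average-acceleration variant) is given below for a two-degree-of-freedom system; the mass, damping, and stiffness matrices and the load history are invented and are not related to the JIFEX examples.

```python
# Minimal Newmark time integration (average acceleration: beta=1/4, gamma=1/2)
# for M u'' + C u' + K u = f(t).  The 2-DOF system and load are invented.
import numpy as np


def newmark(M, C, K, f, u0, v0, dt, beta=0.25, gamma=0.5):
    n_steps, n_dof = f.shape
    u = np.zeros((n_steps, n_dof)); v = np.zeros_like(u); a = np.zeros_like(u)
    u[0], v[0] = u0, v0
    a[0] = np.linalg.solve(M, f[0] - C @ v0 - K @ u0)
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    for n in range(n_steps - 1):
        rhs = (f[n + 1]
               + M @ (u[n] / (beta * dt ** 2) + v[n] / (beta * dt)
                      + (0.5 / beta - 1.0) * a[n])
               + C @ (gamma / (beta * dt) * u[n] + (gamma / beta - 1.0) * v[n]
                      + dt * (0.5 * gamma / beta - 1.0) * a[n]))
        u[n + 1] = np.linalg.solve(K_eff, rhs)
        a[n + 1] = ((u[n + 1] - u[n]) / (beta * dt ** 2)
                    - v[n] / (beta * dt) - (0.5 / beta - 1.0) * a[n])
        v[n + 1] = v[n] + dt * ((1.0 - gamma) * a[n] + gamma * a[n + 1])
    return u, v, a


M = np.diag([2.0, 1.0])
K = np.array([[6.0, -2.0], [-2.0, 4.0]])
C = 0.02 * K                                   # light stiffness-proportional damping
t = np.linspace(0.0, 5.0, 501)
f = np.outer(np.sin(2.0 * t), [1.0, 0.0])      # stand-in for a thermally induced load
u, v, a = newmark(M, C, K, f, np.zeros(2), np.zeros(2), t[1] - t[0])
print(u[-1])
```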

  14. Aircraft family design using enhanced collaborative optimization

    Science.gov (United States)

    Roth, Brian Douglas

    Significant progress has been made toward the development of multidisciplinary design optimization (MDO) methods that are well-suited to practical large-scale design problems. However, opportunities exist for further progress. This thesis describes the development of enhanced collaborative optimization (ECO), a new decomposition-based MDO method. To support the development effort, the thesis offers a detailed comparison of two existing MDO methods: collaborative optimization (CO) and analytical target cascading (ATC). This aids in clarifying their function and capabilities, and it provides inspiration for the development of ECO. The ECO method offers several significant contributions. First, it enhances communication between disciplinary design teams while retaining the low-order coupling between them. Second, it provides disciplinary design teams with more authority over the design process. Third, it resolves several troubling computational inefficiencies that are associated with CO. As a result, ECO provides significant computational savings (relative to CO) for the test cases and practical design problems described in this thesis. New aircraft development projects seldom focus on a single set of mission requirements. Rather, a family of aircraft is designed, with each family member tailored to a different set of requirements. This thesis illustrates the application of decomposition-based MDO methods to aircraft family design. This represents a new application area, since MDO methods have traditionally been applied to multidisciplinary problems. ECO offers aircraft family design the same benefits that it affords to multidisciplinary design problems. Namely, it simplifies analysis integration, it provides a means to manage problem complexity, and it enables concurrent design of all family members. In support of aircraft family design, this thesis introduces a new wing structural model with sufficient fidelity to capture the tradeoffs associated with component

  15. Optimization and characterization of liposome formulation by mixture design.

    Science.gov (United States)

    Maherani, Behnoush; Arab-tehrany, Elmira; Kheirolomoom, Azadeh; Reshetov, Vadzim; Stebe, Marie José; Linder, Michel

    2012-02-07

    This study presents the application of the mixture design technique to develop an optimal liposome formulation by varying the type and percentage of the different lipids (DOPC, POPC and DPPC) in the liposome composition. Ten lipid mixtures were generated by the simplex-centroid design technique and liposomes were prepared by the extrusion method. Liposomes were characterized with respect to size, phase transition temperature, ζ-potential, lamellarity, fluidity and efficiency in loading calcein. The results were then applied to estimate the coefficients of the mixture design model and to find the optimal lipid composition with improved entrapment efficiency, size, transition temperature, fluidity and ζ-potential of the liposomes. The response optimization of the experiments yielded a liposome formulation of 46% DOPC, 12% POPC and 42% DPPC. The optimal liposome formulation had an average diameter of 127.5 nm, a phase-transition temperature of 11.43 °C, a ζ-potential of -7.24 mV, a fluidity 1/P(TMA-DPH) value of 2.87 and an encapsulation efficiency of 20.24%. The experimental results of the characterization of the optimal liposome formulation were in good agreement with those predicted by the mixture design technique.
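
    A small sketch of how the simplex-centroid candidate mixtures can be enumerated for three components is given below; the component names follow the abstract, but the code is generic, and the augmentation to the ten mixtures used in the study is not reproduced.

```python
# Enumerate the canonical simplex-centroid mixtures for three components:
# the three pure lipids, the three 50/50 binary blends, and the centroid.
from itertools import combinations

components = ["DOPC", "POPC", "DPPC"]


def simplex_centroid(names):
    points = []
    for r in range(1, len(names) + 1):
        for subset in combinations(range(len(names)), r):
            x = [0.0] * len(names)
            for i in subset:
                x[i] = 1.0 / r            # equal shares over the chosen subset
            points.append(x)
    return points


for point in simplex_centroid(components):
    print({name: round(v, 3) for name, v in zip(components, point)})
```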

  16. Optimization and design of an aircraft's morphing wing-tip demonstrator for drag reduction at low speeds, Part II - Experimental validation using Infra-Red transition measurement from Wind Tunnel tests

    Directory of Open Access Journals (Sweden)

    Andreea Koreanschi

    2017-02-01

    Full Text Available In the present paper, an ‘in-house’ genetic algorithm was numerically and experimentally validated. The genetic algorithm was applied to an optimization problem for improving the aerodynamic performance of an aircraft wing tip through upper-surface morphing. The optimization was performed for 16 flight cases expressed in terms of various combinations of speeds, angles of attack and aileron deflections. The displacements resulting from the optimization were used during the wind tunnel tests of the wing-tip demonstrator to control the actuators that change the upper-surface shape of the wing. The results of the optimization of the flow behavior for the airfoil upper-surface morphing problem were validated against wind tunnel experimental transition results obtained with infra-red thermography on the wing-tip demonstrator. The validation proved that the 2D numerical optimization using the ‘in-house’ genetic algorithm was an appropriate tool for improving various aspects of a wing’s aerodynamic performance.

  17. High-efficiency design optimization of a centrifugal pump

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Man Woong; Ma, Sang Bum; Shim, Hyeon Seok; Kim, Kwang Yong [Dept. of Mechanical Engineering, Inha University, Incheon (Korea, Republic of)

    2016-09-15

    Design optimization of a centrifugal pump with backward-curved blades and a specific speed of 150 has been performed to improve the hydraulic performance of the pump using surrogate modeling and three-dimensional steady Reynolds-averaged Navier-Stokes analysis. The shear stress transport model was used for the analysis of turbulence. Four geometric variables defining the blade hub inlet angle, hub contours, blade outlet angle, and blade angle profile of the impeller were selected as design variables, and the total efficiency of the pump at the design flow rate was set as the objective function for the optimization. Thirty-six design points were chosen using Latin hypercube sampling, and three different surrogate models were constructed using the objective function values calculated at these design points. The optimal point was searched on each constructed surrogate model using sequential quadratic programming. The optimum designs of the centrifugal pump predicted by the surrogate models show considerable increases in efficiency compared to the reference design. The performance of the best optimum design was validated against experimental data for total efficiency and head.
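
    The surrogate-based loop described above can be sketched generically as follows: sample the design space with Latin hypercube sampling, fit a surrogate to the expensive objective, and search the surrogate with sequential quadratic programming. The toy efficiency function below stands in for the RANS evaluation, and the radial basis function surrogate is only one of several possible surrogate choices.

```python
# Generic surrogate-based loop: Latin hypercube sample, fit a surrogate to a
# (here, toy) efficiency function, then search the surrogate with SQP.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize


def efficiency(x):                 # toy stand-in for a RANS efficiency evaluation
    return -((x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] - 0.6) ** 2) + 0.9


X = qmc.LatinHypercube(d=2, seed=0).random(36)     # 36 points in [0, 1]^2
y = efficiency(X)
surrogate = RBFInterpolator(X, y)                  # radial basis function surrogate

result = minimize(lambda x: -surrogate(x[None, :])[0],
                  x0=np.array([0.5, 0.5]),
                  bounds=[(0.0, 1.0), (0.0, 1.0)],
                  method="SLSQP")
print("surrogate optimum:", result.x, "predicted efficiency:", -result.fun)
```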

  18. Experimental design research approaches, perspectives, applications

    CERN Document Server

    Stanković, Tino; Štorga, Mario

    2016-01-01

    This book presents a new, multidisciplinary perspective on and paradigm for integrative experimental design research. It addresses various perspectives on methods, analysis and overall research approach, and how they can be synthesized to advance understanding of design. It explores the foundations of experimental approaches and their utility in this domain, and brings together analytical approaches to promote an integrated understanding. The book also investigates where these approaches lead to and how they link design research more fully with other disciplines (e.g. psychology, cognition, sociology, computer science, management). Above all, the book emphasizes the integrative nature of design research in terms of the methods, theories, and units of study—from the individual to the organizational level. Although this approach offers many advantages, it has inherently led to a situation in current research practice where methods are diverging and integration between individual, team and organizational under...

  19. Design, Fabrication, and Optimization of Jatropha Sheller

    Directory of Open Access Journals (Sweden)

    Richard P. TING

    2012-07-01

    Full Text Available A study designed, fabricated, and optimized the performance of a jatropha sheller consisting of a mainframe, a rotary cylinder, a stationary cylinder, and a transmission system. The evaluation and optimization considered moisture content, clearance, and roller speed as independent parameters, while the responses comprised recovery, bulk density factor, shelling capacity, energy utilization of the sheller, whole kernel recovery, oil recovery, and energy utilization by the extruder. Moisture content did not affect the response variables. Clearance affected all response variables except the energy utilization of the extruder. Roller speed affected shelling capacity, whole kernel recovery, and energy utilization of the extruder. Optimization resulted in operating conditions of 9.5% wb moisture content, a clearance of 6 mm, and a roller speed of 750 rpm.

  20. FPGA fabric-specific optimization for RTL design

    International Nuclear Information System (INIS)

    Perwaiz, A.; Khan, S.A.

    2010-01-01

    This paper proposes a technique tailored to the optimization requirements of a particular family of Field Programmable Gate Arrays (FPGAs). As FPGAs have introduced reconfigurable black boxes, there is a need to perform optimization across the FPGA slice fabric in order to achieve optimum performance. Although Register Transfer Level (RTL) Hardware Description Language (HDL) code should be technology independent, in many design instances it is imperative to understand the target technology, especially when the target device embeds dedicated arithmetic blocks. No matter what the degree of optimization of the algorithm is, the configuration of the target device plays an important role as far as device utilization and path delays are concerned. Index Terms: Field Programmable Gate Arrays (FPGA), Compression Tree, Bit Width Reduction, Look Ahead Pipelining. (author)

  1. Design analysis for optimal calibration of diffusivity in reactive multilayers

    KAUST Repository

    Vohra, Manav

    2017-05-29

    Calibration of the uncertain Arrhenius diffusion parameters for quantifying mixing rates in Zr–Al nanolaminate foils has previously been performed in a Bayesian setting [M. Vohra, J. Winokur, K.R. Overdeep, P. Marcello, T.P. Weihs, and O.M. Knio, Development of a reduced model of formation reactions in Zr–Al nanolaminates, J. Appl. Phys. 116(23) (2014): Article No. 233501]. The parameters were inferred in a low-temperature, homogeneous ignition regime, and a high-temperature self-propagating reaction regime. In this work, we extend the analysis to determine optimal experimental designs that would provide the best data for inference. We employ a rigorous framework that quantifies the expected information gain in an experiment, and find the optimal design conditions using Monte Carlo techniques, sparse quadrature, and polynomial chaos surrogates. For the low-temperature regime, we find the optimal foil heating rate and pulse duration, and confirm through simulation that the optimal design indeed leads to sharp posterior distributions of the diffusion parameters. For the high-temperature regime, we demonstrate the potential for increasing the expected information gain concerning the posteriors by increasing the sample size and reducing the uncertainty in measurements. Moreover, posterior marginals are also obtained to verify favourable experimental scenarios.
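
    A hedged sketch of the nested Monte Carlo estimator of expected information gain that underlies this kind of design ranking is shown below; the Arrhenius-like forward model, the priors, and the noise level are invented, and the study itself accelerates the estimate with sparse quadrature and polynomial chaos surrogates rather than brute-force sampling.

```python
# Nested Monte Carlo estimate of the expected information gain (EIG) of a
# candidate design condition d; the forward model, prior, and noise are toys.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)
SIGMA = 0.05                                    # assumed observation noise


def forward(theta, d):
    """Toy Arrhenius-type response at design condition d (e.g. a temperature)."""
    a, e = theta[..., 0], theta[..., 1]
    return a * np.exp(-e / d)


def log_lik(y, theta, d):
    mu = forward(theta, d)
    return -0.5 * ((y - mu) / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2.0 * np.pi))


def sample_prior(n):
    return np.column_stack([rng.uniform(0.5, 1.5, n), rng.uniform(0.5, 2.0, n)])


def eig(d, n_outer=1000, n_inner=1000):
    theta_outer = sample_prior(n_outer)
    y = forward(theta_outer, d) + SIGMA * rng.standard_normal(n_outer)
    theta_inner = sample_prior(n_inner)
    ll_cond = log_lik(y, theta_outer, d)
    ll_marg = np.array([logsumexp(log_lik(yi, theta_inner, d)) - np.log(n_inner)
                        for yi in y])
    return np.mean(ll_cond - ll_marg)


for d in [1.0, 2.0, 4.0]:
    print(f"design condition d={d}: EIG ~ {eig(d):.3f}")
```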

  2. Theoretical and experimental studies for optimization of PCRV top closures

    International Nuclear Information System (INIS)

    Ottosen, N.S.; Andersen, S.I.

    1975-01-01

    The results from the remaining part of the parameter study and the preparations for the verification of an optimized design are presented. Three models have been made in the same scale and with the same depth to span ratio α as the low LM-3 model from the first investigation, i.e. α=0.35. The model LM-5 was provided with reinforcement in the tensile zone, the upper part of the closure. This reinforcement did not influence the stresses and strains in the load carrying concrete, and the dome failed at the same pressure as in the unreinforced model LM-3. However, the closure did not disintegrate, but failed due to large overall deformations causing seal leakage. In the model LM-6, the inverted dome, which is formed at higher loads as demonstrated in LM-3, was reinforced perpendicular to the supposed middle surface. This reinforcement proved to be effective, giving the dome a higher ultimate load capacity. The LM-6 test stopped due to a circumferential crack in the flange. Finally, the unreinforced LM-7 closure was tested to failure. Apart from minor changes in the flange, LM-7 was identical to LM-3 except for the excavated upper part of the concrete, which in LM-3 formed the heavily cracked tensile zone. The ultimate load and the failure mode observed for this closure were the same as for the LM-3. The experimental results are compared to finite element calculations, in which plasticity and cracking of the concrete are taken into account, and the influence of different material models for the concrete is investigated. A unique failure criterion, which includes failure of the concrete for both tensile and compressive stresses in the same mathematical expression, is proposed. Based on the results obtained from the parameter study, a new closure design is proposed, which is optimized with respect to the requirements at service conditions and ultimate load

  3. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
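
    The Gaussian-approximation information metrics mentioned above can be sketched directly from parameter ensembles; in the toy example below the "posterior" ensemble is a made-up stand-in for an EnKF update, and only the Shannon entropy difference and the relative entropy are shown.

```python
# Gaussian-approximation information metrics computed from ensembles; the
# "posterior" ensemble below is a made-up stand-in for an EnKF update.
import numpy as np


def gaussian_stats(ens):
    """ens: (n_members, n_params) -> sample mean and covariance."""
    return ens.mean(axis=0), np.cov(ens, rowvar=False)


def entropy_difference(prior_ens, post_ens):
    """Shannon entropy decrease of the Gaussian approximation."""
    _, c0 = gaussian_stats(prior_ens)
    _, c1 = gaussian_stats(post_ens)
    return 0.5 * (np.linalg.slogdet(c0)[1] - np.linalg.slogdet(c1)[1])


def relative_entropy(prior_ens, post_ens):
    """KL divergence of the posterior from the prior (Gaussian approximation)."""
    m0, c0 = gaussian_stats(prior_ens)
    m1, c1 = gaussian_stats(post_ens)
    k = len(m0)
    c0_inv = np.linalg.inv(c0)
    dm = m0 - m1
    return 0.5 * (np.trace(c0_inv @ c1) + dm @ c0_inv @ dm - k
                  + np.linalg.slogdet(c0)[1] - np.linalg.slogdet(c1)[1])


rng = np.random.default_rng(2)
prior = rng.normal(0.0, 1.0, size=(200, 3))            # 200 members, 3 parameters
post = 0.5 * prior + 0.1 * rng.normal(size=(200, 3))   # stand-in for an update
print(entropy_difference(prior, post), relative_entropy(prior, post))
```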

  4. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics - Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) and provides a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, a singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  5. Robust Optimal Design of Quantum Electronic Devices

    Directory of Open Access Journals (Sweden)

    Ociel Morales

    2018-01-01

    Full Text Available We consider the optimal design of a sequence of quantum barriers, in order to manufacture an electronic device at the nanoscale such that the dependence of its transmission coefficient on the bias voltage is linear. The technique presented here is easily adaptable to other response characteristics. There are two distinguishing features of our approach. First, the transmission coefficient is determined using a semiclassical approximation, so we can explicitly compute the gradient of the objective function. Second, in contrast with earlier treatments, manufacturing uncertainties are incorporated in the model through random variables; the optimal design problem is formulated in a probabilistic setting and then solved using a stochastic collocation method. As a measure of robustness, a weighted sum of the expectation and the variance of a least-squares performance metric is considered. Several simulations illustrate the proposed technique, which shows an improvement in accuracy of over 69% with respect to brute-force, Monte-Carlo-based methods.
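
    A minimal sketch of the robust-design idea, assuming a scalar Gaussian manufacturing perturbation handled by Gauss-Hermite collocation, is given below; the "transmission model" is a toy surrogate rather than a semiclassical solver, and the weighting of mean and variance is arbitrary.

```python
# Robust objective = mean + weight * variance of a least-squares metric,
# averaged over a Gaussian manufacturing perturbation by Gauss-Hermite
# collocation.  The transmission model is a toy surrogate.
import numpy as np
from scipy.optimize import minimize

V = np.linspace(0.1, 1.0, 10)                  # bias voltages
TARGET = 0.2 + 0.6 * V                         # desired linear response


def transmission(widths, xi):
    """Toy transmission vs. bias for two barrier widths perturbed by xi."""
    w1, w2 = widths + xi
    return 1.0 / (1.0 + np.exp(-(V - w1) / max(w2, 1e-3)))


def performance(widths, xi):
    return np.sum((transmission(widths, xi) - TARGET) ** 2)


def robust_objective(widths, sigma=0.02, weight=1.0, n_nodes=5):
    nodes, wts = np.polynomial.hermite_e.hermegauss(n_nodes)   # N(0,1) nodes
    wts = wts / np.sqrt(2.0 * np.pi)
    vals = np.array([performance(widths, sigma * x) for x in nodes])
    mean = np.sum(wts * vals)
    var = np.sum(wts * (vals - mean) ** 2)
    return mean + weight * var


result = minimize(robust_objective, x0=np.array([0.4, 0.3]),
                  bounds=[(0.1, 0.9), (0.05, 0.5)], method="L-BFGS-B")
print(result.x, result.fun)
```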

  6. Optimal experiment design for magnetic resonance fingerprinting.

    Science.gov (United States)

    Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L

    2016-08-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
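
    The Cramér-Rao bound computation at the core of such a framework can be sketched with a toy signal model (not a Bloch simulation): build the Jacobian of the signal with respect to the tissue parameters, form the Fisher information for white Gaussian noise, and compare candidate flip-angle schedules by the resulting variance bounds.

```python
# Cramér-Rao bound comparison of two candidate flip-angle schedules using a
# toy signal model (not a Bloch simulation) and white Gaussian noise.
import numpy as np

SIGMA = 0.01                                       # assumed noise standard deviation


def signal(theta, flips):
    """Toy magnetization signal; theta = (T1-like, T2-like), flips in radians."""
    t1, t2 = theta
    n = np.arange(1, len(flips) + 1)
    return np.sin(flips) * np.exp(-n * 0.01 / t2) * (1.0 - np.exp(-n * 0.01 / t1))


def crb(theta, flips, eps=1e-6):
    J = np.zeros((len(flips), len(theta)))
    for j in range(len(theta)):                    # central finite differences
        dp = np.array(theta, float); dm = np.array(theta, float)
        dp[j] += eps; dm[j] -= eps
        J[:, j] = (signal(dp, flips) - signal(dm, flips)) / (2.0 * eps)
    fisher = J.T @ J / SIGMA ** 2
    return np.linalg.inv(fisher)


theta = (1.0, 0.1)                                 # nominal tissue parameters
schedules = {"constant": np.full(200, np.deg2rad(30.0)),
             "ramped": np.deg2rad(np.linspace(5.0, 70.0, 200))}
for name, flips in schedules.items():
    print(name, "CRB std bounds:", np.sqrt(np.diag(crb(theta, flips))))
```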

  7. Optimality and Plausibility in Language Design

    Directory of Open Access Journals (Sweden)

    Michael R. Levot

    2016-12-01

    Full Text Available The Minimalist Program in generative syntax has been the subject of much rancour, a good proportion of it stoked by Noam Chomsky’s suggestion that language may represent “a ‘perfect solution’ to minimal design specifications.” A particular flash point has been the application of Minimalist principles to speculations about how language evolved in the human species. This paper argues that Minimalism is well supported as a plausible approach to language evolution. It is claimed that an assumption of minimal design specifications like that employed in MP syntax satisfies three key desiderata of evolutionary and general scientific plausibility: Physical Optimism, Rational Optimism, and Darwin’s Problem. In support of this claim, the methodologies employed in MP to maximise parsimony are characterised through an analysis of recent theories in Minimalist syntax, and those methodologies are defended with reference to practices and arguments from evolutionary biology and other natural sciences.

  8. An Introduction to Experimental Design Research

    DEFF Research Database (Denmark)

    Cash, Philip; Stanković, Tino; Štorga, Mario

    2016-01-01

    Design research brings together influences from the whole gamut of social, psychological, and more technical sciences to create a tradition of empirical study stretching back over 50 years (Horvath 2004; Cross 2007). A growing part of this empirical tradition is experimental, which has gained in ...

  9. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    Science.gov (United States)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use
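
    The general file-wrapping pattern described above can be illustrated as follows, with hypothetical file contents and a stand-in command rather than the actual RCOTOOLS, NDARC, or CAMRAD II interfaces: substitute design-variable values into a templated input deck, run the external tool, and extract a named response value from its text output.

```python
# Hypothetical file-wrapping pattern (not the RCOTOOLS/NDARC interfaces):
# write design variables into a templated input deck, run the tool, and
# pull a named response out of its text output.
import re
import subprocess
import sys
from pathlib import Path

TEMPLATE = "rotor_radius = {radius}\ndisk_loading = {disk_loading}\n"


def write_input(path, **design_vars):
    Path(path).write_text(TEMPLATE.format(**design_vars))


def run_tool(command):
    return subprocess.run(command, capture_output=True, text=True, check=True).stdout


def extract_response(text, name):
    match = re.search(rf"{name}\s*=\s*([-+0-9.eE]+)", text)
    return float(match.group(1)) if match else None


write_input("case.inp", radius=8.2, disk_loading=95.0)
# Stand-in command; a real wrapper would launch the sizing or analysis code.
output = run_tool([sys.executable, "-c", "print('GROSS_WEIGHT = 5234.0')"])
print(extract_response(output, "GROSS_WEIGHT"))
```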

  10. OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING

    OpenAIRE

    Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.

    2016-01-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experi...

  11. Optimization design for SST-1 Tokamak insulators

    International Nuclear Information System (INIS)

    Zhang Yuanbin; Pan Wanjiang

    2012-01-01

    With the help of the ANSYS FEA technique, the high-voltage and cryogenic properties of the SST-1 Tokamak insulators were obtained, and the structure of the insulators was designed and modified by taking the simulation results into account. The simulation results indicate that the optimized structure has better high-voltage insulation and cryogenic mechanical properties, and can also fulfill the qualification criteria of the SST-1 Tokamak insulators. (authors)

  12. Robust Structured Control Design via LMI Optimization

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2011-01-01

    This paper presents a new procedure for discrete-time robust structured control design. Parameter-dependent nonconvex conditions for stabilizable and induced L2-norm performance controllers are solved by an iterative linear matrix inequalities (LMI) optimization. A wide class of controller...... structures including decentralized of any order, fixed-order dynamic output feedback, static output feedback can be designed robust to polytopic uncertainties. Stability is proven by a parameter-dependent Lyapunov function. Numerical examples on robust stability margins shows that the proposed procedure can...

  13. Analysis and design optimization of flexible pavement

    Energy Technology Data Exchange (ETDEWEB)

    Mamlouk, M.S.; Zaniewski, J.P.; He, W.

    2000-04-01

    A project-level optimization approach was developed to minimize total pavement cost within an analysis period. Using this approach, the designer is able to select the optimum initial pavement thickness, overlay thickness, and overlay timing. The model in this approach is capable of predicting both pavement performance and condition in terms of roughness, fatigue cracking, and rutting. The developed model combines the American Association of State Highway and Transportation Officials (AASHTO) design procedure and the mechanistic multilayer elastic solution. The Optimization for Pavement Analysis (OPA) computer program was developed using the prescribed approach. The OPA program incorporates the AASHTO equations, the multilayer elastic system ELSYM5 model, and the nonlinear dynamic programming optimization technique. The program is PC-based and can run in either a Windows 3.1 or a Windows 95 environment. Using the OPA program, a typical pavement section was analyzed under different traffic volumes and material properties. The optimum design strategy that produces the minimum total pavement cost in each case was determined. The initial construction cost, overlay cost, highway user cost, and total pavement cost were also calculated. The methodology developed during this research should lead to more cost-effective pavements for agencies adopting the recommended analysis methods.

  14. Experimental design matters for statistical analysis

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Schaarschmidt, Frank; Onofri, Andrea

    2018-01-01

    , the experimental design is often more or less neglected when analyzing data. Two data examples were analyzed using different modelling strategies: Firstly, in a randomized complete block design, mean heights of maize treated with a herbicide and one of several adjuvants were compared. Secondly, translocation...... of an insecticide applied to maize as a seed treatment was evaluated using incomplete data from an unbalanced design with several layers of hierarchical sampling. Extensive simulations were carried out to further substantiate the effects of different modelling strategies. RESULTS: It was shown that results from sub...

  15. Discrete optimization of isolator locations for vibration isolation systems: An analytical and experimental investigation

    Energy Technology Data Exchange (ETDEWEB)

    Ponslet, E.R.; Eldred, M.S. [Sandia National Labs., Albuquerque, NM (United States). Structural Dynamics Dept.

    1996-05-17

    An analytical and experimental study is conducted to investigate the effect of isolator locations on the effectiveness of vibration isolation systems. The study uses isolators with fixed properties and evaluates potential improvements to the isolation system that can be achieved by optimizing isolator locations. Because the available locations for the isolators are discrete in this application, a Genetic Algorithm (GA) is used as the optimization method. The system is modeled in MATLAB{trademark} and coupled with the GA available in the DAKOTA optimization toolkit under development at Sandia National Laboratories. Design constraints dictated by hardware and experimental limitations are implemented through penalty function techniques. A series of GA runs reveal difficulties in the search on this heavily constrained, multimodal, discrete problem. However, the GA runs provide a variety of optimized designs with predicted performance from 30 to 70 times better than a baseline configuration. An alternate approach is also tested on this problem: it uses continuous optimization, followed by rounding of the solution to neighboring discrete configurations. Results show that this approach leads to either infeasible or poor designs. Finally, a number of optimized designs obtained from the GA searches are tested in the laboratory and compared to the baseline design. These experimental results show a 7 to 46 times improvement in vibration isolation from the baseline configuration.
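
    A toy sketch of such a discrete placement search is shown below: a small genetic algorithm picks four isolator locations out of twelve candidate mounting points, with a penalty term standing in for the hardware constraints. The transmissibility objective is invented; the study itself used a MATLAB system model driven by the GA in the DAKOTA toolkit.

```python
# Toy genetic-algorithm sketch of a discrete isolator-placement search with a
# penalty enforcing a minimum spacing constraint.  Objective and constraint
# are invented; lower fitness is better.
import random

CANDIDATES = list(range(12))          # discrete mounting points on a ring
N_SELECT, POP, GENS = 4, 40, 60
random.seed(0)


def transmissibility(layout):
    # toy objective: well-spread layouts isolate better (lower is better)
    spread = sum(min((a - b) % 12, (b - a) % 12) for a in layout for b in layout)
    return 100.0 / (1.0 + spread)


def penalty(layout):
    # hardware constraint: no two isolators on adjacent mounting points
    s = sorted(layout)
    gaps = [(s[(i + 1) % N_SELECT] - s[i]) % 12 for i in range(N_SELECT)]
    return sum(10.0 for g in gaps if g < 2)


def fitness(layout):
    return transmissibility(layout) + penalty(layout)


def random_layout():
    return tuple(sorted(random.sample(CANDIDATES, N_SELECT)))


def crossover(a, b):
    pool = list(set(a) | set(b))
    return tuple(sorted(random.sample(pool, N_SELECT)))


def mutate(layout):
    layout = list(layout)
    layout[random.randrange(N_SELECT)] = random.choice(CANDIDATES)
    unique = tuple(sorted(set(layout)))
    return unique if len(unique) == N_SELECT else random_layout()


pop = [random_layout() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    elite = pop[: POP // 2]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children
print("best layout:", pop[0], "objective:", round(fitness(pop[0]), 3))
```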

  16. Design and optimization of tidal turbine airfoil

    Energy Technology Data Exchange (ETDEWEB)

    Grasso, F. [ECN Wind Energy, Petten (Netherlands)

    2011-07-15

    In order to increase the ratio of energy capture to the loading and thereby to reduce the cost of energy, the use of specially tailored airfoils is needed. This work is focused on the design of an airfoil for marine application. Firstly, the requirements for this class of airfoils are illustrated and discussed with reference to the requirements for wind turbine airfoils. Then, the design approach is presented. This is a numerical optimization scheme in which a gradient-based algorithm is used, coupled with the RFOIL solver and a composite Bezier geometrical parameterization. A particularly sensitive point is the choice and implementation of constraints; in order to formalize the design requirements in the most complete and effective way, the effects of activating specific constraints are discussed. Particular importance is given to the cavitation phenomenon. Finally, a numerical example regarding the design of a high-efficiency tidal turbine airfoil is illustrated and the results are compared with existing turbine airfoils.

  17. Design optimization of condenser microphone: a design of experiment perspective.

    Science.gov (United States)

    Tan, Chee Wee; Miao, Jianmin

    2009-06-01

    A well-designed condenser microphone backplate is very important in the attainment of good frequency response characteristics--high sensitivity and wide bandwidth with flat response--and low mechanical-thermal noise. To study the design optimization of the backplate, a 2^6 factorial design with a single replicate, involving six backplate parameters and four responses, has been undertaken on a comprehensive condenser microphone model developed by Zuckerwar. Through the elimination of insignificant parameters via normal probability plots of the effect estimates, the projection of an unreplicated factorial design into a replicated one can be performed, allowing an analysis of variance to be carried out on the factorial design. The air gap and slot have significant effects on the sensitivity, mechanical-thermal noise, and bandwidth, while the slot/hole location interaction has a major influence on the latter two responses. An organized and systematic approach to designing the backplate is summarized.
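
    The effect-estimation step of such an unreplicated 2^6 factorial can be sketched as follows; the response below is simulated from an invented model rather than the Zuckerwar microphone model, and only main effects are printed.

```python
# Effect estimation for an unreplicated 2^6 factorial: build the +/-1 design
# matrix and compute main-effect contrasts; the response is simulated from an
# invented model, not the microphone model referenced above.
import numpy as np
from itertools import product

factors = ["air_gap", "slot", "hole_dia", "hole_count", "backplate_r", "slot_loc"]
X = np.array(list(product([-1.0, 1.0], repeat=len(factors))))     # 64 runs

rng = np.random.default_rng(3)
y = (5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1]        # toy truth: air gap and slot matter
     + 1.0 * X[:, 1] * X[:, 5]                  # plus a slot x slot_loc interaction
     + rng.normal(0.0, 0.3, len(X)))

main_effects = 2.0 * X.T @ y / len(X)           # mean(y | +1) - mean(y | -1)
for name, effect in sorted(zip(factors, main_effects), key=lambda t: -abs(t[1])):
    print(f"{name:12s} {effect:+.2f}")
```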

  18. Computational and experimental optimization of the exhaust air energy recovery wind turbine generator

    International Nuclear Information System (INIS)

    Tabatabaeikia, Seyedsaeed; Ghazali, Nik Nazri Bin Nik; Chong, Wen Tong; Shahizare, Behzad; Izadyar, Nima; Esmaeilzadeh, Alireza; Fazlizan, Ahmad

    2016-01-01

    Highlights: • Studying the viability of harvesting wasted energy with an exhaust air recovery generator. • Optimizing the design using response surface methodology. • Validation of the optimization and computational results by experimental tests. • Investigation of the flow behaviour using computational fluid dynamics simulations. • Technical and economic study of the exhaust air recovery generator. - Abstract: This paper studies the optimization of an innovative exhaust air recovery wind turbine generator through computational fluid dynamics (CFD) simulations. The optimization strategy aims to maximize the overall system energy generation while guaranteeing that the cooling tower performance is not degraded by a reduced airflow intake or an increased fan motor power consumption. The wind turbine rotor position, modified diffuser plates, and the introduction of separator plates to the design are considered as the variable factors for the optimization. The generated power coefficient is selected as the optimization objective. Unlike most previous optimizations in the field of wind turbines, this study uses response surface methodology (RSM), an optimization procedure based on multivariate statistical techniques. A comprehensive study on the CFD parameters, including the mesh resolution, the turbulence model and the transient time step values, is presented. The system is simulated using the SST k-ω turbulence model, and both the computational and optimization results are validated against experimental data obtained in the laboratory. Results show that the optimization strategy can improve the wind turbine generated power by 48.6% compared to the baseline design. Meanwhile, it is able to enhance the fan intake airflow rate and decrease the fan motor power consumption. The obtained optimization equations are also validated by both CFD and experimental results, and a negligible deviation in the range of 6–8.5% is observed.

  19. Optimization of the National Ignition Facility primary shield design

    International Nuclear Information System (INIS)

    Annese, C.E.; Watkins, E.F.; Greenspan, E.; Miller, W.F.

    1993-10-01

    Minimum-cost design concepts for the primary shield of the National Ignition Facility (NIF), a laser fusion experimental facility, are searched for with the help of the optimization code SWAN. The computational method developed for this search involves incorporating the time dependence of the delayed photon field within effective delayed photon production cross sections. This method enables one to address the time-dependent problem using relatively simple, time-independent transport calculations, thus significantly simplifying the design process. A novel approach was used for the identification of the optimal combination of constituents that will minimize the shield cost; it involves the generation, with SWAN, of effectiveness functions for replacing materials on an equal-cost basis. The minimum-cost shield design concept was found to consist of a mixture of polyethylene and low-cost, low-activation materials such as SiC, with boron added near the shield boundaries.

  20. Pareto Optimal Design for Synthetic Biology.

    Science.gov (United States)

    Patanè, Andrea; Santoro, Andrea; Costanza, Jole; Carapezza, Giovanni; Nicosia, Giuseppe

    2015-08-01

    Recent advances in synthetic biology call for robust, flexible and efficient in silico optimization methodologies. We present a Pareto design approach for the bi-level optimization problem associated with the overproduction of specific metabolites in Escherichia coli. Our method efficiently explores the high-dimensional genetic manipulation space, finding a number of trade-offs between synthetic and biological objectives, hence furnishing deeper biological insight into the addressed problem and important results for industrial purposes. We demonstrate the computational capabilities of our Pareto-oriented approach by comparing it with state-of-the-art heuristics in the overproduction problems of i) 1,4-butanediol, ii) myristoyl-CoA, iii) malonyl-CoA, iv) acetate and v) succinate. We show that our algorithms are able to gracefully adapt and scale to more complex models and more biologically relevant simulations of the genetic manipulations allowed. The results obtained for 1,4-butanediol overproduction significantly outperform results previously obtained, in terms of the 1,4-butanediol to biomass formation ratio and knock-out costs. In particular, the overproduction percentage is +662.7%, from 1.425 mmol h⁻¹ gDW⁻¹ (wild type) to 10.869 mmol h⁻¹ gDW⁻¹, with a knockout cost of 6. The Pareto-optimal designs found in the fatty acid optimizations strictly dominate the ones obtained by the other methodologies, e.g., improvements in biomass and myristoyl-CoA exportation of +21.43% (0.17 h⁻¹) and +5.19% (1.62 mmol h⁻¹ gDW⁻¹), respectively. Furthermore, the CPU time required by our heuristic approach is more than halved. Finally, we implement pathway-oriented sensitivity analysis, epsilon-dominance analysis and robustness analysis to enhance our biological understanding of the problem and to improve the optimization algorithm capabilities.
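
    Independently of the specific heuristics, the final step of keeping only non-dominated designs can be sketched as a simple Pareto filter over candidate designs scored on two objectives to be maximized (for example, target metabolite flux and biomass formation); the scores below are random placeholders.

```python
# Generic Pareto filter: keep the designs not dominated on two objectives
# that are both to be maximized.  The candidate scores are placeholders.
import numpy as np


def pareto_front(scores):
    """scores: (n, m) array, larger is better in every column."""
    keep = np.ones(len(scores), dtype=bool)
    for i, s in enumerate(scores):
        if not keep[i]:
            continue
        dominated = np.all(scores <= s, axis=1) & np.any(scores < s, axis=1)
        keep &= ~dominated
        keep[i] = True           # a point never dominates itself
    return np.where(keep)[0]


rng = np.random.default_rng(4)
designs = rng.random((50, 2))            # columns: product flux, biomass
print("non-dominated designs:", pareto_front(designs))
```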

  1. Ground Vehicle System Integration (GVSI) and Design Optimization Model

    National Research Council Canada - National Science Library

    Horton, William

    1996-01-01

    This report documents the Ground Vehicle System Integration (GVSI) and Design Optimization Model. GVSI is a top-level analysis tool designed to support engineering tradeoff studies and vehicle design optimization efforts...

  2. Machine Learning Techniques in Optimal Design

    Science.gov (United States)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members, E1, E2, E3, and E4, that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution

  3. Superlattice design for optimal thermoelectric generator performance

    Science.gov (United States)

    Priyadarshi, Pankaj; Sharma, Abhishek; Mukherjee, Swarnadip; Muralidharan, Bhaskaran

    2018-05-01

    We consider the design of an optimal superlattice thermoelectric generator via the energy bandpass filter approach. Various configurations of superlattice structures are explored to obtain a bandpass transmission spectrum that approaches the ideal ‘boxcar’ form, which is now well known to yield the largest efficiency at a given output power in the ballistic limit. Using the coherent non-equilibrium Green’s function formalism coupled self-consistently with the Poisson equation, we identify such an ideal structure and also demonstrate that it is almost immune to the deleterious effects of self-consistent charging and device variability. Analyzing various superlattice designs, we conclude that a superlattice with a Gaussian distribution of the barrier thickness offers the best thermoelectric efficiency at maximum power. The best operating regime of this device design provides a maximum power in the range of 0.32–0.46 MW/m² at efficiencies between 54% and 43% of the Carnot efficiency. We also analyze our device designs with the conventional figure-of-merit approach to corroborate the results so obtained, and we note a high value of zT_el = 6 in the case of a Gaussian distribution of the barrier thickness. With existing advanced thin-film growth technology, the suggested superlattice structures can be achieved, and such optimized thermoelectric performance can be realized.

  4. Design search and optimization in aerospace engineering.

    Science.gov (United States)

    Keane, A J; Scanlan, J P

    2007-10-15

    In this paper, we take a design-led perspective on the use of computational tools in the aerospace sector. We briefly review the current state-of-the-art in design search and optimization (DSO) as applied to problems from aerospace engineering, focusing on those problems that make heavy use of computational fluid dynamics (CFD). This ranges over issues of representation, optimization problem formulation and computational modelling. We then follow this with a multi-objective, multi-disciplinary example of DSO applied to civil aircraft wing design, an area where this kind of approach is becoming essential for companies to maintain their competitive edge. Our example considers the structure and weight of a transonic civil transport wing, its aerodynamic performance at cruise speed and its manufacturing costs. The goals are low drag and cost while holding weight and structural performance at acceptable levels. The constraints and performance metrics are modelled by a linked series of analysis codes, the most expensive of which is a CFD analysis of the aerodynamics using an Euler code with coupled boundary layer model. Structural strength and weight are assessed using semi-empirical schemes based on typical airframe company practice. Costing is carried out using a newly developed generative approach based on a hierarchical decomposition of the key structural elements of a typical machined and bolted wing-box assembly. To carry out the DSO process in the face of multiple competing goals, a recently developed multi-objective probability of improvement formulation is invoked along with stochastic process response surface models (Krigs). This approach both mitigates the significant run times involved in CFD computation and also provides an elegant way of balancing competing goals while still allowing the deployment of the whole range of single objective optimizers commonly available to design teams.
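
    The probability-of-improvement acquisition mentioned above can be sketched in a few lines once a Kriging model supplies predictive means and standard deviations at candidate designs; the values below are placeholders for such a fitted model, and the single-objective form is shown rather than the multi-objective variant used in the paper.

```python
# Probability-of-improvement acquisition from a Kriging model's predictive
# mean and standard deviation (minimization); predictive values are placeholders.
import numpy as np
from scipy.stats import norm


def probability_of_improvement(mu, sigma, f_best, xi=0.0):
    sigma = np.maximum(sigma, 1e-12)
    return norm.cdf((f_best - xi - mu) / sigma)


candidates = np.linspace(0.0, 1.0, 11)          # e.g. a normalized design variable
mu = 0.3 + 0.2 * (candidates - 0.4) ** 2        # Kriging predictive mean (placeholder)
sigma = 0.05 + 0.1 * np.abs(candidates - 0.5)   # Kriging predictive std (placeholder)
f_best = 0.32                                   # best evaluated objective so far

pi = probability_of_improvement(mu, sigma, f_best)
print("next design to evaluate:", candidates[np.argmax(pi)])
```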

  5. Design and optimization of tidal turbine airfoil

    Energy Technology Data Exchange (ETDEWEB)

    Grasso, F. [ECN Wind Energy, Petten (Netherlands)

    2012-03-15

    To increase the ratio of energy capture to the loading and, thereby, to reduce the cost of energy, the use of specially tailored airfoils is needed. This work is focused on the design of an airfoil for marine application. Firstly, the requirements for this class of airfoils are illustrated and discussed with reference to the requirements for wind turbine airfoils. Then, the design approach is presented. This is a numerical optimization scheme in which a gradient-based algorithm is used, coupled with the RFOIL solver and a composite Bezier geometrical parameterization. A particularly sensitive point is the choice and implementation of constraints. A section of the present work is dedicated to addressing this point; particular importance is given to the cavitation phenomenon. Finally, a numerical example regarding the design of a high-efficiency hydrofoil is illustrated, and the results are compared with existing turbine airfoils, considering also the effect on turbine performance due to the different airfoils.

  6. Optimization of the NIF ignition point design hohlraum

    International Nuclear Information System (INIS)

    Callahan, D A; Hinkel, D E; Berger, R L; Divol, L; Dixit, S N; Edwards, M J; Haan, S W; Jones, O S; Lindl, J D; Meezan, N B; Michel, P A; Pollaine, S M; Suter, L J; Town, R P J; Bradley, P A

    2008-01-01

    In preparation for the start of NIF ignition experiments, we have designed a portfolio of targets that span the temperature range that is consistent with initial NIF operations: 300 eV, 285 eV, and 270 eV. Because these targets are quite complicated, we have developed a plan for choosing the optimum hohlraum for the first ignition attempt that is based on this portfolio of designs coupled with early NIF experiments using 96 beams. These early experiments will measure the laser plasma instabilities of the candidate designs and will demonstrate our ability to tune symmetry in these designs. These experimental results, coupled with the theory and simulations that went into the designs, will allow us to choose the optimal hohlraum for the first NIF ignition attempt.

  7. Optimization of the NIF ignition point design hohlraum

    Science.gov (United States)

    Callahan, D. A.; Hinkel, D. E.; Berger, R. L.; Divol, L.; Dixit, S. N.; Edwards, M. J.; Haan, S. W.; Jones, O. S.; Lindl, J. D.; Meezan, N. B.; Michel, P. A.; Pollaine, S. M.; Suter, L. J.; Town, R. P. J.; Bradley, P. A.

    2008-05-01

    In preparation for the start of NIF ignition experiments, we have designed a portfolio of targets that span the temperature range that is consistent with initial NIF operations: 300 eV, 285 eV, and 270 eV. Because these targets are quite complicated, we have developed a plan for choosing the optimum hohlraum for the first ignition attempt that is based on this portfolio of designs coupled with early NIF experiments using 96 beams. These early experiments will measure the laser plasma instabilities of the candidate designs and will demonstrate our ability to tune symmetry in these designs. These experimental results, coupled with the theory and simulations that went into the designs, will allow us to choose the optimal hohlraum for the first NIF ignition attempt.

  8. Conceptual design of Fusion Experimental Reactor (FER)

    International Nuclear Information System (INIS)

    Tone, T.; Fujisawa, N.

    1983-01-01

    Conceptual design studies of the Fusion Experimental Reactor (FER) have been performed. The FER has the objective of achieving self-ignition and demonstrating engineering feasibility as the next-generation tokamak after JT-60. Various concepts of the FER have been considered. The reference design is based on a double-null divertor. Optional design studies with some attractive features based on advanced concepts, such as a pumped limiter and RF current drive, have been carried out. Key design parameters are: fusion power of 440 MW, average neutron wall loading of 1 MW/m², major radius of 5.5 m, plasma minor radius of 1.1 m, plasma elongation of 1.5, plasma current of 5.3 MA, toroidal beta of 4%, toroidal field on the plasma axis of 5.7 T, and a tritium breeding ratio above unity

  9. Conceptual design of fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1985-01-01

    The Fusion Experimental Reactor (FER) being developed at JAERI as a next generation tokamak to JT-60 has a major mission of realizing a self-ignited long-burning DT plasma and demonstrating engineering feasibility. During FY82 and FY83 a comprehensive and intensive conceptual design study has been conducted for a pulsed operation FER as a reference option which employs a conventional inductive current drive and a double-null divertor. In parallel with the reference design, studies have been carried out to evaluate advanced reactor concepts such as quasi-steady state operation and steady state operation based on RF current drive and pumped limiter, and comparative studies for single-null divertor/pumped limiter. This report presents major results obtained primarily from FY83 design studies, while the results of FY82 design studies are described in previous references (JAERI-M 83-213--216). (author)

  10. Fundamentals of statistical experimental design and analysis

    CERN Document Server

    Easterling, Robert G

    2015-01-01

    Professionals in all areas - business; government; the physical, life, and social sciences; engineering; medicine, etc. - benefit from using statistical experimental design to better understand their worlds and then use that understanding to improve the products, processes, and programs they are responsible for. This book aims to provide the practitioners of tomorrow with a memorable, easy to read, engaging guide to statistics and experimental design. This book uses examples, drawn from a variety of established texts, and embeds them in a business or scientific context, seasoned with a dash of humor, to emphasize the issues and ideas that led to the experiment and the what-do-we-do-next? steps after the experiment. Graphical data displays are emphasized as means of discovery and communication and formulas are minimized, with a focus on interpreting the results that software produce. The role of subject-matter knowledge, and passion, is also illustrated. The examples do not require specialized knowledge, and t...

  11. Design and experimentation of BSFQ logic devices

    International Nuclear Information System (INIS)

    Hosoki, T.; Kodaka, H.; Kitagawa, M.; Okabe, Y.

    1999-01-01

    Rapid single flux quantum (RSFQ) logic needs synchronous pulses for each gate, so the clock-wiring problem becomes more serious when designing larger scale circuits with this logic. We have therefore proposed a new SFQ logic that follows Boolean algebra exactly by using set and reset pulses. With this logic, the level information of the current input is transmitted by pulses generated by level-to-pulse converters, and each gate computes its logic function using the phase level established by these pulses. Therefore, our logic needs no clock at each gate. We called this logic 'Boolean SFQ (BSFQ) logic'. In this paper, we report the design and experimental testing of an AND gate with inverting input based on BSFQ logic. The experimental results for OR and XOR gates are also reported. (author)

  12. Bioinspiration: applying mechanical design to experimental biology.

    Science.gov (United States)

    Flammang, Brooke E; Porter, Marianne E

    2011-07-01

    The production of bioinspired and biomimetic constructs has fostered much collaboration between biologists and engineers, although the extent of biological accuracy employed in the designs produced has not always been a priority. Even the exact definitions of "bioinspired" and "biomimetic" differ among biologists, engineers, and industrial designers, leading to confusion regarding the level of integration and replication of biological principles and physiology. By any name, biologically-inspired mechanical constructs have become an increasingly important research tool in experimental biology, offering the opportunity to focus research by creating model organisms that can be easily manipulated to fill a desired parameter space of structural and functional repertoires. Innovative researchers with both biological and engineering backgrounds have found ways to use bioinspired models to explore the biomechanics of organisms from all kingdoms to answer a variety of different questions. Bringing together these biologists and engineers will hopefully result in an open discourse of techniques and fruitful collaborations for experimental and industrial endeavors.

  13. Conceptual design of fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1984-02-01

    This report describes the engineering conceptual design of Fusion Experimental Reactor (FER) which is to be built as a next generation tokamak machine. This design covers overall reactor systems including MHD equilibrium analysis, mechanical configuration of reactor, divertor, pumped limiter, first wall/breeding blanket/shield, toroidal field magnet, poloidal field magnet, cryostat, electromagnetic analysis, vacuum system, power handling and conversion, NBI, RF heating device, tritium system, neutronics, maintenance, cooling system and layout of facilities. The engineering comparison of a divertor with pumped limiters and safety analysis of reactor systems are also conducted. (author)

  14. Design Optimization of Irregular Cellular Structure for Additive Manufacturing

    Science.gov (United States)

    Song, Guo-Hua; Jing, Shi-Kai; Zhao, Fang-Lei; Wang, Ye-Dong; Xing, Hao; Zhou, Jing-Tao

    2017-09-01

    Irregular cellular structures have great potential in the light-weight design field. However, research on optimizing irregular cellular structures has not yet been reported, due to the difficulties in their modeling technology. Based on variable-density topology optimization theory, an efficient method for optimizing the topology of irregular cellular structures fabricated through additive manufacturing processes is proposed. The proposed method utilizes tangent circles to automatically generate the main outline of the irregular cellular structure. The topological layout of each cell structure is optimized using the relative density information obtained from the proposed modified SIMP method. A mapping relationship between the cell structure and the relative-density element is built to determine the diameter of each cell structure. The results show that the irregular cellular structure can be optimized with the proposed method. The simulation and experimental results are similar for the irregular cellular structure and indicate that the maximum deformation obtained using the modified Solid Isotropic Microstructures with Penalization (SIMP) approach is 5.4×10⁻⁵ mm lower than that using the standard SIMP approach under the same external load. The proposed research provides guidance for the design of other irregular cellular structures.

  15. A design approach for integrating thermoelectric devices using topology optimization

    DEFF Research Database (Denmark)

    Soprani, Stefano; Haertel, Jan Hendrik Klaas; Lazarov, Boyan Stefanov

    2016-01-01

    Efficient operation of thermoelectric devices strongly relies on the thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing...

  16. Experimental validation of a new heterogeneous mechanical test design

    Science.gov (United States)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests to collect the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, such as digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure in which an indicator capable of evaluating the heterogeneity and the richness of the strain information served as the objective. However, no experimental validation had yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is applied to the data obtained from the experimental tests, and the results are compared to a reference numerical solution.
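
    A minimal sketch of the Finite Element Model Updating idea used in such identification work is given below, assuming a toy analytic forward model and synthetic "measurements" in place of a real finite element solve and DIC data; the parameter names (K, n) and the power-law form are hypothetical placeholders.

        import numpy as np
        from scipy.optimize import least_squares

        def model_strain(params, x):
            # Placeholder forward model standing in for the FE simulation of the
            # heterogeneous test: a strain field over coordinates x driven by two
            # hypothetical material parameters (K, n).
            K, n = params
            return (x / K) ** n

        # Synthetic "measured" full-field data, used here in place of DIC output.
        x = np.linspace(0.1, 1.0, 50)
        true_params = np.array([1.5, 0.35])
        measured = model_strain(true_params, x) + np.random.default_rng(0).normal(0, 1e-3, x.size)

        def residuals(params):
            # FEMU-style residual: gap between simulated and measured strain fields.
            return model_strain(params, x) - measured

        fit = least_squares(residuals, x0=[1.0, 0.5], bounds=([0.1, 0.05], [10.0, 1.0]))
        print("identified parameters:", np.round(fit.x, 3))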

  17. Global optimization methods for engineering design

    Science.gov (United States)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality; however, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, the existence of a global minimum is guaranteed. However, since no global optimality conditions are available, a global solution can be found only by an exhaustive search that satisfies the defining inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods; more testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm for finding an initial feasible point is required. Such algorithms need to be developed and evaluated.
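
    The zooming strategy mentioned above can be sketched as follows, assuming a simple bound-constrained test problem: after each pass of local minimizations, only points that beat the current best value by a margin are accepted, and the search stops when no restart meets the tightened target. The multistart scheme, the margin eps, and the test function are illustrative assumptions, not the paper's algorithm or test cases.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):
            # Multimodal test function standing in for "Problem P".
            x = np.asarray(x)
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        def zooming_search(f, bounds, eps=1e-3, restarts=30, seed=0):
            # Repeatedly tighten the target f(x) <= f_best - eps and try to find a
            # local minimum that satisfies it; stop when no restart succeeds.
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x_best, f_best = None, np.inf
            while True:
                improved = False
                for _ in range(restarts):
                    x0 = rng.uniform(lo, hi)
                    res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
                    if res.fun < f_best - eps:      # meets the tightened target
                        x_best, f_best = res.x, res.fun
                        improved = True
                if not improved:
                    return x_best, f_best

        bounds = [(-5.12, 5.12)] * 2
        x_star, f_star = zooming_search(f, bounds)
        print("best point", np.round(x_star, 3), "value", round(f_star, 4))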

  18. Global optimization framework for solar building design

    Science.gov (United States)

    Silva, N.; Alves, N.; Pascoal-Faria, P.

    2017-07-01

    The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators; the result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are instead generated automatically according to an objective function. A generative model can be optimized over its parameters, and in this way the best solution for a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of the objective function with respect to a building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline combining an energy calculation tool with a geometric scripting engine is presented. The methods developed lead to an automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The proposed approach will help in the construction of real buildings that consume less energy and contribute to a more sustainable world.

  19. Optimal patch code design via device characterization

    Science.gov (United States)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and of decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
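
    To make the level-count versus robustness tradeoff concrete, the sketch below places a number of code levels uniformly in CIE L* between assumed device limits and decodes a measurement by nearest neighbour. The uniform spacing, the L*-only encoding, and the noise figure are simplifying assumptions of this sketch, not the authors' characterization procedure.

        import numpy as np

        def design_levels(L_min, L_max, n_levels):
            # Place code levels uniformly in CIE L* between the device's printable extremes.
            return np.linspace(L_min, L_max, n_levels)

        def decode(measured_L, levels):
            # Nearest-neighbour decoding of a measured lightness value.
            return int(np.argmin(np.abs(levels - measured_L)))

        def decoding_margin(levels, sigma):
            # Half the smallest gap between adjacent levels, in units of measurement noise.
            return 0.5 * np.min(np.diff(levels)) / sigma

        levels = design_levels(L_min=20.0, L_max=95.0, n_levels=8)
        sigma = 1.2   # assumed combined print + measurement noise in L* units
        print("levels:", np.round(levels, 1))
        print("margin (sigma units):", round(decoding_margin(levels, sigma), 2))
        print("decoded index for L*=52.7:", decode(52.7, levels))

    Increasing n_levels raises the coding capacity but shrinks the margin, which is exactly the tradeoff the device characterization is meant to balance.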

  20. Conceptual design of helium experimental loop

    International Nuclear Information System (INIS)

    Yu Xingfu; Feng Kaiming

    2007-01-01

    In a future demonstration fusion power station (DEMO), helium is envisaged as the coolant for plasma-facing components such as the blanket and divertor. All these components have a very complex geometry, with many parallel cooling channels, involving a complex helium flow distribution. Test blanket modules (TBM) of this concept will undergo various tests in the experimental reactor ITER. For the qualification of TBMs, it is indispensable to test mock-ups in a helium loop under realistic pressure and temperature profiles, in order to validate design codes, especially regarding mass flow and heat transfer processes in narrow cooling channels. Similar testing must be performed for the DEMO blanket, currently under development. A Helium Experimental Loop (HELOOP) is planned to be built for TBM tests. Its design parameters are a temperature of 550 degree C, a pressure of 10 MPa and a flow rate of 1 kg/s. In particular, HELOOP is able to: perform full-scale tests of TBMs under realistic conditions; test other components of the He-cooling system in ITER; qualify the purification circuit; and obtain information for the design of the ITER cooling system. The main requirements and characteristics of the HELOOP facility and a preliminary conceptual design are described in the paper. (authors)

  1. Optimal design and experimental validation of a simulated moving bed chromatography for continuous recovery of formic acid in a model mixture of three organic acids from Actinobacillus bacteria fermentation.

    Science.gov (United States)

    Park, Chanhun; Nam, Hee-Geun; Lee, Ki Bong; Mun, Sungyong

    2014-10-24

    The economically efficient separation of formic acid from acetic acid and succinic acid has been a key issue in the production of formic acid by Actinobacillus bacteria fermentation. To address this issue, an optimal three-zone simulated moving bed (SMB) chromatography for continuous separation of formic acid from acetic acid and succinic acid was developed in this study. As a first step, the adsorption isotherm and mass-transfer parameters of each organic acid on the qualified adsorbent (Amberchrom-CG300C) were determined through a series of multiple frontal experiments. The determined parameters were then used in optimizing the SMB process for the considered separation. During this optimization, an additional investigation was carried out to select, between two possible SMB port configurations, the one more advantageous for process performance. It was found that if the properly selected port configuration was adopted in the SMB of interest, the throughput and the formic-acid product concentration could be increased by 82% and 181%, respectively. Finally, the optimized SMB process based on the properly selected port configuration was tested experimentally using a self-assembled SMB unit with three zones. The SMB experimental results and the relevant computer simulation verified that the process developed in this study was successful in the continuous recovery of formic acid from the ternary organic-acid mixture of interest with high throughput, high purity, high yield, and high product concentration.

  2. Multi-Disciplinary Design Optimization Using WAVE

    Science.gov (United States)

    Irwin, Keith

    2000-01-01

    The objective was to develop an associative control structure (framework) in the UG WAVE environment enabling multi-disciplinary design of turbine propulsion systems. The capabilities of WAVE were evaluated to assess its use as a rapid optimization and productivity tool. This project also identified future WAVE product enhancements that will make the tool still more beneficial for product development.

  3. Conceptual design of fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1984-01-01

    The conceptual design of the Fusion Experimental Reactor (FER), whose objective is to realize self-ignition with the D-T reaction, is reported. The mechanical configuration of the FER is characterized by a noncircular plasma and a double-null divertor. The primary aim of the design studies is to demonstrate the feasibility of reactor structures that are as compact and simple as possible, with removable torus sectors. The structures of each component, such as the first wall, blanket, shielding, divertor and magnets, have been designed. Essential reactor plant system requirements are also discussed. In addition, a brief concept of a steady-state reactor based on RF current drive is presented; the main aim at this stage is to examine the physics issues of a possible RF steady-state reactor. (author)

  4. Conceptual design of fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1984-03-01

    A conceptual design study (option C) has been carried out for the Fusion Experimental Reactor (FER). In addition to the design of the tokamak reactor and associated systems based on the reference design specifications, the feasibility of a water-shield reactor concept was examined as a topical study. The design study for the reference tokamak reactor has produced a reactor concept for the FER, along with the major R&D items for the concept, based on close examination of the thermal design, electromagnetics, neutronics and remote maintenance. Particular effort has been directed to the area of electromagnetics. Detailed analyses with close simulation models have been performed on PF coil arrangements and configurations, shell effects of the blanket on plasma position instability, feedback control, and eddy currents during disruptions. The major design specifications are as follows: peak fusion power 437 MW; major radius 5.5 m; minor radius 1.1 m; plasma elongation 1.5; plasma current 5.3 MA; toroidal beta 4%; field on axis 5.7 T. (author)

  5. Design optimization of radiation-hardened CMOS integrated circuits

    International Nuclear Information System (INIS)

    1975-01-01

    Ionizing-radiation-induced threshold voltage shifts in CMOS integrated circuits will drastically degrade circuit performance unless the design parameters related to the fabrication process are properly chosen. To formulate an approach to CMOS design optimization, experimentally observed analytical relationships showing strong dependences between threshold voltage shifts and silicon dioxide thickness are utilized. These measurements were made using radiation-hardened aluminum-gate CMOS inverter circuits and have been corroborated by independent data taken from MOS capacitor structures. Knowledge of these relationships allows one to define ranges of acceptable CMOS design parameters based upon radiation-hardening capabilities and post-irradiation performance specifications. Furthermore, they permit actual design optimization of CMOS integrated circuits which results in optimum pre- and post-irradiation performance with respect to speed, noise margins, and quiescent power consumption. Theoretical and experimental results of these procedures, the applications of which can mean the difference between failure and success of a CMOS integrated circuit in a radiation environment, are presented

  6. Optimal design of a hybridization scheme with a fuel cell using genetic optimization

    Science.gov (United States)

    Rodriguez, Marco A.

    The fuel cell is one of the most dependable "green power" technologies, readily available for immediate application. It enables direct conversion of hydrogen and other gases into electric energy without any pollution of the environment. However, efficient power generation is a strictly stationary process that cannot operate under a dynamic environment. Consequently, the fuel cell becomes practical only within a specially designed hybridization scheme capable of power storage and power management functions. The resulting technology can be utilized to its full potential only when both the fuel cell element and the entire hybridization scheme are optimally designed. Design optimization in engineering is among the most complex computational tasks due to the multidimensionality, nonlinearity, discontinuity and constraints of the underlying optimization problem. This research aims at the optimal utilization of fuel cell technology through the use of genetic optimization and advanced computing. This study implements genetic optimization in the definition of optimum hybridization rules for a PEM fuel cell/supercapacitor power system. PEM fuel cells exhibit high energy density, but they are not intended for pulsating power draw applications. They work better in steady-state operation and are therefore often hybridized. In a hybrid system, the fuel cell provides power during steady-state operation while capacitors or batteries augment the power of the fuel cell during power surges. Capacitors and batteries can also be recharged when the motor is acting as a generator. Making analogies to driving cycles, three hybrid system operating modes are investigated: 'Flat' mode, 'Uphill' mode, and 'Downhill' mode. In the process of discovering the switching rules for these three modes, we also generate a model of a 30 W PEM fuel cell. This study also proposes the optimum design of a 30 W PEM fuel cell. The PEM fuel cell model and the hybridization switching rules are postulated

  7. CFD based draft tube hydraulic design optimization

    International Nuclear Information System (INIS)

    McNabb, J; Murry, N; Mullins, B F; Devals, C; Kyriacou, S A

    2014-01-01

    The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, being in kinetic form leaving the runner, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20 of which a subset is allowed to vary during the optimization process) and are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn governs the portion of the available kinetic energy which will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis

  8. CFD based draft tube hydraulic design optimization

    Science.gov (United States)

    McNabb, J.; Devals, C.; Kyriacou, S. A.; Murry, N.; Mullins, B. F.

    2014-03-01

    The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, being in kinetic form leaving the runner, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20 of which a subset is allowed to vary during the optimization process) and are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn governs the portion of the available kinetic energy which will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis, using a

  9. Optimal Design of an Automotive Exhaust Thermoelectric Generator

    Science.gov (United States)

    Fagehi, Hassan; Attar, Alaa; Lee, Hosung

    2018-07-01

    The consumption of energy continues to increase at an exponential rate, especially in terms of conventional automobiles. Approximately 40% of the energy in the fuel supplied to a vehicle is lost as waste heat exhausted to the environment. The desire to improve fuel efficiency by recovering the exhaust waste heat in automobiles has therefore become an important subject. A thermoelectric generator (TEG) has the potential to convert exhaust waste heat into electricity, provided that it improves fuel economy. The remarkable amount of research being conducted on TEGs indicates that this technology will have a bright future in terms of power generation. The current study discusses the optimal design of the automotive exhaust TEG. An experimental study has been conducted to verify the model, which uses the ideal (standard) equations along with effective material properties. The model is reasonably well verified by the experimental work, mainly due to the use of the effective material properties. Hence, the thermoelectric module that was used in the experiment was optimized by using a developed optimal design theory (dimensionless analysis technique).

  10. Optimal Design of an Automotive Exhaust Thermoelectric Generator

    Science.gov (United States)

    Fagehi, Hassan; Attar, Alaa; Lee, Hosung

    2018-04-01

    The consumption of energy continues to increase at an exponential rate, especially in terms of conventional automobiles. Approximately 40% of the energy in the fuel supplied to a vehicle is lost as waste heat exhausted to the environment. The desire to improve fuel efficiency by recovering the exhaust waste heat in automobiles has therefore become an important subject. A thermoelectric generator (TEG) has the potential to convert exhaust waste heat into electricity, provided that it improves fuel economy. The remarkable amount of research being conducted on TEGs indicates that this technology will have a bright future in terms of power generation. The current study discusses the optimal design of the automotive exhaust TEG. An experimental study has been conducted to verify the model, which uses the ideal (standard) equations along with effective material properties. The model is reasonably well verified by the experimental work, mainly due to the use of the effective material properties. Hence, the thermoelectric module that was used in the experiment was optimized by using a developed optimal design theory (dimensionless analysis technique).

  11. A surrogate based multistage-multilevel optimization procedure for multidisciplinary design optimization

    NARCIS (Netherlands)

    Yao, W.; Chen, X.; Ouyang, Q.; Van Tooren, M.

    2011-01-01

    The optimization procedure is one of the key techniques for addressing the computational and organizational complexities of multidisciplinary design optimization (MDO). Motivated by the idea of synthetically exploiting the advantages of multiple existing optimization procedures and meanwhile complying with

  12. Experimental reversion of the optimal quantum cloning and flipping processes

    International Nuclear Information System (INIS)

    Sciarrino, Fabio; Secondi, Veronica; De Martini, Francesco

    2006-01-01

    The quantum cloner machine maps an unknown arbitrary input qubit into two optimal clones and one optimal flipped qubit. By combining linear and nonlinear optical methods we experimentally implement a scheme that, after the cloning transformation, restores the original input qubit in one of the output channels, by using local measurements, classical communication, and feedforward. This nonlocal method demonstrates how the information on the input qubit can be restored after the cloning process. The realization of the reversion process is expected to find useful applications in the field of modern multipartite quantum cryptography

  13. Sparse linear models: Variational approximate inference and Bayesian experimental design

    International Nuclear Information System (INIS)

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.
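
    The sequential Bayesian experimental design mentioned above can be illustrated in the much simpler linear-Gaussian setting, where the information gain of a candidate measurement has a closed form. The candidate pool, noise variance, and greedy selection in the sketch below are illustrative assumptions and do not reproduce the paper's sparse variational machinery.

        import numpy as np

        def info_gain(Sigma, x, noise_var):
            # Entropy reduction (nats) from measuring y = w.x + noise under a Gaussian prior.
            return 0.5 * np.log1p(x @ Sigma @ x / noise_var)

        def posterior_cov(Sigma, x, noise_var):
            # Rank-one Sherman-Morrison update of the posterior covariance.
            Sx = Sigma @ x
            return Sigma - np.outer(Sx, Sx) / (noise_var + x @ Sx)

        rng = np.random.default_rng(1)
        d, noise_var = 5, 0.1
        Sigma = np.eye(d)                       # prior covariance over the weights
        candidates = rng.normal(size=(200, d))  # pool of candidate measurement vectors

        chosen = []
        for _ in range(3):                      # greedily pick three measurements
            gains = [info_gain(Sigma, x, noise_var) for x in candidates]
            best = int(np.argmax(gains))
            chosen.append(best)
            Sigma = posterior_cov(Sigma, candidates[best], noise_var)

        print("chosen candidate indices:", chosen)
        print("posterior log|Sigma| (uncertainty proxy):", round(np.linalg.slogdet(Sigma)[1], 3))

    Because the posterior covariance of a linear-Gaussian model does not depend on the observed values, the design can be planned entirely in advance; the nonlinear and sparse settings discussed in the record require the approximate inference machinery the authors review.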

  14. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  15. Multi-disciplinary design optimization and performance evaluation of a single stage transonic axial compressor

    International Nuclear Information System (INIS)

    Lee, Sae Il; Lee, Dong Ho; Kim, Kyu Hong; Park, Tae Choon; Lim, Byeung Jun; Kang, Young Seok

    2013-01-01

    The multidisciplinary design optimization method, which integrates aerodynamic performance and structural stability, was utilized in the development of a single-stage transonic axial compressor. An approximation model was created using an artificial neural network for global optimization within given ranges of variables and several design constraints. A genetic algorithm was used to explore the Pareto front and find the maximum value of the objective function. The final design was chosen after a second-stage gradient-based optimization process to improve the accuracy of the optimization. To validate the design procedure, numerical simulations and compressor tests were carried out to evaluate the aerodynamic performance and safety factor of the optimized compressor. The numerical optimal results and the experimental data are in good agreement. The optimum shape of the compressor blade is obtained and compared to the baseline design. The proposed optimization framework improves the aerodynamic efficiency and the safety factor.

  16. Optimal design for MRI surface coils

    International Nuclear Information System (INIS)

    Rivera, M.; Vaquero, J.J.; Santos, A.; Pozo, F. del; Ruiz-Cabello, J.

    1997-01-01

    The aim was to demonstrate the possibility of designing and constructing surface coils or antennae specific to each particular tissue for MRI, producing better results than those provided by a general-purpose surface coil. The study was performed by the Bioengineering and Telemedicine Group of Madrid Polytechnical University and was carried out at the Pluridisciplinary Institute of the Universidad Complutense in Madrid, using a BMT-47/40 BIOSPEC resonance unit from Bruker. Surface coils were custom-designed and constructed for each region to be studied, and optimized to make the specimen excitation field as homogeneous as possible, in addition to reducing the brightness artifact. First, images were obtained of a round water phantom measuring 50 mm in diameter, after which images of laboratory rats and rabbits were obtained. The images thus acquired were compared with the results obtained with the coil provided by the manufacturer of the equipment, and were found to be of better quality, allowing the viewing of deeper tissue in the specimen as well as reducing the brightness artifact. The construction of surface coils for viewing specific tissues or anatomical regions improves image quality. The next step in this ongoing project will be the application of these concepts to units designed for use in humans. (Author) 14 refs

  17. Optimal Ground Source Heat Pump System Design

    Energy Technology Data Exchange (ETDEWEB)

    Ozbek, Metin [Environ Holdings Inc., Princeton, NJ (United States); Yavuzturk, Cy [Univ. of Hartford, West Hartford, CT (United States); Pinder, George [Univ. of Vermont, Burlington, VT (United States)

    2015-04-01

    Despite the facts that GSHPs first gained popularity as early as the 1940s and that they can achieve 30 to 60 percent energy savings and carbon emission reductions relative to conventional HVAC systems, the use of geothermal energy in the U.S. has been less than 1 percent of total energy consumption. The key barriers preventing this technically mature technology from reaching its full commercial potential have been its high installation cost and limited consumer knowledge of, and trust in, GSHP systems to deliver the technology in a cost-effective manner in the marketplace. Led by ENVIRON, with support from the University of Hartford and the University of Vermont, the team developed and tested a software-based decision-making tool ('OptGSHP') for the least-cost design of ground-source heat pump ('GSHP') systems. OptGSHP combines state-of-the-art optimization algorithms with GSHP-specific HVAC and groundwater flow and heat transport simulation. The particular strength of OptGSHP is its integration of heat transport due to groundwater flow into the design, an effect for which most GSHP designs take no credit and are therefore overdesigned.

  18. Experimental methods for the analysis of optimization algorithms

    CERN Document Server

    Bartz-Beielstein, Thomas; Paquete, Luis; Preuss, Mike

    2010-01-01

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on diffe

  19. Optimal design of active EMC filters

    Science.gov (United States)

    Chand, B.; Kut, T.; Dickmann, S.

    2013-07-01

    A recent trend in the automotive industry is the addition of electrical drive systems to conventional drives. This electrification allows an expansion of energy sources and provides great opportunities for environmentally friendly mobility. The electrical powertrain and its components can also cause disturbances that couple into nearby electronic control units and communication cables, so that communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible. One possibility is to use EMC filters. However, the diversity of filters is very large, and the determination of an appropriate filter for each application is time-consuming. Therefore, the filter design is determined by using a simulation tool that includes an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.

  20. Synthesis and design of optimal biorefinery

    DEFF Research Database (Denmark)

    Cheali, Peam

    ...environment. These challenges motivate the development of sustainable technologies for processing renewable feedstock for the production of fuels, chemicals and materials in what is commonly known as a biorefinery. The biorefinery concept is a term to describe one or more processes which produce various products from bio-based feedstock. Since there are several bio-based feedstock sources, this has motivated development of different conversion concepts producing various desired products. This results in a number of challenges for the synthesis and design of the optimal biorefinery concept at the early... analysed to enable risk-aware decision making. The application of the developed analysis and decision support toolbox is highlighted through relevant biorefinery case studies: bioethanol, biogasoline or biodiesel production; algal biorefinery; and bioethanol-upgrading concepts are presented. This development...

  1. A statistical approach to optimizing concrete mixture design.

    Science.gov (United States)

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
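
    A compact sketch of the workflow described above (a full 3³ factorial, a polynomial response model, and optimization over the fitted surface) is given below. The synthetic strength response and the fitted coefficients are placeholders used only to demonstrate the steps; they are not the paper's measured data.

        import itertools
        import numpy as np

        # Factor levels from the abstract: w/cm ratio, cementitious content (kg/m3),
        # fine/total aggregate ratio.
        wcm  = [0.38, 0.43, 0.48]
        cm   = [350.0, 375.0, 400.0]
        fa   = [0.35, 0.40, 0.45]
        runs = np.array(list(itertools.product(wcm, cm, fa)))   # 27 mixtures (3^3)

        # Placeholder "compressive strength" response used only to demonstrate the fit.
        rng = np.random.default_rng(0)
        y = 90 - 80*runs[:, 0] + 0.05*runs[:, 1] - 10*runs[:, 2] + rng.normal(0, 1.0, len(runs))

        def quadratic_design_matrix(X):
            x1, x2, x3 = X.T
            return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                    x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

        coef, *_ = np.linalg.lstsq(quadratic_design_matrix(runs), y, rcond=None)
        print("fitted polynomial coefficients:", np.round(coef, 3))

        # Grid search over the factor space for the mixture maximizing predicted strength.
        grid = np.array(list(itertools.product(np.linspace(0.38, 0.48, 11),
                                               np.linspace(350, 400, 11),
                                               np.linspace(0.35, 0.45, 11))))
        pred = quadratic_design_matrix(grid) @ coef
        print("predicted-optimal mixture:", np.round(grid[np.argmax(pred)], 3))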

  2. A Statistical Approach to Optimizing Concrete Mixture Design

    Directory of Open Access Journals (Sweden)

    Shamsad Ahmad

    2014-01-01

    Full Text Available A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.

  3. Statistical experimental design for saltstone mixtures

    International Nuclear Information System (INIS)

    Harris, S.P.; Postles, R.L.

    1991-01-01

    We used a mixture experimental design for determining a window of operability for a process at the Savannah River Site Defense Waste Processing Facility (DWPF). The high-level radioactive waste at the Savannah River Site is stored in large underground carbon steel tanks. The waste consists of a supernate layer and a sludge layer. ¹³⁷Cs will be removed from the supernate by precipitation and filtration. After further processing, the supernate layer will be fixed as a grout for disposal in concrete vaults. The remaining precipitate will be processed at the DWPF with treated waste tank sludge and glass-making chemicals into borosilicate glass. The leach rate properties of the supernate grout, formed from various mixes of solidified salt waste, needed to be determined. The effective diffusion coefficients for NO₃ and Cr were used as a measure of leach rate. Various mixes of cement, Ca(OH)₂, salt, slag and flyash were used. These constituents comprise the whole mix. Thus, a mixture experimental design was used
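
    Because the constituents of a mixture must sum to one, mixture experiments are usually laid out on a simplex lattice rather than a factorial grid. The sketch below generates such a lattice for the five constituents named above; the {q, m} lattice choice is an illustrative assumption, not the authors' actual saltstone design.

        import itertools
        import numpy as np

        def simplex_lattice(q, m):
            # All {q, m} simplex-lattice blends: q components whose proportions are
            # multiples of 1/m and sum to one (the standard mixture-design layout).
            pts = [np.array(c) / m for c in itertools.product(range(m + 1), repeat=q)
                   if sum(c) == m]
            return np.array(pts)

        # Five-component mix (cement, Ca(OH)2, salt, slag, fly ash) on a {5, 3} lattice.
        design = simplex_lattice(q=5, m=3)
        print(design.shape[0], "candidate blends; first few:")
        print(np.round(design[:5], 3))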

  4. Statistical experimental design for saltstone mixtures

    International Nuclear Information System (INIS)

    Harris, S.P.; Postles, R.L.

    1992-01-01

    The authors used a mixture experimental design for determining a window of operability for a process at the U.S. Department of Energy, Savannah River Site, Defense Waste Processing Facility (DWPF). The high-level radioactive waste at the Savannah River Site is stored in large underground carbon steel tanks. The waste consists of a supernate layer and a sludge layer. Cesium-137 will be removed from the supernate by precipitation and filtration. After further processing, the supernate layer will be fixed as a grout for disposal in concrete vaults. The remaining precipitate will be processed at the DWPF with treated waste tank sludge and glass-making chemicals into borosilicate glass. The leach-rate properties of the supernate grout formed from various mixes of solidified salt waste needed to be determined; the effective diffusion coefficients for NO₃ and chromium were used as a measure of leach rate. Various mixes of cement, Ca(OH)₂, salt, slag, and fly ash were used. These constituents comprise the whole mix. Thus, a mixture experimental design was used. The regression procedure (PROC REG) in SAS was used to produce analysis of variance (ANOVA) statistics. In addition, detailed model diagnostics are readily available for identifying suspicious observations. For convenience, trilinear contour (TLC) plots, a standard graphics tool for examining mixture response surfaces, of the fitted model were produced using ECHIP

  5. Optimal experiment design for identification of grey-box models

    DEFF Research Database (Denmark)

    Sadegh, Payman; Melgaard, Henrik; Madsen, Henrik

    1994-01-01

    Optimal experiment design is investigated for stochastic dynamic systems where the prior partial information about the system is given as a probability distribution function in the system parameters. The concept of information is related to entropy reduction in the system through Lindley's measure of average information, and the relationship between the choice of information related criteria and some estimators (MAP and MLE) is established. A continuous time physical model of the heat dynamics of a building is considered, and the results show that performing an optimal experiment corresponding to a MAP estimation results in a considerable reduction of the experimental length. Besides, it is established that the physical knowledge of the system enables us to design experiments with the goal of maximizing information about the physical parameters of interest.

  6. Set membership experimental design for biological systems

    Directory of Open Access Journals (Sweden)

    Marvel Skylar W

    2012-03-01

    Full Text Available Abstract Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This
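
    A rough sketch of the bounded-error design idea, under strong simplifications: the parameter box is sampled instead of being propagated by interval analysis, a first-order decay model stands in for the biological system, and the width of the predicted output range relative to the measurement error bound is used as a crude proxy for the set-reduction metrics developed in the paper.

        import numpy as np

        def simulate(p, t):
            # Simple first-order decay model x(t) = x0 * exp(-k t); p = (x0, k).
            x0, k = p
            return x0 * np.exp(-k * t)

        # Bounded (box) description of parameter uncertainty: x0 in [0.8, 1.2], k in [0.2, 0.6].
        rng = np.random.default_rng(0)
        samples = rng.uniform([0.8, 0.2], [1.2, 0.6], size=(2000, 2))

        candidate_times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
        meas_error = 0.05                      # half-width of the bounded measurement error

        # Predicted measurement range at each candidate time under the current parameter set.
        pred = np.array([simulate(p, candidate_times) for p in samples])   # (2000, 5)
        width = pred.max(axis=0) - pred.min(axis=0)

        # Discrimination score: how many error-bound widths fit inside the predicted range.
        score = width / (2 * meas_error)
        print(dict(zip(candidate_times.tolist(), np.round(score, 2))))
        print("most informative candidate time:", candidate_times[np.argmax(score)])

    A candidate time whose predicted range barely exceeds the error bound cannot rule out much of the parameter set, which is the intuition behind deferring such measurements when resources are scarce.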

  7. Stress concentrations in keyways and optimization of keyway design

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2010-01-01

    Keys and keyways are one of the most common shaft–hub connections. Despite this fact, very little numerical analysis has been reported. The design is often regulated by standards that are almost half a century old, and most results reported in the literature are based on experimental photoelastic analysis. The present paper shows how numerical finite element (FE) analysis can improve the prediction of stress concentration in the keyway. Using shape optimization and the simple super elliptical shape, it is shown that the fatigue life of a keyway can be greatly improved, with up to a 50 per cent reduction in the maximum stress level. The design changes are simple and therefore practical to realize with only two active design parameters.
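
    The "simple super elliptical shape" can be sketched as the two-parameter family |x/a|^η + |y/b|^η = 1; the snippet below generates one quadrant of such a profile. Which two parameters the authors actually vary is not stated here, so treating the aspect ratio and the exponent as the design variables is an assumption of this sketch.

        import numpy as np

        def super_ellipse(a, b, eta, n=200):
            # Points on |x/a|^eta + |y/b|^eta = 1 in the first quadrant (symmetry).
            t = np.linspace(0.0, np.pi / 2, n)
            x = a * np.cos(t) ** (2.0 / eta)
            y = b * np.sin(t) ** (2.0 / eta)
            return x, y

        # eta = 2 gives an ordinary ellipse; larger eta gives a fuller, more
        # rectangle-like profile of the keyway end.
        for eta in (2.0, 3.0, 5.0):
            x, y = super_ellipse(a=1.0, b=0.5, eta=eta)
            print(f"eta={eta}: mid-quadrant point ({x[100]:.3f}, {y[100]:.3f})")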

  8. Optimization of self-microemulsifying drug delivery systems (SMEDDS) using a D-optimal design and the desirability function

    DEFF Research Database (Denmark)

    Holm, R.; Jensen, I.H.M.; Sonnergaard, Jørn

    2006-01-01

    D-optimal design and the desirability function were applied to optimize a self-microemulsifying drug delivery system (SMEDDS). The optimized key parameters were the following: 1) particle size of the dispersed emulsion, 2) solubility of the drug in the vehicle, and 3) the vehicle compatibility with the hard gelatin capsule. Three formulation variables, PEG200, a surfactant mixture, and an oil mixture, were included in the experimental design. The results of the mathematical analysis of the data demonstrated significant interactions among the formulation variables, and the desirability function...
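
    For context, a D-optimal design concentrates the experimental runs where they maximize the determinant of the information matrix of the assumed response model. The sketch below does this with a simple greedy selection over a candidate grid for a quadratic model in two factors; it is a schematic stand-in, not the software or model actually used for the formulation variables above.

        import numpy as np

        def d_optimal_greedy(candidates, n_runs, ridge=1e-6):
            # Greedy D-optimal selection: repeatedly add the candidate row that most
            # increases log det(X^T X) of the selected design (determinant lemma:
            # the gain of adding x is log(1 + x^T M^-1 x)).
            d = candidates.shape[1]
            M = ridge * np.eye(d)            # small ridge keeps M invertible early on
            chosen = []
            for _ in range(n_runs):
                Minv = np.linalg.inv(M)
                gains = np.einsum("ij,jk,ik->i", candidates, Minv, candidates)
                best = int(np.argmax(gains))
                chosen.append(best)
                x = candidates[best]
                M = M + np.outer(x, x)
            return chosen

        # Candidate runs: a grid in two coded factors expanded to a quadratic model.
        levels = np.linspace(-1, 1, 5)
        grid = np.array([[a, b] for a in levels for b in levels])
        X = np.column_stack([np.ones(len(grid)), grid, grid**2, grid[:, :1] * grid[:, 1:]])
        picked = d_optimal_greedy(X, n_runs=8)
        print("selected candidate runs:\n", grid[picked])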

  9. Research on Multidisciplinary Optimization Design of Bridge Crane

    Directory of Open Access Journals (Sweden)

    Tong Yifei

    2013-01-01

    Full Text Available The bridge crane is one of the most widely used cranes in our country and is indispensable equipment for material conveying in modern production. In this paper, a framework for the multidisciplinary optimization of bridge cranes is proposed. The presented research on crane multidisciplinary design technology for energy saving covers three levels: the metal structure level, the transmission design level, and the electrical system design level. For the metal structure level, a shape optimization mathematical model of the crane is established, as well as size and topology optimization mathematical models. Finally, system-level multidisciplinary energy-saving optimization of the bridge crane is carried out, with the energy-saving transmission design results fed back into the energy-saving optimization of the metal structure. The optimization results show that, by using finite element analysis and multidisciplinary optimization technology while respecting design requirements such as stiffness and strength, structural optimization can greatly reduce the total mass of the crane; thus, an energy-saving design can be achieved.

  10. Chip Design Process Optimization Based on Design Quality Assessment

    Science.gov (United States)

    Häusler, Stefan; Blaschke, Jana; Sebeke, Christian; Rosenstiel, Wolfgang; Hahn, Axel

    2010-06-01

    Nowadays, the managing of product development projects is increasingly challenging. Especially the IC design of ASICs with both analog and digital components (mixed-signal design) is becoming more and more complex, while the time-to-market window narrows at the same time. Still, high quality standards must be fulfilled. Projects and their status are becoming less transparent due to this complexity. This makes the planning and execution of projects rather difficult. Therefore, there is a need for efficient project control. A main challenge is the objective evaluation of the current development status. Are all requirements successfully verified? Are all intermediate goals achieved? Companies often develop special solutions that are not reusable in other projects. This makes the quality measurement process itself less efficient and produces too much overhead. The method proposed in this paper is a contribution to solve these issues. It is applied at a German design house for analog mixed-signal IC design. This paper presents the results of a case study and introduces an optimized project scheduling on the basis of quality assessment results.

  11. Design and optimization of membrane-type acoustic metamaterials

    Science.gov (United States)

    Blevins, Matthew Grant

    One of the most common problems in noise control is the attenuation of low frequency noise. Typical solutions require barriers with high density and/or thickness. Membrane-type acoustic metamaterials are a novel type of engineered material capable of high low-frequency transmission loss despite their small thickness and light weight. These materials are ideally suited to applications with strict size and weight limitations such as aircraft, automobiles, and buildings. The transmission loss profile can be manipulated by changing the micro-level substructure, stacking multiple unit cells, or by creating multi-celled arrays. To date, analysis has focused primarily on experimental studies in plane-wave tubes and numerical modeling using finite element methods. These methods are inefficient when used for applications that require iterative changes to the structure of the material. To facilitate design and optimization of membrane-type acoustic metamaterials, computationally efficient dynamic models based on the impedance-mobility approach are proposed. Models of a single unit cell in a waveguide and in a baffle, a double layer of unit cells in a waveguide, and an array of unit cells in a baffle are studied. The accuracy of the models and the validity of assumptions used are verified using a finite element method. The remarkable computational efficiency of the impedance-mobility models compared to finite element methods enables implementation in design tools based on a graphical user interface and in optimization schemes. Genetic algorithms are used to optimize the unit cell design for a variety of noise reduction goals, including maximizing transmission loss for broadband, narrow-band, and tonal noise sources. The tools for design and optimization created in this work will enable rapid implementation of membrane-type acoustic metamaterials to solve real-world noise control problems.

  12. Particle Swarm Optimization for Structural Design Problems

    Directory of Open Access Journals (Sweden)

    Hamit SARUHAN

    2010-02-01

    Full Text Available The aim of this paper is to apply the Particle Swarm Optimization (PSO) technique to a mechanical engineering design problem: minimizing the volume of a cantilevered beam subject to bending strength constraints. Mechanical engineering design problems are complex activities for which ever more computing capability is required. Most of these problems are solved by conventional mathematical programming techniques that require gradient information. These techniques have several drawbacks, the main one being the tendency to become trapped in local optima. As an alternative to gradient-based techniques, the PSO does not require the evaluation of gradients of the objective function. The PSO algorithm generates guided random positions while searching for the global optimum. PSO, a nature-inspired heuristic search technique, imitates the social behavior of bird flocking. The results obtained by the PSO are compared with Mathematical Programming (MP). It is demonstrated that the PSO achieved better convergence reliability to the global optimum than the MP. Using the MP, a volume of 2961000 mm³ was obtained, while a beam volume of 2945345 mm³ was obtained by the PSO.
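
    A self-contained sketch of the kind of PSO run described above, minimizing the volume of a cantilever's rectangular cross-section under a bending-stress limit, is given below. The load, length, allowable stress, penalty weight, and PSO coefficients are illustrative assumptions and do not reproduce the paper's data, so the volumes quoted above are not expected from this sketch.

        import numpy as np

        # Cantilever of length L under tip load F; design variables are the
        # rectangular cross-section width b and height h (all values illustrative).
        L, F, sigma_allow = 1.0, 5000.0, 200e6           # m, N, Pa

        def objective(x):
            b, h = x
            volume = b * h * L
            sigma = 6 * F * L / (b * h**2)               # max bending stress at the root
            penalty = 1e3 * max(0.0, sigma / sigma_allow - 1.0)**2
            return volume + penalty                      # penalized constraint handling

        def pso(obj, bounds, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([obj(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, obj(g)

        bounds = [(0.005, 0.10), (0.01, 0.30)]           # b and h in metres
        (b_opt, h_opt), f_opt = pso(objective, bounds)
        print(f"b={b_opt*1e3:.1f} mm, h={h_opt*1e3:.1f} mm, penalized objective={f_opt:.2e} m^3")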

  13. Experimental transport phenomena and optimization strategies for thermoelectrics

    Energy Technology Data Exchange (ETDEWEB)

    Ehrlich, A C; Gillespie, D J

    1997-07-01

    When a new and promising thermoelectric material is discovered, an effort is undertaken to improve its figure of merit. If the effort is to be more efficient than one of trial and error, with perhaps some rule-of-thumb guidance, then it is important to be able to make the connection between experimental data and the underlying material characteristics, electronic and phononic, that influence the figure of merit. Transport and fermiology data can be used to evaluate these material characteristics and thus establish trends as a function of some controllable parameter, such as composition. In this paper, some of the generic material characteristics generally believed to be required for a high figure of merit are discussed in terms of the experimental approach to their evaluation and optimization. Transport and fermiology experiments are emphasized, and both are outlined in terms of what they can reveal and what can be obscured by the simplifying assumptions generally used in their interpretation.

  14. Backbone cup – a structure design competition based on topology optimization and 3D printing

    Directory of Open Access Journals (Sweden)

    Zhu Ji-Hong

    2016-01-01

    Full Text Available This paper addresses a structure design competition based on topology optimization and 3D printing, and proposes an experimental approach to efficiently and quickly measure the mechanical performance of the structures designed using topology optimization. Since topology-optimized structure designs are prone to be geometrically complex, it is extremely inconvenient to fabricate them with traditional machining. In this study, we not only fabricated the topology-optimized structure designs using one kind of 3D printing technology known as stereolithography (SLA), but also tested the mechanical performance of the produced prototype parts. The finite element method is used to analyze the structure responses, and the consistent results of the numerical simulations and structure experiments prove the validity of this new structure testing approach. This new approach will not only provide rapid access to the verification of topology-optimized structure designs, but will also cut the turnaround time of structure design significantly.

  15. Flow cytometry: design, development and experimental validation

    International Nuclear Information System (INIS)

    Seigneur, Alain

    1987-01-01

    Flow cytometry techniques allow the analysis and sorting of living biological cells at rates above five to ten thousand events per second. After a short review, we present in this report the design and development of a 'high-tech' apparatus intended for research laboratories, together with the experimental results. The first part deals with the physical principles allowing morphologic and functional analysis of cells or cellular components. The measured parameters are as follows: electrical resistance pulse sizing, light scattering and fluorescence. Hydrodynamic centering is used, as is the division of the water stream into droplets, which enables electrostatic sorting of particles. The second part deals with the apparatus designed by the 'Commissariat a l'Energie Atomique' (C.E.A.) and industrialised by 'ODAM' (ATC 3000). The last part of this thesis work is the performance evaluation of this cytometer. The differences between the two size measurement methods, electrical resistance pulse sizing versus small-angle light scattering, are analyzed. By an original optics design, high sensitivity has been reached in the fluorescence measurement: the equivalent noise corresponds to six hundred fluorescein isothiocyanate (FITC) molecules. The sorting performance has also been analyzed and cell viability proven. (author) [fr

  16. Design of JT-60SA magnets and associated experimental validations

    International Nuclear Information System (INIS)

    Zani, L.; Barabaschi, P.; Peyrot, M.; Meunier, L.; Tomarchio, V.; Duglue, D.; Decool, P.; Torre, A.; Marechal, J.L.; Della Corte, A.; Di Zenobio, A.; Muzzi, L.; Cucchiaro, A.; Turtu, S.; Ishida, S.; Yoshida, K.; Tsuchiya, K.; Kizu, K.; Murakami, H.

    2011-01-01

    In the framework of the JT-60SA project, aiming at upgrading the present JT-60U tokamak toward a fully superconducting configuration, the detailed design phase led to the adoption of a brand new design for the three main magnet systems. Europe (EU) is expected to provide Japan (JA) with the totality of the toroidal field (TF) magnet system, while JA will provide both the equilibrium field (EF) and central solenoid (CS) systems. All magnet designs were optimized through the past years and entered in parallel into extensive experimentally-based phases of concept validation, which came to maturation in 2009 and 2010. For this, all magnet systems were investigated by means of dedicated samples, e.g. conductor and joint samples designed, manufactured and tested at full scale in ad hoc facilities either in EU or in JA. The present paper, after an overall description of the magnet system layouts, presents in a general approach the different experimental campaigns dedicated to qualifying the design and manufacturing processes of coils, conductors and electrical joints. The main results with the associated analyses are shown and the main conclusions presented, especially regarding their contribution to supporting the launch of magnet mass production. The status of the respective manufacturing stages in EU and in JA is also discussed. (authors)

  17. How to optimize hydrogen plant designs

    Energy Technology Data Exchange (ETDEWEB)

    van Weenen, W F; Tielrooy, J

    1983-01-01

    In a typical hydrogen plant of the type which will be discussed, methane or higher hydrocarbons are reformed with steam in a steam hydrocarbon reformer operating at a pressure of 250 to 400 psig, a temperature of 1500 to 1600°F, and with a ratio of steam to carbon in the feed of about 3.0. Following the reformer and cooling, there is a single stage of high temperature carbon monoxide shift conversion. Optionally, after further cooling, this may be followed by a second stage of carbon monoxide shift conversion operating at a lower temperature to obtain a more favourable equilibrium; this is called low temperature shift conversion. After cooling to ambient temperature, and separation of the condensate, the gas is passed through a Pressure Swing Adsorption (PSA) unit which removes all the impurities along with a small amount of hydrogen. The waste gas from the PSA unit, containing all the impurities, is used as fuel to the reformer. Heat is recovered from the reformer flue gas, reformer product, high temperature shift converter product and low temperature shift converter product. This paper discusses some of the process variables and design variables which must be considered in arriving at an optimized design. Seven different flow schemes are discussed in the light of the objectives they are designed for. The seven schemes and their objectives are: Flow Scheme 1 - lowest first cost, moderate efficiency; Flow Scheme 2 - high efficiency, low cost; Flow Scheme 3 - low feed plus fuel, moderately high efficiency; Flow Scheme 4 - lowest feed plus fuel; Flow Scheme 5 - lowest feed, low fuel; Flow Scheme 6 - lowest feed, highest efficiency; and Flow Scheme 7 - lowest feed plus fuel, export electric power instead of export steam. 15 figures, 1 table.

  18. Design and optimization of a brachytherapy robot

    Science.gov (United States)

    Meltsner, Michael A.

    Trans-rectal ultrasound guided (TRUS) low dose rate (LDR) interstitial brachytherapy has become a popular procedure for the treatment of prostate cancer, the most common type of non-skin cancer among men. The current TRUS technique of LDR implantation may result in less than ideal coverage of the tumor with increased risk of negative response such as rectal toxicity and urinary retention. This technique is limited by the skill of the physician performing the implant, the accuracy of needle localization, and the inherent weaknesses of the procedure itself. The treatment may require 100 or more sources and 25 needles, compounding the inaccuracy of the needle localization procedure. A robot designed for prostate brachytherapy may increase the accuracy of needle placement while minimizing the effect of physician technique in the TRUS procedure. Furthermore, a robot may improve associated toxicities by utilizing angled insertions and freeing implantations from constraints applied by the 0.5 cm-spaced template used in the TRUS method. Within our group, Lin et al. have designed a new type of LDR source. The "directional" source is a seed designed to be partially shielded. Thus, a directional, or anisotropic, source does not emit radiation in all directions. The source can be oriented to irradiate cancerous tissues while sparing normal ones. This type of source necessitates a new, highly accurate method for localization in 6 degrees of freedom. A robot is the best way to accomplish this task accurately. The following presentation of work describes the invention and optimization of a new prostate brachytherapy robot that fulfills these goals. Furthermore, some research has been dedicated to the use of the robot to perform needle insertion tasks (brachytherapy, biopsy, RF ablation, etc.) in nearly any other soft tissue in the body. This can be accomplished with the robot combined with automatic, magnetic tracking.

  19. A Review of Design Optimization Methods for Electrical Machines

    Directory of Open Access Journals (Sweden)

    Gang Lei

    2017-11-01

    Full Text Available Electrical machines are the hearts of many appliances, industrial equipment and systems. In the context of global sustainability, they must fulfill various requirements, not only physically and technologically but also environmentally. Therefore, their design optimization process becomes more and more complex as more engineering disciplines/domains and constraints are involved, such as electromagnetics, structural mechanics and heat transfer. This paper aims to present a review of the design optimization methods for electrical machines, including design analysis methods and models, optimization models, algorithms and methods/strategies. Several efficient optimization methods/strategies are highlighted with comments, including surrogate-model based and multi-level optimization methods. In addition, two promising and challenging topics in both academic and industrial communities are discussed, and two novel optimization methods are introduced for advanced design optimization of electrical machines. First, a system-level design optimization method is introduced for the development of advanced electric drive systems. Second, a robust design optimization method based on the design for six-sigma technique is introduced for high-quality manufacturing of electrical machines in production. Meanwhile, a proposal is presented for the development of a robust design optimization service based on industrial big data and cloud computing services. Finally, five future directions are proposed, including a smart design optimization method for the future intelligent design and production of electrical machines.
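
    As a minimal illustration of the surrogate-model based strategy highlighted in this review, the sketch below fits a quadratic response surface to a handful of evaluations of an expensive objective and refines it sequentially. The two design variables and the objective function are made-up placeholders for a real machine-design simulation, not a method taken from the reviewed papers.

    ```python
    # Minimal surrogate-assisted optimization loop (illustrative only).
    # "expensive_objective" stands in for a costly electromagnetic or
    # finite-element evaluation of a machine design; it is a made-up function.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def expensive_objective(x):
        # Hypothetical stand-in for a simulation-based loss/torque metric.
        return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.4) ** 2 + 0.1 * np.sin(5 * x[0])

    def fit_quadratic_surrogate(X, y):
        # Least-squares fit of a full quadratic response surface in 2 variables.
        A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                             X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def surrogate(x, c):
        return (c[0] + c[1] * x[0] + c[2] * x[1]
                + c[3] * x[0] ** 2 + c[4] * x[1] ** 2 + c[5] * x[0] * x[1])

    bounds = [(-1.0, 1.0), (-1.0, 1.0)]
    X = rng.uniform(-1, 1, size=(8, 2))            # initial space-filling samples
    y = np.array([expensive_objective(x) for x in X])

    for _ in range(10):                            # sequential refinement
        c = fit_quadratic_surrogate(X, y)
        res = minimize(surrogate, x0=X[np.argmin(y)], args=(c,), bounds=bounds)
        X = np.vstack([X, res.x])                  # evaluate the true model once
        y = np.append(y, expensive_objective(res.x))

    print("best design:", X[np.argmin(y)], "objective:", y.min())
    ```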

  20. Automated magnetic divertor design for optimal power exhaust

    Energy Technology Data Exchange (ETDEWEB)

    Blommaert, Maarten

    2017-07-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration hereby 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved to evolve from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint-based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation
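
    The optimization above rests on adjoint-based sensitivities. The toy sketch below shows the generic pattern for a discretized linear state equation K(p)u = f with scalar objective J = cᵀu: one forward solve plus one adjoint solve gives dJ/dp, checked against a finite difference. The matrices are random stand-ins, not the plasma edge transport model used in the dissertation.

    ```python
    # Adjoint sensitivity for a generic discretized state equation K(p) u = f
    # with objective J(u) = c^T u. Toy problem only, not the plasma edge code.
    import numpy as np

    n = 5
    rng = np.random.default_rng(1)
    K0 = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
          + np.diag(-np.ones(n - 1), -1))
    dKdp = np.diag(rng.uniform(0.1, 0.5, n))   # assumed dependence K(p) = K0 + p*dKdp
    f = np.ones(n)
    c = rng.uniform(size=n)
    p = 0.3

    K = K0 + p * dKdp
    u = np.linalg.solve(K, f)                  # forward (state) solve
    lam = np.linalg.solve(K.T, c)              # single adjoint solve
    dJdp_adjoint = -lam @ (dKdp @ u)           # dJ/dp = -lambda^T (dK/dp) u

    # finite-difference check
    eps = 1e-6
    u_eps = np.linalg.solve(K0 + (p + eps) * dKdp, f)
    dJdp_fd = (c @ u_eps - c @ u) / eps
    print(dJdp_adjoint, dJdp_fd)
    ```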

  1. Automated magnetic divertor design for optimal power exhaust

    International Nuclear Information System (INIS)

    Blommaert, Maarten

    2017-01-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration hereby 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved to evolve from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint-based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation. These flaws

  2. Space tourism optimized reusable spaceplane design

    Energy Technology Data Exchange (ETDEWEB)

    Penn, J.P.; Lindley, C.A. [The Aerospace Corporation, El Segundo, California 90245-4691 (United States)]

    1997-01-01

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240 per pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50 per pound ($110/kg) or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to also satisfy the traditional spacelift market is shown. © 1997 American Institute of Physics.

  3. Numerical simulation and optimized design of cased telescoped ammunition interior ballistic

    Directory of Open Access Journals (Sweden)

    Jia-gang Wang

    2018-04-01

    Full Text Available In order to achieve an optimized cased telescoped ammunition (CTA) interior ballistic design, a genetic algorithm was coupled with the CTA interior ballistic model. Given the interior ballistic characteristics of a CTA gun, the goal of CTA interior ballistic design is to obtain a projectile velocity as large as possible. The optimal design of the CTA interior ballistics is carried out using the genetic algorithm by constraining the peak pressure and varying the chamber volume and gunpowder charge density. A numerical simulation of the interior ballistics based on a 35 mm CTA firing experimental scheme was conducted, and the genetic algorithm was then used for numerical optimization. The projectile muzzle velocity of the optimized scheme is increased from 1168 m/s for the initial experimental scheme to 1182 m/s. Four optimization schemes were then obtained from several independent optimization runs. The schemes were compared with each other and the differences between them are small; the peak pressure and muzzle velocity of these schemes are almost the same. The result shows that the genetic algorithm is effective in the optimal design of CTA interior ballistics. This work lays the foundation for further CTA interior ballistic design. Keywords: Cased telescoped ammunition, Interior ballistics, Gunpowder, Optimization genetic algorithm
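
    A bare-bones genetic algorithm of the kind described above is sketched below, searching over chamber volume and charge density with a peak-pressure penalty. The ballistic model, variable bounds and pressure limit are fabricated surrogates, not the paper's lumped-parameter interior ballistic code.

    ```python
    # Genetic-algorithm sketch for a two-variable interior ballistic search.
    # "ballistic_model" is an invented surrogate returning (velocity, pressure).
    import numpy as np

    rng = np.random.default_rng(0)
    P_MAX = 450.0                               # assumed peak-pressure limit, MPa

    def ballistic_model(vol, rho):
        v = 900.0 + 400.0 * rho / (1.0 + 0.5 * vol)   # muzzle velocity, m/s
        p = 200.0 + 600.0 * rho / vol                  # peak pressure, MPa
        return v, p

    def fitness(ind):
        v, p = ballistic_model(*ind)
        return v - 1e3 * max(0.0, p - P_MAX)    # penalise constraint violation

    bounds = np.array([[0.5, 2.0],              # chamber volume (assumed range)
                       [0.4, 1.2]])             # charge density (assumed range)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

    for gen in range(100):
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-20:]]                 # truncation selection
        children = []
        for _ in range(20):
            a, b = parents[rng.integers(20, size=2)]
            w = rng.uniform(size=2)
            child = w * a + (1 - w) * b                      # blend crossover
            child += rng.normal(0, 0.02, size=2)             # Gaussian mutation
            children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("best (volume, density):", best, "velocity:", ballistic_model(*best)[0])
    ```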

  4. Application of factorial designs and Doehlert matrix in optimization of experimental variables associated with the preconcentration and determination of vanadium and copper in seawater by inductively coupled plasma optical emission spectrometry

    International Nuclear Information System (INIS)

    Ferreira, Sergio L.C.; Queiroz, Adriana S.; Fernandes, Marcelo S.; Santos, Hilda C. dos

    2002-01-01

    In the present paper a procedure for preconcentration and determination of vanadium and copper in seawater using inductively coupled plasma optical emission spectrometry (ICP OES) is proposed, which is based on solid-phase extraction of vanadium (IV), vanadium (V) and copper (II) ions as 1-(2-pyridylazo)-2-naphthol (PAN) complexes by active carbon. The optimization process was carried out using two-level full factorial and Doehlert matrix designs. Four variables (PAN mass, pH, active carbon mass and shaking time) were regarded as factors in the optimization. Results of the two-level full factorial design 2^4 with 16 runs for vanadium extraction, based on the analysis of variance (ANOVA), demonstrated that the factors pH and active carbon mass, besides the interaction (pH × active carbon mass), are statistically significant. For copper, the ANOVA revealed that the factors PAN mass, pH and active carbon mass and the interactions (PAN mass × pH) and (pH × active carbon mass) are statistically significant. Doehlert designs were applied in order to determine the optimum conditions for extraction. The proposed procedure allowed the determination of vanadium and copper with detection limits (3σ/S) of 73 and 94 ng l⁻¹, respectively. The precision, calculated as relative standard deviation (R.S.D.), was 1.22 and 1.37% for 12.50 μg l⁻¹ of vanadium and copper, respectively. The preconcentration factor was 80. The recovery achieved for the determination of vanadium and copper in the presence of several cations demonstrated that this procedure provides the selectivity required for seawater analysis. The procedure was applied to the determination of vanadium and copper in seawater samples collected in Salvador City, Brazil. Results showed good agreement with other data reported in the literature
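
    The screening step above can be reproduced in miniature: the sketch below builds the 16-run 2^4 coded design and estimates main effects by least squares. The response values are synthetic, not the published extraction data.

    ```python
    # Two-level full factorial (2^4, 16 runs) with main effects estimated
    # by least squares on coded (-1/+1) factors. Responses are synthetic.
    import itertools
    import numpy as np

    factors = ["PAN mass", "pH", "active carbon mass", "shaking time"]
    design = np.array(list(itertools.product([-1, 1], repeat=4)))   # 16 coded runs

    rng = np.random.default_rng(3)
    # synthetic response: pH and carbon mass matter, plus their interaction
    y = (70 + 8 * design[:, 1] + 6 * design[:, 2]
         + 4 * design[:, 1] * design[:, 2] + rng.normal(0, 1.0, 16))

    X = np.column_stack([np.ones(16), design])          # intercept + main effects
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    for name, b in zip(["intercept"] + factors, coef):
        effect = b if name == "intercept" else 2 * b    # effect = 2 x coefficient
        print(f"{name:20s} estimate = {effect: .2f}")
    ```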

  5. Optimization of minoxidil microemulsions using fractional factorial design approach.

    Science.gov (United States)

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs using a desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug and limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3) and limonene concentration (X4) on the MX solubility (Y1), permeation flux (Y2), lag time (Y3) and deposition (Y4) of the MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1 and X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained for the predicted values of responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with high determination coefficients. Using the desirability function, the optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was obtained at low levels of X1, X2 and X4 and a high level of X3.

  6. A surrogate based multistage-multilevel optimization procedure for multidisciplinary design optimization

    OpenAIRE

    Yao, W.; Chen, X.; Ouyang, Q.; Van Tooren, M.

    2011-01-01

    Optimization procedure is one of the key techniques to address the computational and organizational complexities of multidisciplinary design optimization (MDO). Motivated by the idea of synthetically exploiting the advantage of multiple existing optimization procedures and meanwhile complying with the general process of satellite system design optimization in conceptual design phase, a multistage-multilevel MDO procedure is proposed in this paper by integrating multiple-discipline-feasible (M...

  7. Optimal design of tests for heat exchanger fouling identification

    International Nuclear Information System (INIS)

    Palmer, Kyle A.; Hale, William T.; Such, Kyle D.; Shea, Brian R.; Bollas, George M.

    2016-01-01

    Highlights: • Built-in test design that optimizes the information extractable from the said test. • Method minimizes the covariance of a fault with system uncertainty. • Method applied for the identification and quantification of heat exchanger fouling. • Heat exchanger fouling is identifiable despite the uncertainty in inputs and states. - Abstract: Particulate fouling in plate fin heat exchangers of aircraft environmental control systems is a recurring issue in environments rich in foreign object debris. Heat exchanger fouling detection, in terms of quantification of its severity, is critical for aircraft maintenance scheduling and safe operation. In this work, we focus on methods for offline fouling detection during aircraft ground handling, where the allowable variability range of admissible inputs is wider. We explore methods of optimal experimental design to estimate heat exchanger inputs and input trajectories that maximize the identifiability of fouling. In particular, we present a methodology in which D-optimality is used as a criterion for statistically significant inference of heat exchanger fouling in uncertain environments. The optimal tests are designed on the basis of a heat exchanger model of the inherent mass, energy and momentum balances, validated against literature data. The model is then used to infer sensitivities of the heat exchanger outputs with respect to fouling metrics and maximize them by manipulating input trajectories, thus enhancing the accuracy in quantifying the fouling extent. The proposed methodology is evaluated with statistical indices of the confidence in estimating thermal fouling resistance at uncertain operating conditions, explored in a series of case studies.
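
    A minimal version of the D-optimality idea described here is sketched below: each candidate input trajectory is scored by the determinant of a sensitivity-based Fisher information matrix, and the trajectory with the largest determinant is selected. The two-parameter response model is a generic stand-in, not the heat exchanger balance model.

    ```python
    # D-optimal selection among candidate input trajectories using the
    # determinant of the Fisher information built from output sensitivities.
    import numpy as np

    def sensitivities(u):
        # d(output)/d(theta) for a toy model y = theta0*u + theta1*u**2
        # (independent of theta because the model is linear in its parameters)
        return np.column_stack([u, u ** 2])

    candidate_trajectories = [np.linspace(0.1, 1.0, 10),
                              np.linspace(0.5, 2.0, 10),
                              np.full(10, 1.5)]

    best, best_det = None, -np.inf
    for k, u in enumerate(candidate_trajectories):
        S = sensitivities(u)          # N x p sensitivity matrix
        F = S.T @ S                   # Fisher information (unit noise assumed)
        d = np.linalg.det(F)
        print(f"trajectory {k}: det(F) = {d:.3f}")
        if d > best_det:
            best, best_det = k, d
    print("D-optimal choice:", best)
    ```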

  8. Application of surrogate-based global optimization to aerodynamic design

    CERN Document Server

    Pérez, Esther

    2016-01-01

    Aerodynamic design, like many other engineering applications, is increasingly relying on computational power. The growing need for multi-disciplinarity and high fidelity in design optimization for industrial applications requires a huge number of repeated simulations in order to find an optimal design candidate. The main drawback is that each simulation can be computationally expensive – this becomes an even bigger issue when used within parametric studies, automated search or optimization loops, which typically may require thousands of analysis evaluations. The core issue of a design-optimization problem is the search process involved. However, when facing complex problems, the high dimensionality of the design space and the high multi-modality of the target functions cannot be tackled with standard techniques. In recent years, global optimization using meta-models has been widely applied to design exploration in order to rapidly investigate the design space and find sub-optimal solutions. Indeed, surrogat...

  9. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    Science.gov (United States)

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework by considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.
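
    The common computational core of such Pareto-based design is extracting the non-dominated set from a pool of candidate designs. The sketch below does this for randomly generated two-objective scores, which stand in for the metrics produced by the mixed-integer dynamic optimization.

    ```python
    # Extracting the Pareto-optimal (non-dominated) subset from candidate
    # designs scored on two objectives to be minimized. Scores are placeholders.
    import numpy as np

    rng = np.random.default_rng(7)
    objectives = rng.uniform(size=(50, 2))     # 50 designs, 2 objectives (minimise)

    def is_dominated(p, others):
        # p is dominated if another point is <= in all objectives and < in one
        return np.any(np.all(others <= p, axis=1) & np.any(others < p, axis=1))

    pareto_mask = np.array([
        not is_dominated(objectives[i], np.delete(objectives, i, axis=0))
        for i in range(len(objectives))
    ])
    pareto_set = objectives[pareto_mask]
    print(f"{pareto_mask.sum()} non-dominated designs out of {len(objectives)}")
    ```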

  10. The role of experimental typography in designing logotypes

    OpenAIRE

    Pogačnik, Tadeja

    2014-01-01

    Designing logotypes is an important part of graphic design. Great logotypes are designed using custom made typefaces. Therefore, it is very important, especially for the typographic designer, to have practical experience and to be up to date with trends in the field of experimental typeface design, also called experimental typography. In my thesis, I examined the problems of experimental typography, which allows more creative and free typographic design for different ...

  11. Crashworthiness design optimization using multipoint sequential linear programming

    NARCIS (Netherlands)

    Etman, L.F.P.; Adriaens, J.M.T.A.; Slagmaat, van M.T.P.; Schoofs, A.J.G.

    1996-01-01

    A design optimization tool has been developed for the crash victim simulation software MADYMO. The crashworthiness optimization problem is characterized by noisy behaviour of the objective function and constraints. Additionally, objective function and constraint values follow from a computationally

  12. Design Optimization of Space Launch Vehicles Using a Genetic Algorithm

    National Research Council Canada - National Science Library

    Bayley, Douglas J

    2007-01-01

    .... A genetic algorithm (GA) was employed to optimize the design of the space launch vehicle. A cost model was incorporated into the optimization process with the goal of minimizing the overall vehicle cost...

  13. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav

    2018-01-17

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.
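
    For context, the sketch below runs the classical multiplicative weight-update algorithm for a D-optimal approximate design on a finite candidate set, a simpler relative of the exchange methods discussed above (it is not the REX algorithm itself). For a quadratic regression model on [-1, 1], the weights concentrate near the three points -1, 0, 1 with weights of roughly 1/3 each.

    ```python
    # Multiplicative algorithm for a D-optimal approximate design on a grid of
    # candidate points for the model y = b0 + b1*x + b2*x^2 (not REX).
    import numpy as np

    x_cand = np.linspace(-1, 1, 21)
    F = np.column_stack([np.ones_like(x_cand), x_cand, x_cand ** 2])  # rows f(x)
    n, p = F.shape
    w = np.full(n, 1.0 / n)                                           # uniform start

    for _ in range(500):
        M = F.T @ (w[:, None] * F)                                    # information matrix
        d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)          # variance function
        w *= d / p                 # multiplicative update; total weight is preserved
        w /= w.sum()               # renormalisation guards against round-off

    support = w > 1e-2
    print("support points:", x_cand[support])   # expect concentration near -1, 0, 1
    print("weights:", w[support])
    ```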

  14. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav; Filová , Lenka; Richtarik, Peter

    2018-01-01

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.

  15. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    Science.gov (United States)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  16. Design of Passive Acoustic Wave Shaping Devices and Their Experimental Validation

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Sigmund, Ole; Fernandez Grande, Efren

    We discuss a topology optimization based approach for designing passive acoustic wave shaping devices and demonstrate its application to directional sound emission [1], sound focusing and wave splitting. Optimized devices, numerical and experimental results are presented and benchmarked against other designs proposed in the literature. We focus on design problems where the size of the device is on the order of the wavelength, a problematic region for traditional design methods, such as ray tracing. The acoustic optimization problem is formulated in the frequency domain and modeled...

  17. Review of design optimization methods for turbomachinery aerodynamics

    Science.gov (United States)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

    In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and 'greener' but also need to be developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and design of experiment methods, (3) gradient-based optimization methods for compressors and turbines, and (4) data mining techniques for Pareto fronts. We also present our own insights regarding current research trends and the future optimization of turbomachinery designs.

  18. Optimal Learning for Efficient Experimentation in Nanotechnology and Biochemistry

    Science.gov (United States)

    2015-12-22

    AFRL-AFOSR-VA-TR-2016-0018: Optimal Learning for Efficient Experimentation in Nanotechnology and Biochemistry. Warren Powell, Trustees of Princeton... Grant FA9550-12-1-0200, program element 61102F. Subject terms: Biochemistry.

  19. Chemical optimization algorithm for fuzzy controller design

    CERN Document Server

    Astudillo, Leslie; Castillo, Oscar

    2014-01-01

    In this book, a novel optimization method inspired by a paradigm from nature is introduced. The chemical reactions are used as a paradigm to propose an optimization method that simulates these natural processes. The proposed algorithm is described in detail and then a set of typical complex benchmark functions is used to evaluate the performance of the algorithm. Simulation results show that the proposed optimization algorithm can outperform other methods in a set of benchmark functions. This chemical reaction optimization paradigm is also applied to solve the tracking problem for the dynamic model of a unicycle mobile robot by integrating a kinematic and a torque controller based on fuzzy logic theory. Computer simulations are presented confirming that this optimization paradigm is able to outperform other optimization techniques applied to this particular robot application

  20. Configurable intelligent optimization algorithm design and practice in manufacturing

    CERN Document Server

    Tao, Fei; Laili, Yuanjun

    2014-01-01

    Presenting the concept and design and implementation of configurable intelligent optimization algorithms in manufacturing systems, this book provides a new configuration method to optimize manufacturing processes. It provides a comprehensive elaboration of basic intelligent optimization algorithms, and demonstrates how their improvement, hybridization and parallelization can be applied to manufacturing. Furthermore, various applications of these intelligent optimization algorithms are exemplified in detail, chapter by chapter. The intelligent optimization algorithm is not just a single algorit

  1. Integrated topology and shape optimization in structural design

    Science.gov (United States)

    Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.

    1990-01-01

    Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.

  2. Optimal Smooth Consumption and Annuity Design

    DEFF Research Database (Denmark)

    Bruhn, Kenneth; Steffensen, Mogens

    2013-01-01

    We propose an optimization criterion that yields extraordinary consumption smoothing compared to the well known results of the life-cycle model. Under this criterion we solve the related consumption and investment optimization problem faced by individuals with preferences for intertemporal stability in consumption. We find that the consumption and investment patterns demanded under the optimization criterion are in general offered as annuity benefits from products in the class of ‘Formula Based Smoothed Investment-Linked Annuities’.

  3. Optimization Design and Application of Underground Reinforced Concrete Bifurcation Pipe

    Directory of Open Access Journals (Sweden)

    Chao Su

    2015-01-01

    Full Text Available The underground reinforced concrete bifurcation pipe is an important part of a conveyance structure. During construction, the workload of excavation and concrete pouring can be significantly decreased by optimizing the pipe structure, and the engineering quality can be improved. This paper presents an optimization mathematical model of the underground reinforced concrete bifurcation pipe structure according to the real working status of several common pipe structures from real cases. An optimization design system was then developed based on the particle swarm optimization (PSO) algorithm. Furthermore, taking the bifurcation pipe of a hydropower station as an example, optimization analysis was conducted, and the accuracy and stability of the optimization design system were verified successfully.
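
    A bare-bones particle swarm optimization loop of the kind the design system is built on is sketched below; the objective function is a generic placeholder for the structural cost/stress evaluation of the bifurcation pipe.

    ```python
    # Minimal particle swarm optimization (PSO) on a placeholder objective.
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):                  # stand-in for the structural model
        return np.sum((x - 0.7) ** 2) + 0.3 * np.sum(np.sin(3 * x) ** 2)

    dim, n_particles = 4, 30
    lb, ub = -2.0, 2.0
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]

    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(200):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print("best design vector:", gbest, "objective:", objective(gbest))
    ```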

  4. Application of colony complex algorithm to nuclear component optimization design

    International Nuclear Information System (INIS)

    Yan Changqi; Li Guijing; Wang Jianjun

    2014-01-01

    The complex algorithm (CA) has been widely applied in the field of nuclear engineering. In view of the specific features of applying the traditional complex algorithm (TCA) to the optimization design of engineering structures, an improved method, the colony complex algorithm (CCA), was developed based on the optimal combination of many complexes, in which the disadvantages of the TCA were overcome. The optimization results for benchmark functions show that the CCA has better optimizing performance than the TCA. The CCA was applied to the optimization design of a high-pressure heater, and the optimization effect is significant. (authors)

  5. Optimal trajectories for flexible-link manipulator slewing using recursive quadratic programming: Experimental verification

    International Nuclear Information System (INIS)

    Parker, G.G.; Eisler, G.R.; Feddema, J.T.

    1994-01-01

    Procedures for trajectory planning and control of flexible link robots are becoming increasingly important to satisfy performance requirements of hazardous waste removal efforts. It has been shown that utilizing link flexibility in designing open loop joint commands can result in improved performance as opposed to damping vibration throughout a trajectory. The efficient use of link compliance is exploited in this work. Specifically, experimental verification of minimum time, straight line tracking using a two-link planar flexible robot is presented. A numerical optimization process, using an experimentally verified modal model, is used for obtaining minimum time joint torque and angle histories. The optimal joint states are used as commands to the proportional-derivative servo actuated joints. These commands are precompensated for the nonnegligible joint servo actuator dynamics. Using the precompensated joint commands, the optimal joint angles are tracked with such fidelity that the tip tracking error is less than 2.5 cm

  6. Optimal Design of a Center Support Quadruple Mass Gyroscope (CSQMG)

    Directory of Open Access Journals (Sweden)

    Tian Zhang

    2016-04-01

    Full Text Available This paper reports a more complete description of the design process of the Center Support Quadruple Mass Gyroscope (CSQMG), a gyro expected to provide breakthrough performance for flat structures. The operation of the CSQMG is based on four lumped masses in a circumferentially symmetric distribution, oscillating in anti-phase motion and providing differential signal extraction. With its 4-fold symmetric axis pattern, the CSQMG achieves an operation mode similar to that of Hemispherical Resonant Gyroscopes (HRGs). Compared to the conventional flat design, four Y-shaped coupling beams are used in this new pattern in order to adjust the mode distribution and enhance the synchronization mechanism of the operation modes. For the purpose of obtaining the optimal design of the CSQMG, an applicable optimization flow is developed with a comprehensive treatment of operation mode coordination, pseudo mode inhibition, and elimination of lumped mass twisting motion. The experimental characterization of the CSQMG was performed at room temperature, and the center operation frequency is 6.8 kHz after tuning. Experiments show an Allan variance stability of 0.12°/h (@100 s) and a white noise level of about 0.72°/h/√Hz, which means that the CSQMG possesses great potential to achieve navigation grade performance.
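
    The Allan variance figures quoted above can be estimated from rate data with the short routine below; the gyro output here is simulated white noise, not CSQMG measurements, and the sampling rate is assumed.

    ```python
    # Non-overlapping Allan deviation estimate from (simulated) gyro rate data.
    import numpy as np

    def allan_deviation(rate, fs, m_list):
        """Allan deviation for averaging factors m_list (non-overlapping)."""
        taus, adevs = [], []
        for m in m_list:
            n_clusters = len(rate) // m
            if n_clusters < 2:
                continue
            clusters = rate[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
            avar = 0.5 * np.mean(np.diff(clusters) ** 2)
            taus.append(m / fs)
            adevs.append(np.sqrt(avar))
        return np.array(taus), np.array(adevs)

    fs = 100.0                                    # sample rate, Hz (assumed)
    rng = np.random.default_rng(0)
    rate = rng.normal(0.0, 0.05, size=360_000)    # 1 h of synthetic rate, deg/s

    taus, adevs = allan_deviation(rate, fs, m_list=[1, 10, 100, 1000, 10000])
    for t, a in zip(taus, adevs):
        print(f"tau = {t:8.2f} s   sigma = {a * 3600:.3f} deg/h")
    ```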

  7. On simultaneous shape and orientational design for eigenfrequency optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2007-01-01

    Plates with an internal hole of fixed area are designed in order to maximize the performance with respect to eigenfrequencies. The optimization is performed by simultaneous shape, material, and orientational design. The shape of the hole is designed, and the material design is the design of an orthotropic material that can be considered as a fiber-net within each finite element. This fiber-net is optimally oriented in the individual elements of the finite element discretization. The optimizations are performed using the finite element method for analysis, and the optimization approach is a two-step method. In the first step, we find the best design on the basis of a recursive optimization procedure based on optimality criteria. In the second step, mathematical programming and sensitivity analysis are applied to find the final optimized design.

  8. Software for CATV Design and Frequency Plan Optimization

    OpenAIRE

    Hala, O.

    2007-01-01

    The paper deals with the structure of a software tool used for the design and sub-optimization of frequency plans in CATV networks, and describes the tool and the design method. The software performance is described and a simple design example of the energy balance of a simplified CATV network is given. The software was created in the Delphi programming environment and local optimization was performed in Matlab.

  9. Advances in metaheuristic algorithms for optimal design of structures

    CERN Document Server

    Kaveh, A

    2017-01-01

    This book presents efficient metaheuristic algorithms for optimal design of structures. Many of these algorithms are developed by the author and his colleagues, consisting of Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, Ray Optimization. These are presented together with algorithms which were developed by other authors and have been successfully applied to various optimization problems. These consist of Particle Swarm Optimization, Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally a multi-objective optimization method is presented to solve large-scale structural problems based on the Charged System Search algorithm. The concepts and algorithms presented in this book are not only applicable to optimization of skeletal structures and finite element models, but can equally ...

  10. Advances in metaheuristic algorithms for optimal design of structures

    CERN Document Server

    Kaveh, A

    2014-01-01

    This book presents efficient metaheuristic algorithms for optimal design of structures. Many of these algorithms are developed by the author and his colleagues, consisting of Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, Ray Optimization. These are presented together with algorithms which were developed by other authors and have been successfully applied to various optimization problems. These consist of Particle Swarm Optimization, Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally a multi-objective optimization method is presented to solve large-scale structural problems based on the Charged System Search algorithm. The concepts and algorithms presented in this book are not only applicable to optimization of skeletal structures and finite element models, but can equally ...

  11. Optimization design of blade shapes for wind turbines

    DEFF Research Database (Denmark)

    Chen, Jin; Wang, Xudong; Shen, Wen Zhong

    2010-01-01

    For the optimization design of wind turbines, new normal and tangential induction factors are derived, accounting for the tip loss of the normal and tangential forces, based on blade element momentum theory and a traditional aerodynamic model. A cost model of the wind turbines and the optimization design model are developed. In the optimization model, the objective is the minimum cost of energy and the design variables are the chord length, twist angle and relative thickness. Finally, the optimization is carried out for a 2 MW blade by using this optimization design model. The performance of the blades is validated through comparison and analysis of the results. The reduced cost shows that the optimization model is good enough for the design of wind turbines. The results provide support for the design and research of blades for large scale wind turbines and also establish...

  12. Experimental Methods for the Analysis of Optimization Algorithms

    DEFF Research Database (Denmark)

    of solution quality, runtime and other measures; and the third part collects advanced methods from experimental design for configuring and tuning algorithms on a specific class of instances with the goal of using the least amount of experimentation. The contributor list includes leading scientists......, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different...

  13. Optimal color design of psychological counseling room by design of experiments and response surface methodology.

    Science.gov (United States)

    Liu, Wenjuan; Ji, Jianlin; Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients' perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients' impressions. A total of 75 patients participated: 42 in Experiment 1 and 33 in Experiment 2. In Experiment 1, 27 representative color samples were designed, and the color sample (L = 75, a = 0, b = -60) was the most preferred. In Experiment 2, this color sample was set as the 'central point', and three color attributes were optimized to maximize patient satisfaction. The experimental results show that the proposed method can obtain the optimal solution for the color design of a counseling room.
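
    The response-surface step can be illustrated with the sketch below: a second-order model is fitted to coded central-composite-design points and its optimum located numerically. The design points and satisfaction scores are synthetic, not the study's patient data.

    ```python
    # Fit a quadratic response surface to central-composite-design data and
    # locate the predicted optimum in coded factor space.
    import numpy as np
    from scipy.optimize import minimize

    alpha = np.sqrt(2)                       # axial distance for a 2-factor CCD
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],
                  [0, 0], [0, 0], [0, 0]])
    rng = np.random.default_rng(2)
    y = (8.0 - 1.5 * X[:, 0] ** 2 - 1.0 * X[:, 1] ** 2 + 0.4 * X[:, 0]
         + rng.normal(0, 0.1, len(X)))       # synthetic satisfaction scores

    def model_matrix(X):
        return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                                X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])

    b, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)

    def neg_predicted(x):                    # negate for minimization
        return -(b[0] + b[1] * x[0] + b[2] * x[1]
                 + b[3] * x[0] ** 2 + b[4] * x[1] ** 2 + b[5] * x[0] * x[1])

    opt = minimize(neg_predicted, x0=[0.0, 0.0], bounds=[(-alpha, alpha)] * 2)
    print("coded optimum:", opt.x, "predicted satisfaction:", -opt.fun)
    ```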

  14. An optimal design problem in wave propagation

    DEFF Research Database (Denmark)

    Bellido, J.C.; Donoso, Alberto

    2007-01-01

    of finding the best distributions of the two initial materials in a rod in order to minimize the vibration energy in the structure under periodic loading of driving frequency Omega. We comment on relaxation and optimality conditions, and perform numerical simulations of the optimal configurations. We also prove the existence of classical solutions in certain cases.

  15. A highly selective sorbent for removal of Cr(VI) from aqueous solutions based on Fe₃O₄/poly(methyl methacrylate) grafted Tragacanth gum nanocomposite: optimization by experimental design.

    Science.gov (United States)

    Sadeghi, Susan; Rad, Fatemeh Alavi; Moghaddam, Ali Zeraatkar

    2014-12-01

    In this work, poly(methyl methacrylate) grafted Tragacanth gum modified Fe3O4 magnetic nanoparticles (P(MMA)-g-TG-MNs) were developed for the selective removal of Cr(VI) species from aqueous solutions in the presence of Cr(III). The sorbent was characterized by Fourier transform infrared (FTIR) spectroscopy, transmission electron microscopy (TEM), a vibrating sample magnetometer (VSM), and thermo-gravimetric analysis (TGA). A screening study on operational variables was performed using a two-level full factorial design. Based on the analysis of variance (ANOVA) with a 95% confidence limit, the significant variables were found. The central composite design (CCD) has also been employed for statistical modeling and analysis of the effects and interactions of the significant variables dealing with the Cr(VI) uptake process by the developed sorbent. The predicted optimal conditions were situated at a pH of 5.5, a contact time of 3.4 h, and a sorbent dose of 3.0 g L⁻¹. The Langmuir, Freundlich, and Temkin isotherm models were used to describe the equilibrium sorption of Cr(VI) by the absorbent, and the Langmuir isotherm showed the best concordance as an equilibrium model. The adsorption process followed a pseudo-second-order kinetic model. Thermodynamic investigations showed that the biosorption process was spontaneous and exothermic. Copyright © 2014 Elsevier B.V. All rights reserved.
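
    Fitting the Langmuir isotherm reported to describe the equilibrium data best is a short exercise in nonlinear least squares, as sketched below; the concentration-uptake pairs are invented for illustration, not the published sorption data.

    ```python
    # Langmuir isotherm fit q = q_max * K * C / (1 + K * C) via nonlinear least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(C, q_max, K):
        return q_max * K * C / (1.0 + K * C)

    C = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])     # mg/L, synthetic
    q = np.array([8.1, 16.5, 26.0, 35.2, 42.0, 46.5])    # mg/g, synthetic

    (q_max, K), _ = curve_fit(langmuir, C, q, p0=[50.0, 0.05])
    residuals = q - langmuir(C, q_max, K)
    r2 = 1 - np.sum(residuals ** 2) / np.sum((q - q.mean()) ** 2)
    print(f"q_max = {q_max:.1f} mg/g, K = {K:.3f} L/mg, R^2 = {r2:.3f}")
    ```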

  16. Manifold Regularized Experimental Design for Active Learning.

    Science.gov (United States)

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  17. PARAMETER COORDINATION AND ROBUST OPTIMIZATION FOR MULTIDISCIPLINARY DESIGN

    Institute of Scientific and Technical Information of China (English)

    HU Jie; PENG Yinghong; XIONG Guangleng

    2006-01-01

    A new parameter coordination and robust optimization approach for multidisciplinary design is presented. Firstly, the constraints network model is established to support engineering change, coordination and optimization. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively and to enhance design robustness. Secondly, the parameter coordination method is presented to solve the constraints network model, monitor potential conflicts due to engineering changes, and obtain the consistency solution space corresponding to the given product specifications. Finally, the robust parameter optimization model is established, and a genetic algorithm is used to obtain the robust optimized parameters. An example of bogie design is analyzed to show that the scheme is effective.

  18. Optimal design of water supply networks for enhancing seismic reliability

    International Nuclear Information System (INIS)

    Yoo, Do Guen; Kang, Doosun; Kim, Joong Hoon

    2016-01-01

    The goal of the present study is to construct a reliability evaluation model of a water supply system that takes seismic hazards into consideration, and to present techniques to enhance the hydraulic reliability of the design. To maximize seismic reliability within limited budgets, an optimal design model is developed using an optimization technique called harmony search (HS). The model is applied to actual water supply systems to determine pipe diameters that maximize seismic reliability. The reliabilities of the optimal design and existing designs were compared and analyzed. The optimal design would enhance reliability by approximately 8.9% while having a construction cost approximately 1.3% lower than the current pipe construction cost. In addition, reinforcing the durability of individual pipes without considering the system produced ineffective results in terms of both cost and reliability. Therefore, to increase the supply capacity of the entire system, optimized pipe diameter combinations should be derived. Hydraulic stability under normal conditions and available demand under abnormal conditions can be maximally secured if the system is configured through the optimal design. - Highlights: • We construct a seismic reliability evaluation model of a water supply system. • We present techniques to enhance hydraulic reliability at the design stage. • The harmony search algorithm is applied in the optimal design process. • The proposed optimal design improves reliability by about 9%. • Optimized pipe diameter combinations should be derived.
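
    A compact harmony search loop over discrete pipe diameters is sketched below to illustrate the algorithm; the cost and reliability functions, candidate diameters and budget are simple placeholders for the hydraulic and seismic reliability model used in the study.

    ```python
    # Harmony search over discrete pipe diameters with a budget penalty.
    import numpy as np

    rng = np.random.default_rng(0)
    diameters = np.array([100, 150, 200, 250, 300, 350, 400])   # mm, assumed candidates
    n_pipes, hms, hmcr, par = 6, 20, 0.9, 0.3   # memory size, memory/pitch rates

    def cost(d):        return np.sum(0.01 * d ** 1.5)            # placeholder cost
    def reliability(d): return 1.0 - np.exp(-np.sum(d) / 1200.0)  # placeholder reliability

    def objective(d):
        budget = 250.0                          # assumed budget
        return reliability(d) - 10.0 * max(0.0, cost(d) - budget)

    memory = rng.choice(diameters, size=(hms, n_pipes))
    scores = np.array([objective(h) for h in memory])

    for _ in range(2000):
        new = np.empty(n_pipes)
        for j in range(n_pipes):
            if rng.random() < hmcr:                    # pick from harmony memory
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                 # pitch adjust to neighbour size
                    idx = np.searchsorted(diameters, new[j])
                    idx = np.clip(idx + rng.choice([-1, 1]), 0, len(diameters) - 1)
                    new[j] = diameters[idx]
            else:                                      # random selection
                new[j] = rng.choice(diameters)
        s = objective(new)
        worst = np.argmin(scores)
        if s > scores[worst]:                          # replace worst harmony
            memory[worst], scores[worst] = new, s

    print("best diameters:", memory[np.argmax(scores)], "score:", scores.max())
    ```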

  19. Saponification of Jatropha curcas Seed Oil: Optimization by D-Optimal Design

    Directory of Open Access Journals (Sweden)

    Jumat Salimon

    2012-01-01

    Full Text Available In this study, the effects of ethanolic KOH concentration, reaction temperature, and reaction time on the free fatty acid (FFA) percentage were investigated. A D-optimal design was employed to study the significance of these factors, and the optimum conditions predicted by the technique were evaluated. The optimum conditions for maximum FFA% were achieved with 1.75 M ethanolic KOH as the catalyst, a reaction temperature of 65°C, and a reaction time of 2.0 h. This study showed that the ethanolic KOH concentration was a significant variable for the saponification of J. curcas seed oil. In an 18-point experimental design, the percentage of FFA for the saponification of J. curcas seed oil could be raised from 1.89% to 102.2%.

  20. The Potential Role of Cache Mechanism for Complicated Design Optimization

    International Nuclear Information System (INIS)

    Noriyasu, Hirokawa; Fujita, Kikuo

    2002-01-01

    This paper discusses the potential role of a cache mechanism for complicated design optimization. While design optimization is an application of mathematical programming techniques to engineering design problems over numerical computation, its progress has been coevolutionary. The trend of this progress indicates that more complicated applications become the next target of design optimization beyond the growth of computational resources. Just as the progress of the past two decades required response surface techniques, decomposition techniques, etc., a new framework must be introduced for the future of design optimization methods. This paper proposes the possibility of what we call a cache mechanism for mediating the coming challenge, and briefly demonstrates some of its promise through the idea of Voronoi-diagram-based cumulative approximation as an example of its implementation, the development of strict robust design, and the extension of design optimization for product variety

  1. POBE: A Computer Program for Optimal Design of Multi-Subject Blocked fMRI Experiments

    Directory of Open Access Journals (Sweden)

    Bärbel Maus

    2014-01-01

    Full Text Available For functional magnetic resonance imaging (fMRI) studies, researchers can use multi-subject blocked designs to identify active brain regions for a certain stimulus type of interest. Before performing such an experiment, careful planning is necessary to obtain efficient stimulus effect estimators within the available financial resources. The optimal number of subjects and the optimal scanning time for a multi-subject blocked design with fixed experimental costs can be determined using optimal design methods. In this paper, the user-friendly computer program POBE 1.2 (program for optimal design of blocked experiments, version 1.2) is presented. POBE provides a graphical user interface for fMRI researchers to easily and efficiently design their experiments. The computer program POBE calculates the optimal number of subjects and the optimal scanning time for user-specified experimental factors and model parameters so that the statistical efficiency is maximised for a given study budget. POBE can also be used to determine the minimum budget for a given power. Furthermore, a maximin design can be determined as an efficient design for a possible range of values for the unknown model parameters. In this paper, the computer program is described and illustrated with typical experimental factors for a blocked fMRI experiment.

  2. Space mapping optimization algorithms for engineering design

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    A simple, efficient optimization algorithm based on space mapping (SM) is presented. It utilizes input SM to reduce the misalignment between the coarse and fine models of the optimized object over a region of interest, and output space mapping (OSM) to ensure matching of response and first...... to a benchmark problem. In comparison with SMIS, the models presented are simple and have a small number of parameters that need to be extracted. The new algorithm is applied to the optimization of coupled-line band-pass filter....

  3. Optimization of experimental conditions in uranium trace determination using laser time-resolved fluorimetry

    International Nuclear Information System (INIS)

    Baly, L.; Garcia, M.A.

    1996-01-01

    In the present paper, a new sample excitation geometry is presented for uranium trace determination in aqueous solutions by time-resolved laser-induced fluorescence. The new design introduces the laser radiation through the top side of the cell, allowing the use of cells with two quartz sides, which are less expensive than those commonly used in this experimental setup. Optimization of the excitation conditions, temporal discrimination, and spectral selection are presented.

  4. Conceptual design study of fusion experimental reactor (FY86 FER)

    International Nuclear Information System (INIS)

    Kobayashi, Takeshi; Yamada, Masao; Mizoguchi, Tadanori

    1987-09-01

    This report describes the results of the reactor configuration/structure design for the fusion experimental reactor (FER) performed in FY 1986. The design was intended to meet the physical and engineering mission of the next-step device decided by the subcommittee on the next-step device of the nuclear fusion council. The objective of the design study in FY 1986 was to advance and optimize the design concept of the previous year, because the recommendation of the subcommittee was basically the same as the previous year's design philosophy. Six candidate reactor configurations, corresponding to options C ∼ D presented by the subcommittee, were extensively examined. Consequently, the ACS reactor (Advanced Option-C with Single Null Divertor) was selected as the reference configuration from the viewpoints of technical risk and cost performance. Regarding the reactor structure, the following items were investigated intensively: minimization of the reactor size, protection of the first wall against plasma disruption, simplification of the shield structure, and a reactor configuration that enables optimum arrangement of the poloidal field coils. (author)

  5. GA BASED GLOBAL OPTIMAL DESIGN PARAMETERS FOR ...

    African Journals Online (AJOL)

    Journal of Modeling, Design and Management of Engineering Systems ... DESIGN PARAMETERS FOR CONSECUTIVE REACTIONS IN SERIALLY CONNECTED ... for process equipment such as chemical reactors used in industry.

  6. Model-based Organization Manning, Strategy, and Structure Design via Team Optimal Design (TOD) Methodology

    National Research Council Canada - National Science Library

    Levchuk, Georgiy; Chopra, Kari; Paley, Michael; Levchuk, Yuri; Clark, David

    2005-01-01

    This paper describes a quantitative Team Optimal Design (TOD) methodology and its application to the design of optimized manning for E-10 Multi-sensor Command and Control Aircraft. The E-10 (USAF, 2002...

  7. Environmental indicators for industrial optimization and design

    NARCIS (Netherlands)

    Konneman, Bram

    2008-01-01

    Companies use standard financial indicators to determine their business success and optimize their business opportunities. However, sustainable development demands for an integrated approach to economic, environmental and social indicators. Although a lot of indicator initiatives are under

  8. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  9. On CAD-integrated Structural Topology and Design Optimization

    DEFF Research Database (Denmark)

    Olhoff, Niels; Bendsøe, M.P.; Rasmussen, John

    1991-01-01

    Concepts underlying an interactive CAD-based engineering design optimization system are developed, and methods of optimizing the topology, shape and sizing of mechanical components are presented. These methods are integrated in the system, and the method for determining the optimal topology is used...

  10. Data Science and Optimal Learning for Material Discovery and Design

    Science.gov (United States)

    Data Science and Optimal Learning for Material Discovery & Design: inference and optimization methods that can constrain predictions using insights and results from theory, and directions in the application of information-theoretic tools to materials problems related to learning from

  11. Global stability-based design optimization of truss structures using ...

    Indian Academy of Sciences (India)

    Furthermore, a pure Pareto-ranking based multi-objective optimization model is employed for the design optimization of the truss structure with multiple objectives. The computational performance of the optimization model is increased by implementing an island model into its evolutionary search mechanism. The proposed ...

  12. Multi-objective three stage design optimization for island microgrids

    International Nuclear Information System (INIS)

    Sachs, Julia; Sawodny, Oliver

    2016-01-01

    Highlights: • An enhanced multi-objective three stage design optimization for microgrids is given. • Use of an optimal control problem for the calculation of the optimal operation. • The inclusion of a detailed battery model with CC/CV charging control. • The determination of a representative profile with an optimized number of days. • The proposed method finds its direct application in a design tool for microgrids. - Abstract: Hybrid off-grid energy systems enable a cost-efficient and reliable energy supply to rural areas around the world. The main potential for low-cost operation and uninterrupted power supply lies in the optimal sizing and operation of such microgrids. In particular, sudden variations in load demand or in the power supply from renewables underline the need for an optimally sized system. This paper presents an efficient multi-objective, model-based optimization approach for the optimal sizing of all components and the determination of the best power electronic layout. The presented method is divided into three optimization problems to minimize economic and environmental objectives. The design optimization includes detailed component models and an optimized energy dispatch strategy, which enables the optimal design of the energy system with respect to an adequate control for the specific configuration. To significantly reduce the computation time without loss of accuracy, the presented method includes the determination of a representative load profile using a k-means clustering method; the k-means algorithm itself is embedded in an optimization problem for the calculation of the optimal number of clusters. The benefits of the presented optimization method, in terms of reduced computation time, inclusion of optimal energy dispatch and optimization of the power electronic architecture, are illustrated using a case study.
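    The representative-profile step named above can be illustrated with a k-means clustering of daily load curves. The sketch below uses scikit-learn on synthetic hourly profiles; the data and the fixed number of clusters are invented, whereas the paper embeds the choice of the number of clusters in its own optimization problem.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        hours = np.arange(24)

        # Synthetic year of daily load profiles (kW): a base daily shape plus noise.
        base = 5 + 3 * np.sin((hours - 7) / 24 * 2 * np.pi) ** 2
        days = base + rng.normal(0, 0.5, size=(365, 24))

        k = 4   # assumed number of representative days
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(days)

        # Each cluster centre is a representative day; its weight is the number of
        # real days it stands for, used when simulating annual operation.
        weights = np.bincount(km.labels_, minlength=k)
        for i, w in enumerate(weights):
            print(f"representative day {i}: weight {w}, peak {km.cluster_centers_[i].max():.2f} kW")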

  13. Design and Optimization of a Turbine Intake Structure

    Directory of Open Access Journals (Sweden)

    P. Fošumpaur

    2005-01-01

    Full Text Available The appropriate design of the turbine intake structure of a hydropower plant is based on assumptions about its intended function, and a good design increases the total efficiency of operation. This paper deals with the optimal design of the turbine intake structure of run-of-river hydropower plants. The study focuses mainly on optimization of the hydropower plant location with respect to the original river banks, and on the optimal design of a separating pier between the weir and the power plant. The optimal design of the turbine intake was determined with the use of 2-D mathematical modelling. A case study is performed for the optimal design of a turbine intake structure on the Nemen river in Belarus.

  14. Distributed optimization for systems design : an augmented Lagrangian coordination method

    NARCIS (Netherlands)

    Tosserams, S.

    2008-01-01

    This thesis presents a coordination method for the distributed design optimization of engineering systems. The design of advanced engineering systems such as aircraft, automated distribution centers, and microelectromechanical systems (MEMS) involves multiple components that together realize the

  15. Isotherms and kinetic study of ultrasound-assisted adsorption of malachite green and Pb2+ ions from aqueous samples by copper sulfide nanorods loaded on activated carbon: Experimental design optimization.

    Science.gov (United States)

    Sharifpour, Ebrahim; Khafri, Hossein Zare; Ghaedi, Mehrorang; Asfaram, Arash; Jannesar, Ramin

    2018-01-01

    Copper sulfide nanorods loaded on activated carbon (CuS-NRs-AC) were synthesized and used for simultaneous ultrasound-assisted adsorption of malachite green (MG) and Pb2+ ions from aqueous solution. Following characterization of CuS-NRs-AC by SEM, EDX, TEM and XRD, the effects of pH (2.0-10), amount of adsorbent (0.003-0.011 g), MG concentration (5-25 mg L−1), Pb2+ concentration (3-15 mg L−1) and sonication time (1.5-7.5 min), as well as their interactions, on the responses were investigated by central composite design (CCD) and response surface methodology. According to the desirability function in Design Expert, optimum removal (99.4% ± 1.0 for MG and 68.3 ± 1.8 for Pb2+ ions) was obtained at pH 6.0, 0.009 g CuS-NRs-AC, 6.0 min of mixing by sonication and 15 and 6 mg L−1 for MG and Pb2+ ions, respectively. The high determination coefficient (R2 > 0.995), Pred-R2 (>0.920) and Adj-R2 (>0.985) values all indicate good agreement between the experimental data and the design model. The adsorption kinetics follow the pseudo-second-order model and the adsorption isotherm follows the Langmuir model, with maximum adsorption capacities of 145.98 and 47.892 mg g−1 for MG and Pb2+ ions, respectively. Over a short contact time, this adsorbent is a good choice for simultaneous removal of large amounts of both MG and Pb2+ ions from wastewater samples. Copyright © 2017 Elsevier B.V. All rights reserved.
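    For readers unfamiliar with the Langmuir model cited above, the following hedged sketch fits qe = qmax·K·Ce/(1 + K·Ce) to made-up equilibrium data with SciPy; the numbers are purely illustrative and are not taken from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(Ce, qmax, K):
            """Langmuir isotherm: qe = qmax*K*Ce / (1 + K*Ce)."""
            return qmax * K * Ce / (1.0 + K * Ce)

        # Invented equilibrium data (Ce in mg/L, qe in mg/g), for illustration only.
        Ce = np.array([1.0, 2.5, 5.0, 10.0, 15.0, 25.0])
        qe = np.array([35.0, 70.0, 100.0, 125.0, 135.0, 142.0])

        popt, pcov = curve_fit(langmuir, Ce, qe, p0=[150.0, 0.2])
        qmax_fit, K_fit = popt
        print(f"fitted qmax = {qmax_fit:.1f} mg/g, K = {K_fit:.3f} L/mg")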

  16. Physics design and experimental study of tokamak divertor

    International Nuclear Information System (INIS)

    Yan Jiancheng; Gao Qingdi; Yan Longwen; Wang Mingxu; Deng Baiquan; Zhang Fu; Zhang Nianman; Ran Hong; Cheng Fayin; Tang Yiwu; Chen Xiaoping

    2007-06-01

    The divertor configuration of the HL-2A tokamak is optimized, and the plasma performance in the divertor is simulated with the B2 code. The effects of collisionality on the plasma-wall transition in the scrape-off layer of the divertor are investigated, high performances of the divertor plasma in HL-2A are simulated, and a quasi-stationary RS operation mode is established with the plasma controlled by LHCD and NBI. HL-2A has been successfully operated in the divertor configuration. The major parameters, plasma current Ip = 320 kA, toroidal field Bt = 2.2 T and plasma discharge duration Td = 1580 ms, were achieved at the end of 2004. Preliminary experimental research on the advanced divertor has been carried out. Design studies of the divertor target plate for a high-power-density fusion reactor have been carried out, especially of the physical processes on the surface of a flowing liquid lithium target plate. Exploratory research on improving divertor ash removal efficiency and reducing tritium inventory by applying the RF ponderomotive force potential is also reported. Optimization studies of the FEB-E reactor divertor structure design are performed. High-flux thermal shock experiments were carried out on tungsten and carbon-based materials. The hot isostatic pressing (HIP) method was employed to bond tungsten to copper alloys. Electron-beam-simulated thermal fatigue tests were also carried out on W/Cu bonds. Thermal desorption and surface modification of He+ implanted into tungsten have been studied. (authors)

  17. Electrode design optimization of lithium secondary batteries to enhance adhesion and deformation capabilities

    International Nuclear Information System (INIS)

    Jeong, Dongho; Lee, Jongsoo

    2014-01-01

    Safety, performance and lifetime of LSBs (lithium secondary batteries) are affected by the adhesion of the active material to the electrode substance, and by the electrode deformation and the spring-back limit in the electrode manufacturing process. This study explores the optimization process using decision tree analysis, an ANN (artificial neural network), and a multi-objective genetic algorithm. In the electrode design optimization, the objectives are to maximize the adhesion and to minimize the electrode deformation subject to the allowable limit on the spring-back. Experimental data for use in design analysis and optimization are obtained via a measurement test. The decision tree analysis is first performed to extract the major effective parameters sensitive to adhesion force, electrode deformation and spring-back. ANN-based approximate meta-models are then established for function approximation. ANN-based causality analysis is further explored to determine dominant design variables for each of the three design requirements of the optimization. A multi-objective optimization is finally conducted using the ANN-based approximate meta-models. An optimized solution obtained from the numerical optimization process is compared with experimental data to verify the actual performance of the LSB in terms of physical and electro-chemical properties. - Highlights: • Electrode design for enhancing adhesion and electrode deformation performances. • Maximizing adhesion and minimizing deformation with allowable limit on spring-back. • Extraction of effective design parameters from data mining techniques. • Numerical optimization using experimental data of lithium secondary batteries. • Comparison of an optimized solution with an experimental result
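    The ANN meta-modelling step described above can be sketched with scikit-learn. This is a hedged illustration only: the three inputs loosely stand in for process parameters, and the synthetic data, network size and response are invented rather than taken from the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(6)

        # Invented process data: [coating speed, drying temperature, pressing force]
        X = rng.uniform([1.0, 80.0, 5.0], [5.0, 140.0, 25.0], size=(120, 3))
        # Invented adhesion response with an interaction term and noise
        y = (0.4 * X[:, 0] + 0.02 * X[:, 1] + 0.1 * X[:, 2]
             - 0.005 * X[:, 0] * X[:, 2] + rng.normal(0, 0.1, len(X)))

        surrogate = make_pipeline(StandardScaler(),
                                  MLPRegressor(hidden_layer_sizes=(16, 16),
                                               max_iter=5000, random_state=0))
        surrogate.fit(X, y)

        # The cheap surrogate can now be queried inside a multi-objective GA loop.
        print("predicted adhesion:", surrogate.predict([[3.0, 110.0, 15.0]])[0])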

  18. A highly selective sorbent for removal of Cr(VI) from aqueous solutions based on Fe3O4/poly(methyl methacrylate) grafted Tragacanth gum nanocomposite: Optimization by experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Sadeghi, Susan, E-mail: ssadeghi@birjand.ac.ir; Rad, Fatemeh Alavi; Moghaddam, Ali Zeraatkar

    2014-12-01

    In this work, poly(methyl methacrylate) grafted Tragacanth gum modified Fe3O4 magnetic nanoparticles (P(MMA)-g-TG-MNs) were developed for the selective removal of Cr(VI) species from aqueous solutions in the presence of Cr(III). The sorbent was characterized by Fourier transform infrared (FTIR) spectroscopy, transmission electron microscopy (TEM), a vibrating sample magnetometer (VSM), and thermo-gravimetric analysis (TGA). A screening study on operational variables was performed using a two-level full factorial design. Based on the analysis of variance (ANOVA) with 95% confidence limit, the significant variables were found. The central composite design (CCD) has also been employed for statistical modeling and analysis of the effects and interactions of significant variables dealing with the Cr(VI) uptake process by the developed sorbent. The predicted optimal conditions were situated at a pH of 5.5, contact time of 3.4 h, and 3.0 g L−1 dose. The Langmuir, Freundlich, and Temkin isotherm models were used to describe the equilibrium sorption of Cr(VI) by the absorbent, and the Langmuir isotherm showed the best concordance as an equilibrium model. The adsorption process was followed by a pseudo-second-order kinetic model. Thermodynamic investigations showed that the biosorption process was spontaneous and exothermic. - Highlights: • Fe3O4 nanoparticles were modified with poly(methyl methacrylate) grafted Tragacanth gum • P(MMA)-g-TG-MNPs can preferentially adsorb Cr(VI) in the presence of Cr(III) • The effects of operational parameters on Cr(VI) removal were evaluated by RSM • Adsorption mechanism, kinetics, and isotherm have been explored • The sorbent was successfully used to remove Cr(VI) from different water samples.
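    The two-level full factorial screening step mentioned above can be sketched in a few lines of Python. The factor labels echo the study, but the coded design, the response values, and the resulting effect estimates are invented purely for illustration.

        import numpy as np
        from itertools import product

        factors = ["pH", "contact_time", "dose"]       # three of the screened variables
        design = np.array(list(product([-1, 1], repeat=len(factors))))   # 2^3 runs, coded units

        rng = np.random.default_rng(2)
        # Invented responses (% Cr(VI) removal): pH assumed dominant, plus noise.
        y = (60 + 12 * design[:, 0] + 4 * design[:, 1] + 1 * design[:, 2]
             + rng.normal(0, 1.0, len(design)))

        # Main effect of a factor = mean(y at +1) - mean(y at -1)
        for j, name in enumerate(factors):
            effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
            print(f"main effect of {name}: {effect:+.1f}")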

  19. A highly selective sorbent for removal of Cr(VI) from aqueous solutions based on Fe3O4/poly(methyl methacrylate) grafted Tragacanth gum nanocomposite: Optimization by experimental design

    International Nuclear Information System (INIS)

    Sadeghi, Susan; Rad, Fatemeh Alavi; Moghaddam, Ali Zeraatkar

    2014-01-01

    In this work, poly(methyl methacrylate) grafted Tragacanth gum modified Fe3O4 magnetic nanoparticles (P(MMA)-g-TG-MNs) were developed for the selective removal of Cr(VI) species from aqueous solutions in the presence of Cr(III). The sorbent was characterized by Fourier transform infrared (FTIR) spectroscopy, transmission electron microscopy (TEM), a vibrating sample magnetometer (VSM), and thermo-gravimetric analysis (TGA). A screening study on operational variables was performed using a two-level full factorial design. Based on the analysis of variance (ANOVA) with 95% confidence limit, the significant variables were found. The central composite design (CCD) has also been employed for statistical modeling and analysis of the effects and interactions of significant variables dealing with the Cr(VI) uptake process by the developed sorbent. The predicted optimal conditions were situated at a pH of 5.5, contact time of 3.4 h, and 3.0 g L−1 dose. The Langmuir, Freundlich, and Temkin isotherm models were used to describe the equilibrium sorption of Cr(VI) by the absorbent, and the Langmuir isotherm showed the best concordance as an equilibrium model. The adsorption process was followed by a pseudo-second-order kinetic model. Thermodynamic investigations showed that the biosorption process was spontaneous and exothermic. - Highlights: • Fe3O4 nanoparticles were modified with poly(methyl methacrylate) grafted Tragacanth gum • P(MMA)-g-TG-MNPs can preferentially adsorb Cr(VI) in the presence of Cr(III) • The effects of operational parameters on Cr(VI) removal were evaluated by RSM • Adsorption mechanism, kinetics, and isotherm have been explored • The sorbent was successfully used to remove Cr(VI) from different water samples.

  20. A procedure for multi-objective optimization of tire design parameters

    Directory of Open Access Journals (Sweden)

    Nikola Korunović

    2015-04-01

    Full Text Available The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, a multi-objective optimization problem must be formulated and solved. This paper presents a multi-objective optimization procedure for the determination of optimal tire design parameters for simultaneous minimization of strain energy density at two distinct zones inside the tire. It consists of four main stages: pre-analysis, design of experiment, mathematical modeling and multi-objective optimization. An advantage of the proposed procedure is that the multi-objective optimization is based on the Pareto concept, which enables design engineers to obtain a complete set of optimal solutions and choose a suitable tire design. Furthermore, modeling the relationships between tire design parameters and objective functions by multiple regression analysis minimizes computational and modeling effort. The adequacy of the proposed tire design multi-objective optimization procedure has been validated by performing experimental trials based on the finite element method.
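    The Pareto concept invoked above can be made concrete with a small non-dominated filter. This is a generic sketch, not the authors' code: given candidate designs scored on two objectives that are both to be minimized, it keeps the designs not dominated by any other candidate.

        import numpy as np

        def pareto_front(objs):
            """Boolean mask of non-dominated rows; every column is minimized."""
            n = len(objs)
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                dominates_i = (np.all(objs <= objs[i], axis=1)
                               & np.any(objs < objs[i], axis=1))
                if dominates_i.any():
                    keep[i] = False
            return keep

        # Invented candidates: strain energy density at zone A and at zone B.
        rng = np.random.default_rng(3)
        candidates = rng.uniform(0.1, 1.0, size=(50, 2))
        mask = pareto_front(candidates)
        print(f"{mask.sum()} Pareto-optimal designs out of {len(candidates)}")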

  1. Systematic design of acoustic devices by topology optimization

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Sigmund, Ole

    2005-01-01

    We present a method to design acoustic devices with topology optimization. The general algorithm is exemplified by the design of a reflection chamber that minimizes the transmission of acoustic waves in a specified frequency range.

  2. On the design of experimental separation processes for maximum accuracy in the estimation of their parameters

    International Nuclear Information System (INIS)

    Volkman, Y.

    1980-07-01

    The optimal design of experimental separation processes for maximum accuracy in the estimation of process parameters is discussed. The sensitivity factor correlates the inaccuracy of the analytical methods with the inaccuracy of the estimation of the enrichment ratio. It is minimized according to the design parameters of the experiment and the characteristics of the analytical method

  3. On the design of compliant mechanisms using topology optimization

    DEFF Research Database (Denmark)

    Sigmund, Ole

    1997-01-01

    This paper presents a method for optimal design of compliant mechanism topologies. The method is based on continuum-type topology optimization techniques and finds the optimal compliant mechanism topology within a given design domain and a given position and direction of input and output forces. By constraining the allowed displacement at the input port, it is possible to control the maximum stress level in the compliant mechanism. The ability of the design method to find a mechanism with complex output behavior is demonstrated by several examples. Some of the optimal mechanism topologies have been manufactured, both in macroscale (hand-size) made in Nylon, and in microscale (

  4. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Science.gov (United States)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
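    As background for the failure probabilities that GSS estimates, the following sketch shows a plain Monte Carlo estimator for a single limit state. It is a deliberately simple stand-in (subset simulation itself is more involved), and the limit-state function and distributions are invented.

        import numpy as np

        rng = np.random.default_rng(4)

        def limit_state(x1, x2):
            """g <= 0 means failure; an invented linear limit state."""
            return 3.0 - (x1 + 0.5 * x2)

        n = 200_000
        x1 = rng.normal(1.0, 0.4, n)   # assumed first load effect
        x2 = rng.normal(1.5, 0.6, n)   # assumed second load effect

        pf = np.mean(limit_state(x1, x2) <= 0.0)
        cov = np.sqrt((1 - pf) / (pf * n))   # coefficient of variation of the estimator
        print(f"Pf = {pf:.4f}  (c.o.v. = {cov:.2f})")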

  5. Topology and boundary shape optimization as an integrated design tool

    Science.gov (United States)

    Bendsoe, Martin Philip; Rodrigues, Helder Carrico

    1990-01-01

    The optimal topology of a two-dimensional linear elastic body can be computed by regarding the body as a domain of the plane with a high density of material. Such an optimal topology can then be used as the basis for a shape optimization method that computes the optimal form of the boundary curves of the body. This results in an efficient and reliable design tool, which can be implemented via a common FEM mesh generator and CAD-type input-output facilities.

  6. Experimental and numerical comparison of absorption optimization in small rooms

    DEFF Research Database (Denmark)

    Wincentz, Jakob Nygård; Garcia, Julian Martinez-Villalba; Jeong, Cheol-Ho

    2016-01-01

    A vast majority of modern music is recorded and produced in small control-room environments with volumes of around 50 m3. Several problems occur when controlling the room acoustics of such small spaces. First, the room modes produce strong peaks and dips, particularly at lower frequencies..., and even in the sweet-spot position the listening experience can easily be degraded. Second, when designing or refurbishing small rooms it is hard to adequately predict the reverberation time using Sabine's formula, due to highly non-diffuse conditions and the use of a statistical approach below..., boundary conditions, and phase information providing accuracy at low frequencies. Good agreement is found between measurements and simulations, confirming that FEM can be used as a design tool for optimizing absorption and acoustic parameters in small rooms.

  7. Optimal Design Solutions for Permanent Magnet Synchronous Machines

    Directory of Open Access Journals (Sweden)

    POPESCU, M.

    2011-11-01

    Full Text Available This paper presents optimal design solutions for reducing the cogging torque of permanent magnet synchronous machines. A first solution proposed in the paper consists in using closed stator slots, which produce a nearly isotropic magnetic structure of the stator core and reduce the mutual attraction between the permanent magnets and the slotted armature. To avoid complications in the winding manufacturing technology, the stator slots are closed using wedges made of soft magnetic composite materials. The second solution consists in properly choosing the combination of pole number and stator slot number, which typically leads to a winding with a fractional number of slots per pole per phase. The proposed measures for cogging torque reduction are analyzed by means of 2D/3D finite element models developed using the professional Flux software package. Numerical results are discussed and compared with experimental ones obtained by testing a PMSM prototype.

  8. An optimized Faraday cage design for electron beam current measurements

    International Nuclear Information System (INIS)

    Turner, J.N.; Hausner, G.G.; Parsons, D.F.

    1975-01-01

    A Faraday cage detector is described for measuring electron beam intensity at energies up to 1.2 MeV, with the present data taken at 100 keV. The design features a readily changeable limiting aperture and detector cup geometry, and a secondary electron suppression grid. The detection efficiency of the cage is shown to be limited only by primary backscatter through the detector solid angle of escape, which is optimized with respect to primary backscattered electrons and secondary electron escape. The geometry and stopping material of the detection cup are varied, and the results show that for maximum detection efficiency with carbon as the stopping material, the solid angle of escape must be equal to or less than 0.05π sr. The experimental results are consistent within the ±2% accuracy of the detection electronics, and are not limited by the Faraday cage detection efficiency. (author)

  9. Optimal design of robust piezoelectric microgrippers undergoing large displacements

    DEFF Research Database (Denmark)

    Ruiz, D.; Sigmund, Ole

    2018-01-01

    Topology optimization combined with optimal design of electrodes is used to design piezoelectric microgrippers. Fabrication at micro-scale presents an important challenge: due to non-symmetrical lamination of the structures, out-of-plane bending spoils the behaviour of the grippers. Suppression...

  10. Design Optimization of Piles for Offshore Wind Turbine Jacket Foundations

    DEFF Research Database (Denmark)

    Sandal, Kasper; Zania, Varvara

    Numerical methods can optimize the pile design. The aim of this study is to automatically design optimal piles for offshore wind turbine jacket foundations (Figure 1). Pile mass is minimized with constraints on axial and lateral capacity. Results indicate that accurate knowledge about soil...

  11. Design and Optimization of Filament Wound Composite Pressure Vessels

    NARCIS (Netherlands)

    Zu, L.

    2012-01-01

    One of the most important issues in the design of filament-wound pressure vessels concerns the determination of the most efficient meridian profiles and related fiber architectures, leading to optimal structural performance. To better understand the design and optimization of filament-wound

  12. Developing an Integrated Design Strategy for Chip Layout Optimization

    NARCIS (Netherlands)

    Wits, Wessel Willems; Jauregui Becker, Juan Manuel; van Vliet, Frank Edward; te Riele, G.J.

    2011-01-01

    This paper presents an integrated design strategy for chip layout optimization. The strategy couples both electric and thermal aspects during the conceptual design phase to improve chip performance, thermal management being one of the major topics. The layout of the chip circuitry is optimized

  13. GPU-accelerated CFD Simulations for Turbomachinery Design Optimization

    NARCIS (Netherlands)

    Aissa, M.H.

    2017-01-01

    Design optimization relies heavily on time-consuming simulations, especially when using gradient-free optimization methods. These methods require a large number of simulations in order to get a remarkable improvement over reference designs, which are nowadays based on the accumulated engineering

  14. Methodology for designing aircraft having optimal sound signatures

    NARCIS (Netherlands)

    Sahai, A.K.; Simons, D.G.

    2017-01-01

    This paper presents a methodology with which aircraft designs can be modified such that they produce optimal sound signatures on the ground. Optimal sound here means sound that is perceived as less annoying by residents living in the vicinity of airports. A novel design and

  15. Optimal Design of Modern Transformerless PV Inverter Topologies

    OpenAIRE

    Saridakis, Stefanos; Koutroulis, Eftichios; Blaabjerg, Frede

    2013-01-01

    The design optimization of H5, H6, neutral point clamped, active-neutral point clamped, and Conergy-NPC transformerless photovoltaic (PV) inverters is presented in this paper. The reliability of the components, in terms of the corresponding malfunctions affecting the PV inverter maintenance cost during the operational lifetime of the PV installation, is also considered in the optimization process. According to the results of the proposed design method, different optimal values of the PV inver...

  16. Optimized hardware design for the divertor remote handling control system

    Energy Technology Data Exchange (ETDEWEB)

    Saarinen, Hannu [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland)], E-mail: hannu.saarinen@tut.fi; Tiitinen, Juha; Aha, Liisa; Muhammad, Ali; Mattila, Jouni; Siuko, Mikko; Vilenius, Matti [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland); Jaervenpaeae, Jorma [VTT Systems Engineering, Tekniikankatu 1, 33720 Tampere (Finland); Irving, Mike; Damiani, Carlo; Semeraro, Luigi [Fusion for Energy, Josep Pla 2, Torres Diagonal Litoral B3, 08019 Barcelona (Spain)

    2009-06-15

    A key ITER maintenance activity is the exchange of the divertor cassettes. One of the major focuses of the EU Remote Handling (RH) programme has been the study and development of the remote handling equipment necessary for divertor exchange. The current major step in this programme involves the construction of a full-scale physical test facility, namely DTP2 (Divertor Test Platform 2), in which to demonstrate and refine the RH equipment designs for ITER using prototypes. The major objective of the DTP2 project is proof-of-concept studies of various RH devices, but it is also important to define principles for standardizing control hardware and methods around the ITER maintenance equipment. This paper focuses on describing the control system hardware design optimization that is taking place at DTP2. There will be two RH movers, namely the Cassette Multifunctional Mover (CMM) and the Cassette Toroidal Mover (CTM), and assisting water hydraulic force feedback manipulators (WHMAN) located aboard each mover. The idea is to use common Real Time Operating Systems (RTOS), measurement and control IO cards, etc. for all maintenance devices and to standardize sensors and control components as much as possible. In this paper, the new optimized DTP2 control system hardware design and some initial experimentation with the new DTP2 RH control system platform are presented. The proposed new approach is able to fulfil the functional requirements for both mover and manipulator control systems. Since the new control system hardware design has a reduced architecture, there are a number of benefits compared to the old approach. The simplified hardware solution enables the use of a single software development environment and a single communication protocol. This will result in easier maintainability of the software and hardware, less dependence on trained personnel, easier training of operators and hence reduced development costs of ITER RH.

  17. Designing an optimally proportional inorganic scintillator

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Jai, E-mail: jai.singh@cdu.edu.au [School of Engineering and IT, B-Purple-12, Faculty of EHSE, Charles Darwin University, NT 0909 (Australia); Koblov, Alexander [School of Engineering and IT, B-Purple-12, Faculty of EHSE, Charles Darwin University, NT 0909 (Australia)

    2012-09-01

    The nonproportionality observed in the light yield of inorganic scintillators is studied theoretically as a function of the rates of bimolecular and Auger quenching processes occurring within the electron track initiated by a gamma- or X-ray photon incident on a scintillator. Assuming a cylindrical track, the influence of the track radius and concentration of excitations created within the track on the scintillator light yield is also studied. Analysing the calculated light yield a guideline for inventing an optimally proportional scintillator with optimal energy resolution is presented.

  18. Designing an optimally proportional inorganic scintillator

    International Nuclear Information System (INIS)

    Singh, Jai; Koblov, Alexander

    2012-01-01

    The nonproportionality observed in the light yield of inorganic scintillators is studied theoretically as a function of the rates of bimolecular and Auger quenching processes occurring within the electron track initiated by a gamma- or X-ray photon incident on a scintillator. Assuming a cylindrical track, the influence of the track radius and concentration of excitations created within the track on the scintillator light yield is also studied. Analysing the calculated light yield a guideline for inventing an optimally proportional scintillator with optimal energy resolution is presented.

  19. Structural optimization via a design space hierarchy

    Science.gov (United States)

    Vanderplaats, G. N.

    1976-01-01

    Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which design is treated as a hierarchy of subproblems, one being locally constrained and the other being locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction which will decrease the value of the objective function while maintaining a feasible design. The case of one-dimensional search in a two-variable design space is discussed. Possible applications are discussed. A major feature of the proposed algorithm is its application to problems which are inherently ill-conditioned, such as design of structures for optimum geometry.

  20. Experimental and theoretical investigation of Stirling engine heater: Parametrical optimization

    International Nuclear Information System (INIS)

    Gheith, R.; Hachem, H.; Aloui, F.; Ben Nasrallah, S.

    2015-01-01

    Highlights: • A Stirling engine was investigated to optimize its operation and performance. • The porous medium accounts for the highest amount of heat exchanged in a Stirling engine. • The heater characteristics are key to enhancing the thermal exchange in a Stirling engine. • All operation parameters influence the heater performance. • The thermal and exergy efficiencies of the heater are sensitive to temperature and pressure. - Abstract: The aim of this work is to optimize the performance of a γ-type Stirling engine, with special care given to the heater. The latter consists of 20 tubes in order to increase the exchange area between the working gas and the hot source. Different parameters were chosen to evaluate the heater numerically and experimentally. The four selected independent parameters are: heating temperature (300–500 °C), initial filling pressure (3–8 bar), cooling water flow rate (0.2–3 l/min) and frequency (2–7 Hz). The amount of energy exchanged in the heater is significantly influenced by the frequency and heating temperature, but only slightly enhanced by an increase in the cooling water flow rate. The thermal and exergy efficiencies of the heater are very sensitive to temperature and pressure variations.

  1. Poly-optimization: a paradigm in engineering design in mechatronics

    Energy Technology Data Exchange (ETDEWEB)

    Tarnowski, Wojciech [Koszalin University of Technology, Department of Control and Driving Systems, Institute of Mechatronics, Nanotechnology and Vacuum Technique, Koszalin (Poland); Krzyzynski, Tomasz; Maciejewski, Igor; Oleskiewicz, Robert [Koszalin University of Technology, Department of Mechatronics and Applied Mechanics, Institute of Mechatronics, Nanotechnology and Vacuum Technique, Koszalin (Poland)

    2011-02-15

    The paper deals with Engineering Design, that is, a general methodology of the design process. It is assumed that a designer has to solve a design task as an inverse problem in an iterative way. After each iteration, a decision should be taken on the information that is called a centre of integration in a systematic design system. For this purpose, poly-optimal solutions may be used. Poly-optimization is presented and contrasted against Multi Attribute Decision Making, and a set of poly-optimal solutions is defined. Then mechatronics is defined and its characteristics are given, to show that the mechatronic design process vitally needs CAD tools. Three examples are quoted to demonstrate the key role of poly-optimization in mechatronic design. (orig.)

  2. Design of experimental equipment at CRNL

    International Nuclear Information System (INIS)

    Godden, B.

    1976-01-01

    The Plant Design Division provides a design service to the research and development effort at CRNL. Severe constraints, both environmental and spatial, are placed on the design of special equipment. Several examples are given. Finally, the use of automated drafting systems is described. (author)

  3. Optimizing Your K-5 Engineering Design Challenge

    Science.gov (United States)

    Coppola, Matthew Perkins; Merz, Alice H.

    2017-01-01

    Today, elementary school teachers continue to revisit old lessons and seek out new ones, especially in engineering. Optimization is the process by which an existing product or procedure is revised and refined. Drawn from the authors' experiences working directly with students in grades K-5 and their teachers and preservice teachers, the…

  4. Optimization Criteria and Sailplane Airfoil Design

    Czech Academy of Sciences Publication Activity Database

    Popelka, Lukáš; Matějka, Milan

    2007-01-01

    Roč. 30, č. 3 (2007), s. 74-78 ISSN 0744-8996 R&D Projects: GA AV ČR IAA2076403; GA AV ČR(CZ) IAA200760614 Institutional research plan: CEZ:AV0Z20760514 Keywords : aerodynamic optimization * airfoil Subject RIV: BK - Fluid Dynamics

  5. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    Science.gov (United States)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and to have good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
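    The comprehensive objective described above, a weighted combination of total hydraulic loss and cavitation coefficient, can be written as a small hedged sketch. The weights and the evaluation functions below are placeholders; the paper's actual Q3D inverse computation and performance prediction are far more involved.

        def comprehensive_objective(design, w_loss=0.7, w_cav=0.3,
                                    eval_loss=None, eval_cavitation=None):
            """Weighted-sum objective F = w_loss*loss + w_cav*sigma (both minimized).

            eval_loss / eval_cavitation stand in for the Q3D inverse computation
            and the performance prediction used in the paper.
            """
            loss = eval_loss(design)
            sigma = eval_cavitation(design)
            return w_loss * loss + w_cav * sigma

        # Toy usage with invented surrogate evaluators of a 2-parameter blade design.
        f = comprehensive_objective(
            design=(0.4, 1.2),
            eval_loss=lambda d: (d[0] - 0.5) ** 2 + 0.02,
            eval_cavitation=lambda d: 0.1 + 0.05 * abs(d[1] - 1.0),
        )
        print(f"weighted objective = {f:.4f}")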

  6. Multi-objective optimization design method of radiation shielding

    International Nuclear Information System (INIS)

    Yang Shouhai; Wang Weijin; Lu Daogang; Chen Yixue

    2012-01-01

    Because shielding design goals are diverse and many factors in the process are uncertain, it is necessary to develop an intelligent shielding optimization design method by which the selection of the shielding scheme is achieved automatically and the uncertainties of human impact are reduced. To achieve an economically feasible, automated radiation shielding design, a multi-objective genetic algorithm optimization screening code, which combines the genetic algorithm and the discrete-ordinates method, was developed to minimize cost, size, weight, and so on. This work has practical significance for achieving optimal shielding designs. (authors)

  7. A Systematic Optimization Design Method for Complex Mechatronic Products Design and Development

    Directory of Open Access Journals (Sweden)

    Jie Jiang

    2018-01-01

    Full Text Available Designing a complex mechatronic product involves multiple design variables, objectives, constraints, and evaluation criteria, as well as their nonlinearly coupled relationships. The design space can be very big, consisting of many functional design parameters, structural design parameters, and behavioral design (or running performance) parameters. Given a big design space and inexplicit relations among the parameters, how to design a product optimally in an optimization design process is a challenging research problem. In this paper, we propose a systematic optimization design method based on design space reduction and surrogate modelling techniques. This method firstly identifies key design parameters from a very big design space to reduce the design space; secondly, it uses the identified key design parameters to establish a system surrogate model based on data-driven modelling principles for optimization design; and thirdly, it utilizes multiobjective optimization techniques to achieve an optimal design of the product in the reduced design space. The method has been tested with a high-speed train design. In comparison with other approaches, the research results show that this method is practical and useful for optimally designing complex mechatronic products.

  8. Site-specific design optimization of wind turbines

    DEFF Research Database (Denmark)

    Fuglsang, P.; Bak, C.; Schepers, J.G.

    2002-01-01

    This article reports results from a European project, where site characteristics were incorporated into the design process of wind turbines, to enable site-specific design. Two wind turbines of different concept were investigated at six different sites comprising normal flat terrain, offshore and complex terrain wind farms. Design tools based on numerical optimization and aeroelastic calculations were combined with a cost model to allow optimization for minimum cost of energy. Different scenarios were optimized ranging from modifications of selected individual components to the complete design of a new wind turbine. Both annual energy yield and design-determining loads depended on site characteristics, and this represented a potential for site-specific design. The maximum variation in annual energy yield was 37% and the maximum variation in blade root fatigue loads was 62%. Optimized site...

  9. QFood - Optimal design of food products

    DEFF Research Database (Denmark)

    Bech, Anne C.; Engelund, Erling; Juhl, Hans Jørn

    1994-01-01

    of Quality is described with special reference to the development of food products. 5. An MDS-based model for use in the evaluation of an optimal product is developed. The model is based on the profit function from classical micro-economic theory. The imputed price is defined as a function of a Customer Satisfaction Index which is inversely proportional to how "close" the product is to the consumer's ideal.

  10. Diffractive variable beam splitter: optimal design.

    Science.gov (United States)

    Borghi, R; Cincotti, G; Santarsiero, M

    2000-01-01

    The analytical expression of the phase profile of the optimum diffractive beam splitter with an arbitrary power ratio between the two output beams is derived. The phase function is obtained by an analytical optimization procedure such that the diffraction efficiency of the resulting optical element is the highest for an actual device. Comparisons are presented with the efficiency of a diffractive beam splitter specified by a sawtooth phase function and with the pertinent theoretical upper bound for this type of element.

  11. Optimized design for heavy mound venturi

    Directory of Open Access Journals (Sweden)

    Xing Futang

    2017-01-01

    Full Text Available The venturi scrubber is one of the most efficient gas cleaning devices for removal of contaminating particles in industrial flue-gas purification processes. The velocity of the gas entering the scrubber is one of the key factors influencing its dust-removal efficiency. In this study, the shapes of the heavy mound and tube wall are optimized, allowing the girth area to become linearly adjustable. The resulting uniformity of velocity distribution is verified numerically.

  12. Mass and overall optimization of radiator design

    Directory of Open Access Journals (Sweden)

    Shilo G. N.

    2011-04-01

    Full Text Available The models of a finned radiator are formed by computer-aided engineering systems. The relations between the sizes of the construction elements and the boundaries of the operability domain are obtained for radiators of minimal mass, minimal volume and minimal overall parameters. An iterative algorithm is used. The non-linear characteristics of the weight functions and the allowable input heat resistances of the radiator are applied in the algorithm. The mass and overall parameters of standard and optimal radiators are defined by different strategies.

  13. Optimization of lining design in deep clays

    International Nuclear Information System (INIS)

    Rousset, G.; Bublitz, D.

    1989-01-01

    The main features of the mechanical behaviour of deep clay are time-dependent effects and the existence of a long-term cohesion which may be taken into account when dimensioning galleries. In this text, a lining optimization test is presented. It concerns a gallery driven in deep clay, 230 m deep, at Mol (Belgium). We show that a sliding-rib lining gives both: an optimal tunnel face advance speed, with minimal closure of the gallery wall before setting the lining and therefore less likelihood of failure developing inside the rock mass; and a limitation of the length of the non-lined part of the gallery. The chosen process allows, on one hand, the preservation of the rock mass integrity and, on the other, use of the confinement effect to allow closure under high average stress conditions; this process can be considered an optimal application of the convergence-confinement method. An important set of measurement devices is then presented, along with results obtained over one year of operation. We show in particular that the stress distribution in the lining is homogeneous and that the sliding limit can be measured with high precision.

  14. Revised design for the Tokamak experimental power reactor

    International Nuclear Information System (INIS)

    Stacey, W.M. Jr.; Abdou, M.A.; Brooks, J.N.

    1977-03-01

    A new, preliminary design has been identified for the tokamak experimental power reactor (EPR). The revised EPR design is simpler, more compact, less expensive and has somewhat better performance characteristics than the previous design, yet retains many of the previously developed design concepts. This report summarizes the principal features of the new EPR design, including performance and cost.

  15. Topology optimization problems with design-dependent sets of constraints

    DEFF Research Database (Denmark)

    Schou, Marie-Louise Højlund

    Topology optimization is a design tool which is used in numerous fields. It can be used whenever the design is driven by weight and strength considerations. The basic concept of topology optimization is the interpretation of partial differential equation coefficients as effective material properties and designing through changing these coefficients. For example, consider a continuous structure. Then the basic concept is to represent this structure by small pieces of material that coincide with the elements of a finite element model of the structure. This thesis treats stress-constrained structural topology optimization problems. For such problems a stress constraint for an element should only be present in the optimization problem when the structural design variable corresponding to this element has a value greater than zero. We model the stress constrained topology optimization problem

  16. Lightweight design of a vertical articulated robot using topology optimization

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Seong Ki; Hong, Jung Ki; Jang, Gang Won [Sejong Univ., Seoul (Korea, Republic of); Kim, Tae Hyun; Park, Jin Kyun; Kim, Sang Hyun [Hyundai Heavy Industries Co., Ltd., Daejeon (Korea, Republic of)

    2012-12-15

    Topology optimization is applied for the lightweight design of three main parts of a vertical articulated robot: a base frame, a lower frame and an upper frame. The design domains for optimization are set as large solid regions that completely embrace the original parts, which are discretized by using three-dimensional solid elements. The design variables are parameterized one-to-one to the material properties of each element by using the SIMP method. The objective of the optimization is set as a multi-objective form combining the natural frequencies and mean compliances of a structure, for which the load steps of interest are selected from the multibody dynamics analysis of the robot. The obtained topology optimization results are post-processed into designs favorable to manufacturability for the casting process. The final optimized results are 11.0% (base frame), 12.0% (lower frame) and 10.0% (upper frame) lighter, with similar or even higher static and dynamic stiffness than the original models.
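    The SIMP parameterization used above maps each element's design variable to an effective Young's modulus. A minimal hedged sketch of that interpolation is given below; the penalization exponent and the moduli are typical textbook values, not those of the paper.

        import numpy as np

        def simp_modulus(x, E0=210e9, Emin=1e-9 * 210e9, p=3.0):
            """SIMP interpolation: E(x) = Emin + x**p * (E0 - Emin), with x in [0, 1].

            A penalization exponent p > 1 pushes intermediate densities toward
            0 or 1, which drives the optimized layout toward solid/void designs.
            """
            x = np.clip(x, 0.0, 1.0)
            return Emin + x**p * (E0 - Emin)

        densities = np.array([0.0, 0.3, 0.5, 0.8, 1.0])
        print(simp_modulus(densities) / 210e9)   # stiffness fraction per element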

  17. Optimal design and dynamic impact tests of removable bollards

    Science.gov (United States)

    Chen, Suwen; Liu, Tianyi; Li, Guoqiang; Liu, Qing; Sun, Jianyun

    2017-10-01

    Anti-ram bollard systems, which are installed around buildings and infrastructure, can prevent unauthorized vehicles from entering, maintain a distance from vehicle-borne improvised explosive devices (VBIED) and reduce the corresponding damage. Compared with a fixed bollard system, a removable bollard system provides more flexibility as it can be removed when needed. This paper first proposes a new type of K4-rated removable anti-ram bollard system. To simulate the collision of a vehicle hitting the bollard system, a finite element model was then built and verified through comparison of numerical simulation results and existing experimental results. Based on the orthogonal design method, the factors influencing the safety and economy of the proposed system were examined and sorted according to their importance. An optimal design scheme was then produced. Finally, to validate the effectiveness of the proposed design scheme, four dynamic impact tests, including two front impact tests and two side impact tests, were conducted according to the BSI specifications. The residual rotation angles of the specimens are smaller than 30° and satisfy the requirements of the BSI specification.

  18. Software for CATV Design and Frequency Plan Optimization

    Directory of Open Access Journals (Sweden)

    O. Hala

    2007-09-01

    Full Text Available The paper deals with the structure of a software tool used for the design and sub-optimization of the frequency plan in CATV networks, with a description of the tool and of the design method. The software performance is described, and a simple design example of the energy balance of a simplified CATV network is given. The software was created in the Delphi programming environment, and the local optimization was implemented in Matlab.

  19. Helium gas turbine conceptual design by genetic/gradient optimization

    International Nuclear Information System (INIS)

    Yang, Long; Yu, Suyuan

    2003-01-01

    The helium gas turbine is the key component of the power conversion system for direct-cycle High Temperature Gas-cooled Reactors (HTGR), for which an optimal design is essential for high efficiency. Gas turbine design is currently a multidisciplinary process in which the relationships between constraints, objective functions and variables are very noisy. Due to the ever-increasing complexity of the process, it has become very hard for the engineering designer to foresee the consequences of changing certain parts. With classic design procedures, which depend on adaptation to a baseline design, this problem is usually averted by choosing a large number of design variables in advance based on the engineer's judgment or experience, and then reaching a solution through iterative computation and modification. This, in fact, leads to a reduction of the degrees of freedom of the design problem, and therefore to a suboptimal design. Furthermore, helium is very different in thermal properties from normal gases; it is uncertain whether the operating experience of normal gas turbines can be used in the conceptual design of a helium gas turbine. It is therefore difficult to produce an optimal design with the general method of adaptation to a baseline. Since their appearance in the 1970s, genetic algorithms (GAs) have been broadly used in many research fields due to their robustness. GAs have also been used recently in the design and optimization of turbo-machines. Researchers at the General Electric Company (GE) developed optimization software called Engineous and used GAs in the basic design and optimization of turbines. The ITOP study group from Xi'an Transportation University also did some work on the optimization of transonic turbine blades. However, since GAs do not have a rigorous theoretical basis, many problems have arisen in practice, such as premature convergence and uncertainty; the GA doesn't know how to locate the optimal design, and doesn't even know if the optimal solution
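    To make the GA machinery referenced above concrete, here is a minimal real-coded genetic algorithm on a toy two-variable objective. This is a generic sketch, not Engineous or the ITOP code; the bounds, operators and objective are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        bounds = np.array([[0.0, 1.0], [0.0, 1.0]])   # two normalized design variables

        def fitness(x):
            # Invented smooth objective to be minimized (a stand-in for a cycle loss).
            return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

        def ga(pop_size=40, generations=60, mut_sigma=0.1):
            pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 2))
            for _ in range(generations):
                scores = np.array([fitness(ind) for ind in pop])
                parents = pop[np.argsort(scores)[: pop_size // 2]]   # truncation selection
                # Arithmetic crossover between random parent pairs
                idx = rng.integers(0, len(parents), size=(pop_size, 2))
                alpha = rng.uniform(size=(pop_size, 1))
                children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
                # Gaussian mutation, clipped back into the bounds
                children += rng.normal(0.0, mut_sigma, size=children.shape)
                pop = np.clip(children, bounds[:, 0], bounds[:, 1])
            best = min(pop, key=fitness)
            return best, fitness(best)

        best, val = ga()
        print("best design:", best, "objective:", val)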

  20. Multidisciplinary Design Optimization of a Swash-Plate Axial Piston Pump

    Directory of Open Access Journals (Sweden)

    Guangjun Liu

    2016-12-01

    Full Text Available This work proposes an MDO (multidisciplinary design optimization) procedure for a swash-plate axial piston pump based on co-simulation and integrated optimization. An integrated hydraulic-mechanical model of the pump is built to reflect its actual performance, and a hydraulic-mechanical co-simulation is conducted through data exchange between the different domains. The flow ripple of the pump is optimized using the MDO procedure. A CFD (Computational Fluid Dynamics) simulation of the pump's flow field is performed, which shows that the hydrodynamic shock of the pump is improved after optimization. To verify the MDO effect, an experimental system is established to test the optimized piston pump. Experimental results show that the simulated and experimental curves are similar. The flow ripple is improved by the MDO procedure. The peak of the pressure curve is lower than before optimization, and the pressure pulsation is reduced by 0.21 MPa, which shows that the pressure pulsation is improved with the decrease of the flow ripple. Comparison of the experimental and simulation results shows that the MDO method is effective and feasible in the optimization design of the pump.