WorldWideScience

Sample records for developing optimal input

  1. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.
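
    As a hedged illustration of the D-optimality idea behind such input designs (this is not Morelli and Klein's dynamic programming algorithm), the Python sketch below enumerates amplitude-constrained square-wave inputs for a toy first-order model and picks the sequence that maximizes the determinant of the Fisher information matrix; the model, noise level and constraint set are all invented.

```python
import itertools
import numpy as np

# Toy D-optimal input design for a first-order model
#   x[k+1] = a*x[k] + b*u[k],  y[k] = x[k] + noise,
# with parameters theta = (a, b).  The input is a square wave built from
# 4 constant segments, each amplitude constrained to {-1, 0, +1}.
a_true, b_true, sigma = 0.8, 0.5, 0.1
n_seg, seg_len = 4, 5

def fisher_logdet(amplitudes):
    """log det of the Fisher information matrix for one candidate input."""
    u = np.repeat(amplitudes, seg_len)
    x, s = 0.0, np.zeros(2)          # state and sensitivities dx/d(a, b)
    info = np.zeros((2, 2))
    for uk in u:
        s = np.array([a_true * s[0] + x, a_true * s[1] + uk])
        x = a_true * x + b_true * uk
        info += np.outer(s, s) / sigma**2
    sign, logdet = np.linalg.slogdet(info)
    return logdet if sign > 0 else -np.inf

candidates = list(itertools.product([-1.0, 0.0, 1.0], repeat=n_seg))
best = max(candidates, key=fisher_logdet)
print("best amplitude sequence:", best, "logdet FIM:", fisher_logdet(best))
```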

  2. Simplex-based optimization of numerical and categorical inputs in early bioprocess development: Case studies in HT chromatography.

    Science.gov (United States)

    Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy

    2017-08-01

    Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid compatible Simplex variant which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and it is deployed in three case studies wherein spaces are comprised of both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, due to the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex-variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. On the Nature of the Input in Optimality Theory

    DEFF Research Database (Denmark)

    Heck, Fabian; Müller, Gereon; Vogel, Ralf

    2002-01-01

    The input has two main functions in optimality theory (Prince and Smolensky 1993). First, the input defines the candidate set, in other words it determines which output candidates compete for optimality, and which do not. Second, the input is referred to by faithfulness constraints that prohibit...... output candidates from deviating from specifications in the input. Whereas there is general agreement concerning the relevance of the input in phonology, the nature of the input in syntax is notoriously unclear. In this article, we show that the input should not be taken to define syntactic candidate...... and syntax is due to a basic, irreducible difference between these two components of grammar: Syntax is an information preserving system, phonology is not....

  4. Input and language development in bilingually developing children.

    Science.gov (United States)

    Hoff, Erika; Core, Cynthia

    2013-11-01

    Language skills in young bilingual children are highly varied as a result of the variability in their language experiences, making it difficult for speech-language pathologists to differentiate language disorder from language difference in bilingual children. Understanding the sources of variability in bilingual contexts and the resulting variability in children's skills will help improve language assessment practices by speech-language pathologists. In this article, we review literature on bilingual first language development for children under 5 years of age. We describe the rate of development in single and total language growth, we describe effects of quantity of input and quality of input on growth, and we describe effects of family composition on language input and language growth in bilingual children. We provide recommendations for language assessment of young bilingual children and consider implications for optimizing children's dual language development.

  5. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL]; Djouadi, Seddik M. [ORNL]; Olama, Mohammed M. [ORNL]

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
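
    A minimal numerical sketch (not taken from the paper) of why an impulse at the start of the observation interval pairs naturally with an FIR model: with an impulse input the regression matrix reduces to shifted unit columns, so the least-squares FIR estimate is essentially the noisy impulse response read off directly. The tap values and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.5, 1.0, -0.3, 0.1])        # unknown FIR taps
N, sigma = 32, 0.05

u = np.zeros(N); u[0] = 1.0                     # impulse at the start
y = np.convolve(u, h_true)[:N] + sigma * rng.standard_normal(N)

# Least-squares FIR fit: y = Phi @ h with Phi[k, i] = u[k - i]
Phi = np.column_stack([np.concatenate([np.zeros(i), u[:N - i]])
                       for i in range(len(h_true))])
h_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("estimated taps:", np.round(h_hat, 3))
# For an impulse input, Phi is the first columns of the identity,
# so h_hat is simply y[:4], i.e. the noisy impulse response itself.
```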

  6. Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.

    Science.gov (United States)

    Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia

    2018-06-01

    This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that local consensus errors of the two systems and weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.

  7. Optimization of Quantum-state-preserving Frequency Conversion by Changing the Input Signal

    DEFF Research Database (Denmark)

    Andersen, Lasse Mejling; Reddy, D. V.; McKinstrie, C. J.

    We optimize frequency conversion based on four-wave mixing by using the input modes of the system. We find a 10-25 % higher conversion efficiency relative to a pump-shaped input signal.

  8. Optimal control of LQR for discrete time-varying systems with input delays

    Science.gov (United States)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input that minimises the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that both approaches are feasible and very effective.
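
    The paper's duality-based derivation is not reproduced here; as a point of reference, the sketch below shows a standard finite-horizon discrete-time LQR problem solved by a backward Riccati recursion, which is the kind of delay-free problem such a transformation leads to. The system matrices are illustrative, and a d-step input delay could be accommodated by augmenting the state with the last d inputs.

```python
import numpy as np

# Finite-horizon discrete-time LQR by backward Riccati recursion.
# Illustrative system; a d-step input delay can be folded into A, B by
# augmenting the state with the d most recent inputs.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[0.1]]); Qf = np.eye(2)
N = 50

P = Qf
gains = [None] * N
for k in reversed(range(N)):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u_k = -K x_k
    P = Q + A.T @ P @ (A - B @ K)
    gains[k] = K

x = np.array([[1.0], [0.0]])
cost = 0.0
for k in range(N):
    u = -gains[k] @ x
    cost += float(x.T @ Q @ x + u.T @ R @ u)
    x = A @ x + B @ u
cost += float(x.T @ Qf @ x)
print("first-step gain K_0:", gains[0], "closed-loop cost:", round(cost, 4))
```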

  9. Optimizing microwave photodetection: input-output theory

    Science.gov (United States)

    Schöndorf, M.; Govia, L. C. G.; Vavilov, M. G.; McDermott, R.; Wilhelm, F. K.

    2018-04-01

    High fidelity microwave photon counting is an important tool for various areas from background radiation analysis in astronomy to the implementation of circuit quantum electrodynamic architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. With this we can derive a general matching condition depending on the different system rates, under which the measurement process is optimal.

  10. Developing optimal input design strategies in cancer systems biology with applications to microfluidic device engineering.

    Science.gov (United States)

    Menolascina, Filippo; Bellomo, Domenico; Maiwald, Thomas; Bevilacqua, Vitoantonio; Ciminelli, Caterina; Paradiso, Angelo; Tommasi, Stefania

    2009-10-15

    Mechanistic models are becoming more and more popular in Systems Biology; identification and control of models underlying biochemical pathways of interest in oncology is a primary goal in this field. Unfortunately the scarce availability of data still limits our understanding of the intrinsic characteristics of complex pathologies like cancer: acquiring information for a system understanding of complex reaction networks is time consuming and expensive. Stimulus response experiments (SRE) have been used to gain a deeper insight into the details of biochemical mechanisms underlying cell life and functioning. Optimisation of the input time-profile, however, still remains a major area of research due to the complexity of the problem and its relevance for the task of information retrieval in systems biology-related experiments. We have addressed the problem of quantifying the information associated with an experiment using the Fisher Information Matrix and we have proposed an optimal experimental design strategy based on an evolutionary algorithm to cope with the problem of information gathering in Systems Biology. On the basis of the theoretical results obtained in the field of control systems theory, we have studied the dynamical properties of the signals to be used in cell stimulation. The results of this study have been used to develop a microfluidic device for the automation of the process of cell stimulation for system identification. We have applied the proposed approach to the Epidermal Growth Factor Receptor pathway and we observed that it minimises the amount of parametric uncertainty associated with the identified model. A statistical framework based on Monte-Carlo estimations of the uncertainty ellipsoid confirmed the superiority of optimally designed experiments over canonical inputs. The proposed approach can be easily extended to multiobjective formulations that can also take advantage of identifiability analysis. Moreover, the availability of fully automated

  11. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    Science.gov (United States)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input

  12. A policy iteration approach to online optimal control of continuous-time constrained-input systems.

    Science.gov (United States)

    Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L

    2013-09-01

    This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. Copyright © 2013 ISA. All rights reserved.
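
    The paper's actor-critic policy iteration handles input constraints with a nonquadratic cost and neural network approximators; as a minimal model-based analogue of the same PI structure, the sketch below runs Kleinman's classical policy iteration for an unconstrained continuous-time LQR problem, where policy evaluation is a Lyapunov equation and policy improvement updates the gain, and checks the result against the algebraic Riccati solution.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Policy iteration (Kleinman's algorithm) for continuous-time LQR:
#   evaluation:  (A - B K_i)^T P_i + P_i (A - B K_i) + Q + K_i^T R K_i = 0
#   improvement: K_{i+1} = R^{-1} B^T P_i
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]])

K = np.array([[1.0, 1.0]])                      # initial stabilizing gain
for i in range(10):
    Ak = A - B @ K
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)

P_are = solve_continuous_are(A, B, Q, R)
print("PI error vs. Riccati solution:", np.max(np.abs(P - P_are)))
```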

  13. Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty

    Science.gov (United States)

    Nguyen, Nhan T.

    2012-01-01

    This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.

  14. Optimal Input Strategy for Plug and Play Process Control Systems

    DEFF Research Database (Denmark)

    Kragelund, Martin Nygaard; Leth, John-Josef; Wisniewski, Rafal

    2010-01-01

    This paper considers the problem of optimal operation of a plant, which goal is to maintain production at minimum cost. The system considered in this work consists of a joined plant and redundant input systems. It is assumed that each input system contributes to a flow of goods into the joined pa...... the performance of the plant. The results are applied to a coal fired power plant where an additional new fuel system, gas, becomes available....

  15. Input-output interactions and optimal monetary policy

    DEFF Research Database (Denmark)

    Petrella, Ivan; Santoro, Emiliano

    2011-01-01

    This paper deals with the implications of factor demand linkages for monetary policy design in a two-sector dynamic general equilibrium model. Part of the output of each sector serves as a production input in both sectors, in accordance with a realistic input–output structure. Strategic...... complementarities induced by factor demand linkages significantly alter the transmission of shocks and amplify the loss of social welfare under optimal monetary policy, compared to what is observed in standard two-sector models. The distinction between value added and gross output that naturally arises...... in this context is of key importance to explore the welfare properties of the model economy. A flexible inflation targeting regime is close to optimal only if the central bank balances inflation and value added variability. Otherwise, targeting gross output variability entails a substantial increase in the loss...

  16. On Optimal Input Design for Feed-forward Control

    OpenAIRE

    Hägg, Per; Wahlberg, Bo

    2013-01-01

    This paper considers optimal input design when the intended use of the identified model is to construct a feed-forward controller based on measurable disturbances. The objective is to find a minimum power excitation signal to be used in a system identification experiment, such that the corresponding model-based feed-forward controller guarantees, with a given probability, that the variance of the output signal is within given specifications. To start with, some low order model problems are an...

  17. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to produce a good-quality dynamic model of an AUV. In an optimal input design problem, the desired input signal depends on the unknown system which is intended to be identified. In this paper, an input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Optimally decoding the input rate from an observation of the interspike intervals

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, University of Sussex at Brighton (United Kingdom) and Computational Neuroscience Laboratory, Babraham Institute, Cambridge (United Kingdom)]. E-mail: jf218@cam.ac.uk

    2001-09-21

    A neuron extensively receives both inhibitory and excitatory inputs. What is the ratio r between these two types of input at which the neuron can most accurately read out the input information (rate)? We explore this issue under the assumption that the neuron is an ideal observer, decoding the input information so as to attain the Cramér-Rao bound. It is found that, in general, adding a certain amount of inhibitory input to a neuron improves its capability of accurately decoding the input information. By calculating the Fisher information of an integrate-and-fire neuron, we determine the optimal ratio r for decoding the input information from an observation of the efferent interspike intervals. Surprisingly, the Fisher information can be zero for certain values of the ratio, seemingly implying that it is impossible to read out the encoded information at these values. By analysing the maximum likelihood estimate of the input information, it is concluded that the input information is in fact most easily estimated at the points where the Fisher information vanishes. (author)
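
    As a toy reminder of the ideal-observer benchmark used here (not the paper's integrate-and-fire calculation), the sketch below simulates exponentially distributed interspike intervals, estimates the rate by maximum likelihood, and compares the empirical estimator variance with the Cramér-Rao bound lambda^2/n. All values are illustrative.

```python
import numpy as np

# Cramér-Rao benchmark for decoding a rate from interspike intervals (ISIs).
# Toy model: ISIs ~ Exponential(rate lam); Fisher information per ISI is
# I(lam) = 1/lam**2, so the CRLB for n intervals is lam**2 / n.
rng = np.random.default_rng(1)
lam_true, n, trials = 20.0, 200, 2000

mle = np.array([n / rng.exponential(1.0 / lam_true, size=n).sum()
                for _ in range(trials)])       # MLE of the rate: n / sum(ISI)
crlb = lam_true**2 / n
print(f"empirical MLE variance: {mle.var():.3f}   CRLB: {crlb:.3f}")
```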

  19. Optimal Control of a PEM Fuel Cell for the Inputs Minimization

    Directory of Open Access Journals (Sweden)

    José de Jesús Rubio

    2014-01-01

    Full Text Available The trajectory tracking problem of a proton exchange membrane (PEM) fuel cell is considered. To solve this problem, an optimal controller is proposed. The objective of the optimal technique is that the system states reach the desired trajectories while the inputs are minimized. The proposed controller uses the Hamilton-Jacobi-Bellman method, where its Riccati equation is considered as an adaptive function. The effectiveness of the proposed technique is verified by two simulations.

  20. Workflow Optimization for Tuning Prostheses with High Input Channel

    Science.gov (United States)

    2017-10-01

    Report front-matter fragment. Recoverable information: Award Number W81XWH-16-1-0767; Title: Workflow Optimization for Tuning Prostheses with High Input Channel; Principal Investigator: Daniel Merrill; distribution unlimited, with the standard disclaimer that the views, opinions and/or findings are those of the author(s). The recovered abstract text indicates that, under Specific Aim 1, a high-level control system provides analog signals to drive a commercially available two-DoF wrist and a single-DoF hand.

  1. Optimal testing input sets for reduced diagnosis time of nuclear power plant digital electronic circuits

    International Nuclear Information System (INIS)

    Kim, D.S.; Seong, P.H.

    1994-01-01

    This paper describes the optimal testing input sets required for fault diagnosis of nuclear power plant digital electronic circuits. For complicated systems such as very large scale integration (VLSI) circuits, nuclear power plants (NPPs), and aircraft, testing is a major factor in system maintenance, and diagnosis time grows quickly with the complexity of the component. In this research, to reduce diagnosis time, the authors derived optimal testing sets, that is, the minimal testing sets required for detecting a failure and for locating the failed component. Among many conventional methods, the technique presented by Hayes fits this approach to test-set generation best. However, that method has two disadvantages: (a) it considers only simple networks, and (b) it determines only whether the system is in a failed state and does not provide a way to locate the failed component. The authors therefore derived optimal testing input sets that resolve these problems while preserving the advantages of Hayes' technique. When the optimal testing sets were applied to the automatic fault diagnosis system (AFDS), which incorporates an advanced artificial intelligence fault diagnosis method, testing of the digital electronic circuits was much faster than with exhaustive testing input sets; when they were applied to the Universal (UV) Card, a nuclear power plant digital input/output solid state protection system card, the testing time was reduced by a factor of up to about 100.
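
    The Hayes-based construction itself is not reproduced; the sketch below only illustrates the generic idea of a reduced testing input set as a set-cover problem: given a made-up table of which candidate test vectors detect which faults, a greedy pass selects a small subset that still detects every fault.

```python
# Greedy selection of a reduced test set: pick, at each step, the test input
# that detects the most not-yet-covered faults (classic set-cover heuristic).
# The detection table below is purely illustrative.
detects = {                     # test vector -> set of faults it detects
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3", "f4"},
    "t3": {"f1", "f5"},
    "t4": {"f4", "f5", "f6"},
    "t5": {"f6"},
}

uncovered = set().union(*detects.values())
chosen = []
while uncovered:
    best = max(detects, key=lambda t: len(detects[t] & uncovered))
    chosen.append(best)
    uncovered -= detects[best]

print("reduced test set:", chosen)   # ['t2', 't3', 't4'], 3 of the 5 candidates
```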

  2. Full-order optimal compensators for flow control: the multiple inputs case

    Science.gov (United States)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.

  3. Second-order Optimality Conditions for Optimal Control of the Primitive Equations of the Ocean with Periodic Inputs

    International Nuclear Information System (INIS)

    Tachim Medjo, T.

    2011-01-01

    We investigate in this article Pontryagin's maximum principle for the control problem associated with the primitive equations (PEs) of the ocean with periodic inputs. We also derive a second-order sufficient condition for optimality. This work is closely related to Wang (SIAM J. Control Optim. 41(2):583-606, 2002) and He (Acta Math. Sci. Ser. B Engl. Ed. 26(4):729-734, 2006), in which the authors proved similar results for three-dimensional Navier-Stokes (NS) systems.

  4. The optimal input optical pulse shape for the self-phase modulation based chirp generator

    Science.gov (United States)

    Zachinyaev, Yuriy; Rumyantsev, Konstantin

    2018-04-01

    This work aims to obtain the optimal shape of the input optical pulse for the proper functioning of a self-phase-modulation-based chirp generator, allowing high values of chirp frequency deviation to be achieved. During the research, the structure of a device based on the self-phase modulation effect was analyzed. The influence of the input optical pulse shape of the transmitting optical module on the chirp frequency deviation was studied. The relationship between the frequency deviation of the generated chirp and the frequency linearity was also estimated for three implementations of the pulse shape. The results of the research contribute to the theory of radio processors based on fiber-optic structures and can be used in radar, secure communications, geolocation and tomography.

  5. Input and output constraints affecting irrigation development

    Science.gov (United States)

    Schramm, G.

    1981-05-01

    In many of the developing countries the expansion of irrigated agriculture is used as a major development tool for bringing about increases in agricultural output, rural economic growth and income distribution. Apart from constraints imposed by water availability, the major limitations on any acceleration of such programs are usually thought to be costs and financial resources. However, as is shown on the basis of empirical data drawn from Mexico, in reality the feasibility and effectiveness of such development programs is even more constrained by the lack of specialized physical and human factors on the input side and by market limitations on the output side. On the input side, the limited availability of complementary factors such as, for example, truly functioning credit systems for small-scale farmers or effective agricultural extension services imposes long-term constraints on development. On the output side the limited availability, high risk, and relatively slow growth of markets for high-value crops sharply reduce the usually hoped-for and projected profitable crop mix that would warrant the frequently high costs of irrigation investments. Three conclusions are drawn: (1) Factors in limited supply have to be shadow-priced to reflect their high opportunity costs in alternative uses. (2) Re-allocation of financial resources from immediate construction of projects to a longer-term increase in the supply of scarce, highly trained manpower resources is necessary in order to optimize development over time. (3) Inclusion of high-value, high-income producing crops in the benefit-cost analysis of new projects is inappropriate if these crops could potentially be grown in already existing projects.

  6. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasting estimates is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected, and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of the RF-based input variable selection. The results of the case study show that by removing the uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model's accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
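
    An illustrative sketch of the RF-based ranking step using scikit-learn on synthetic data (the wind-speed variables and forecasting model from the study are not reproduced): a random forest scores all candidate inputs via feature_importances_, and the top-ranked subset is kept for the downstream model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# RF-based input variable selection on synthetic data: only 3 of the 10
# candidate inputs actually drive the target; the rest are noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + 0.1 * rng.standard_normal(500)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
selected = ranking[:3]                         # keep the top-ranked subset
print("selected input variables:", selected)   # expected: columns 0, 3, 7
```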

  7. Consideration of Optimal Input on Semi-Active Shock Control System

    Science.gov (United States)

    Kawashima, Takeshi

    In press working, unidirectional transmission of mechanical energy is expected in order to maximize the life of the dies. To realize this transmission, the author has developed a shock control system based on the sliding mode control technique. The controller makes a collision-receiving object effectively deform plastically by adjusting the force of the actuator inserted between the colliding objects, while the deformation of the colliding object is held to the necessary minimum. However, the actuator has to generate a large force corresponding to the impulsive force. Therefore, development of such an actuator is a formidable challenge. The author has proposed a semi-active shock control system in which the impulsive force is adjusted by a brake mechanism, although the system exhibits inferior performance. Thus, the author has also designed an actuator using a friction device for semi-active shock control, and proposed an active seatbelt system as an application. The effectiveness has been confirmed by a numerical simulation and a model experiment. In this study, the optimal deformation change of the colliding object is theoretically examined for the case in which the collision-receiving object has perfect plasticity and the colliding object has perfect elasticity. As a result, the optimal input condition is obtained such that the ratio of the maximum deformation of the collision-receiving object to the maximum deformation of the colliding object is maximized. Additionally, the energy balance is examined.

  8. On the input distribution and optimal beamforming for the MISO VLC wiretap channel

    KAUST Repository

    Arfaoui, Mohamed Amine; Rezki, Zouheir; Ghrayeb, Ali; Alouini, Mohamed-Slim

    2017-01-01

    We investigate in this paper the achievable secrecy rate of the multiple-input single-output (MISO) visible light communication (VLC) Gaussian wiretap channel with single user and single eavesdropper. We consider the cases when the location of eavesdropper is known or unknown to the transmitter. In the former case, we derive the optimal beamforming in closed form, subject to constrained inputs. In the latter case, we apply robust beamforming. Furthermore, we study the achievable secrecy rate when the input follows the truncated generalized normal (TGN) distribution. We present several examples which demonstrate the substantial improvements in the secrecy rates achieved by the proposed techniques.

  10. Design optimization of radial flux permanent magnet wind generator for highest annual energy input and lower magnet volumes

    Energy Technology Data Exchange (ETDEWEB)

    Faiz, J.; Rajabi-Sebdani, M.; Ebrahimi, B. M. (Univ. of Tehran, Tehran (Iran)); Khan, M. A. (Univ. of Cape Town, Cape Town (South Africa))

    2008-07-01

    This paper presents a multi-objective optimization method to maximize the annual energy input (AEI) and minimize the volume of permanent magnet (PM) material in use. For this purpose, an analytical model of the machine is utilized. The effects of generator specifications on the annual energy input and PM volume are then investigated. Permanent magnet synchronous generator (PMSG) parameters and dimensions are optimized using a genetic algorithm with an appropriate objective function. The results show an enhancement in PMSG performance. Finally, a 2D time-stepping finite element method (2D TSFE) is used to verify the analytical results. Comparison of the results validates the optimization method.
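
    Neither the analytical machine model nor the genetic algorithm of the paper is reproduced; the sketch below shows the same multi-objective pattern with an invented, scalarized objective (reward a stand-in annual-energy term, penalize magnet volume) solved with SciPy's evolutionary optimizer differential_evolution. The functional forms, weights and bounds are all assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Weighted-sum surrogate for a two-objective design problem:
# maximize annual energy input (AEI) while minimizing magnet volume.
# x = [magnet_length_mm, magnet_thickness_mm, stator_current_density]
def objective(x, w_energy=1.0, w_volume=0.02):
    length, thickness, j = x
    aei = 50.0 * np.tanh(0.03 * length * thickness) * np.sqrt(j)  # invented AEI model
    volume = length * thickness                                   # proxy for PM volume
    return -(w_energy * aei - w_volume * volume)                  # minimize the negative

bounds = [(10.0, 80.0), (2.0, 12.0), (1.0, 6.0)]
result = differential_evolution(objective, bounds, seed=0, tol=1e-8)
print("optimal design:", np.round(result.x, 2), "objective:", round(-result.fun, 2))
```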

  11. Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints

    Science.gov (United States)

    Yang, Xiong; Liu, Derong; Wang, Ding

    2014-03-01

    In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without the requirement for the knowledge of system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.

  12. Improved quality of input data for maintenance optimization using expert judgment

    International Nuclear Information System (INIS)

    Oien, Knut

    1998-01-01

    Most maintenance optimization models need an estimate of the so-called 'naked' failure rate function as input. In practice it is very difficult to estimate the 'naked' failure rate, because overhauls and other preventive maintenance actions tend to 'corrupt' the recorded lifelengths. The purpose of this paper is to stress the importance of utilizing the knowledge of maintenance engineers, i.e., expert judgment, in addition to recorded equipment lifelengths, in order to get credible input data. We have shown that without utilizing expert judgment, the estimated mean time to failure may be strongly biased, often by a factor of 2-3, depending on the life distribution that is assumed. We recommend including a simple question about the mean remaining lifelength on the work-order forms. By this approach the knowledge of maintenance engineers may be incorporated in a simple and cost-effective way.
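
    The expert-judgment procedure proposed in the paper is not reproduced here; the sketch below only illustrates, on synthetic data, why preventive maintenance 'corrupts' recorded lifelengths and how treating overhaul-truncated lifetimes as right-censored observations in a Weibull maximum-likelihood fit gives a far less biased mean time to failure than naively averaging the records. All parameter values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma as gamma_fn

# Weibull MLE with right-censored lifelengths (censoring = preventive overhaul).
rng = np.random.default_rng(2)
shape_true, scale_true, pm_interval = 2.0, 1000.0, 800.0

t_fail = scale_true * rng.weibull(shape_true, size=300)     # true failure times
observed = np.minimum(t_fail, pm_interval)                  # overhaul cuts them off
failed = t_fail <= pm_interval                              # False -> censored

def neg_log_lik(p):
    k, lam = np.exp(p)                                      # enforce positivity
    z = observed / lam
    ll = np.where(failed,
                  np.log(k / lam) + (k - 1) * np.log(z) - z**k,   # density term
                  -z**k)                                          # survival term
    return -ll.sum()

res = minimize(neg_log_lik, x0=np.log([1.0, np.mean(observed)]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
mttf_hat = lam_hat * gamma_fn(1.0 + 1.0 / k_hat)
print(f"naive mean of recorded lifelengths: {observed.mean():.0f}")
print(f"censoring-aware MTTF estimate:      {mttf_hat:.0f} "
      f"(true {scale_true * gamma_fn(1.5):.0f})")
```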

  13. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    Science.gov (United States)

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.

  14. Optimization model of peach production relevant to input energies – Yield function in Chaharmahal va Bakhtiari province, Iran

    International Nuclear Information System (INIS)

    Ghatrehsamani, Shirin; Ebrahimi, Rahim; Kazi, Salim Newaz; Badarudin Badry, Ahmad; Sadeghinezhad, Emad

    2016-01-01

    The aim of this study was to determine the amount of input–output energy used in peach production and to develop an optimal model of production in Chaharmahal va Bakhtiari province, Iran. Data were collected from 100 producers by administering a questionnaire in face-to-face interviews. Farms were selected based on a random sampling method. Results revealed that the total production energy is 47,951.52 MJ/ha and that the highest share of energy consumption belongs to chemical fertilizers (35.37%). Consumption of direct energy was 47.4%, while indirect energy was 52.6%. Total energy consumption was also divided into renewable and non-renewable groups (19.2% and 80.8%, respectively). Energy use efficiency, energy productivity, specific energy and net energy were calculated as 0.433, 0.228 kg/MJ, 4.38 MJ/kg and −27,161.722 MJ/ha, respectively. The negative sign of the net energy indicates a net loss of production energy in the present case; with a suitable strategy, energy losses could be reduced and the negative effect of some parameters mitigated. In addition, energy efficiency was not high enough. Part of the input energy was applied to machinery, chemical fertilizer, irrigation water and electricity, which had a significant effect on increasing production, and the MPP (marginal physical productivity) was determined for the input variables. This parameter was positive for the energy groups machinery, diesel fuel, chemical fertilizer, irrigation water and electricity, while it was negative for other kinds of energy such as chemical pesticides and human labor. Finally, there is a need to pursue a new policy that compels producers to undertake energy-efficient practices in order to establish sustainable production systems without disrupting natural resources. In addition, extension activities are needed to improve the efficiency of energy consumption and to sustain natural resources. - Highlights: • Replacing non-renewable energy with renewable
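
    The reported indices follow from standard definitions. The sketch below recomputes them from the abstract's total input energy, assuming a yield of roughly 10,930 kg/ha and an energy equivalent of about 1.9 MJ/kg for peach; both of these are back-calculated assumptions, not values quoted in the paper.

```python
# Standard energy indices for crop production (definitions, not paper data):
#   energy use efficiency = output energy / input energy
#   energy productivity   = yield / input energy        [kg/MJ]
#   specific energy       = input energy / yield        [MJ/kg]
#   net energy            = output energy - input energy
input_energy = 47951.52          # MJ/ha, from the abstract
yield_kg = 10930.0               # kg/ha, assumed (back-calculated)
energy_equivalent = 1.9          # MJ/kg of peach, assumed

output_energy = yield_kg * energy_equivalent
print("energy use efficiency:", round(output_energy / input_energy, 3))        # ~0.433
print("energy productivity  :", round(yield_kg / input_energy, 3), "kg/MJ")    # ~0.228
print("specific energy      :", round(input_energy / yield_kg, 2), "MJ/kg")    # ~4.39
print("net energy           :", round(output_energy - input_energy, 1), "MJ/ha")  # negative
```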

  15. Development of MIDAS/SMR Input Deck for SMART

    International Nuclear Information System (INIS)

    Cho, S. W.; Oh, H. K.; Lee, J. M.; Lee, J. H.; Yoo, K. J.; Kwun, S. K.; Hur, H.

    2010-01-01

    The objective of this study is to develop a basic MIDAS/SMR code input deck for severe accidents by simulating the steady state of the SMART plant. SMART is an integrated reactor developed by KAERI. For the assessment of reactor safety and the severe accident management strategy, it is necessary to simulate severe accidents using the MIDAS/SMR code, which is being developed by KAERI. The input deck of the MIDAS/SMR code for the SMART plant is prepared so that users who are not familiar with the code can simulate severe accident sequences. A steady state is obtained and the results are compared with design values. The input deck will be improved through simulation of the DBAs and severe accidents. After this improvement, the base input deck of the MIDAS/SMR code can be used to simulate severe accident scenarios. Source terms and hydrogen generation can be analyzed through severe accident simulation. The information gained from analyses of severe accidents is expected to be helpful in developing the severe accident management strategy.

  16. Input price risk and optimal timing of energy investment: choice between fossil- and biofuels

    Energy Technology Data Exchange (ETDEWEB)

    Murto, Pauli; Nese, Gjermund

    2002-05-01

    We consider energy investment when a choice has to be made between fossil-fuel-fired and biomass-fired production technologies. A dynamic model is presented to illustrate the effect of the different degrees of input price uncertainty on the choice of technology and the timing of the investment. It is shown that when the choice of technology is irreversible, it may be optimal to postpone the investment even if it would otherwise be optimal to invest in one or both of the plant types. We provide a numerical example based on cost estimates for two different power plant types. (author)

  17. Input price risk and optimal timing of energy investment: choice between fossil- and biofuels

    International Nuclear Information System (INIS)

    Murto, Pauli; Nese, Gjermund

    2002-01-01

    We consider energy investment when a choice has to be made between fossil-fuel-fired and biomass-fired production technologies. A dynamic model is presented to illustrate the effect of the different degrees of input price uncertainty on the choice of technology and the timing of the investment. It is shown that when the choice of technology is irreversible, it may be optimal to postpone the investment even if it would otherwise be optimal to invest in one or both of the plant types. We provide a numerical example based on cost estimates for two different power plant types. (author)

  18. Identifying Inputs to Leadership Development within an Interdisciplinary Leadership Minor

    Science.gov (United States)

    McKim, Aaron J.; Sorensen, Tyson J.; Velez, Jonathan J.

    2015-01-01

    Researchers conducted a qualitative analysis of students' experiences while enrolled in an interdisciplinary leadership minor with the intent to determine programmatic inputs that spur leadership development. Based on students' reflections, three domains of programmatic inputs for leadership development within the minor were identified. These…

  19. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    Science.gov (United States)

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, while static FDG-PET did not show promise. Because the liver has a dual blood supply, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with a population-based dual-blood input function (DBIF), and a modified model with an optimization-derived DBIF obtained through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), the F test and a histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with the traditional SBIF and population-based DBIF models. K1 from the optimization-derived model was significantly associated with histopathologic grades of liver inflammation, while the other two models did not reach statistical significance. In conclusion, modeling of the DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than the SBIF and population-based DBIF models for dynamic FDG-PET of liver inflammation.
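
    The kinetic models themselves are not reimplemented here; the sketch below only shows the model-selection arithmetic such a comparison rests on: for least-squares fits with Gaussian residuals, AIC can be computed from the residual sum of squares and the number of fitted parameters, and the model with the lowest AIC is preferred. The residual values are invented, not taken from the study.

```python
import numpy as np

def aic_from_rss(rss, n_points, n_params):
    """AIC for a least-squares fit with Gaussian residuals: n*ln(RSS/n) + 2k."""
    return n_points * np.log(rss / n_points) + 2 * n_params

# Invented residual sums of squares for one liver time-activity curve (n = 45 frames)
n = 45
candidates = {
    "single-blood input (SBIF), 4 params": (0.120, 4),
    "population-based DBIF, 4 params":     (0.095, 4),
    "optimization-derived DBIF, 7 params": (0.040, 7),
}
for name, (rss, k) in candidates.items():
    print(f"{name:40s} AIC = {aic_from_rss(rss, n, k):7.1f}")
# The lowest AIC wins; extra DBIF parameters are only favored if they
# reduce the residuals enough to offset the 2k penalty.
```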

  20. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    Science.gov (United States)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use

  1. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    Science.gov (United States)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce control consumption and protect the actuator while satisfying the system performance requirements. First, we introduce a dynamic model of the offshore platform with low-order main modes based on a mode reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since this non-convex optimization model is difficult to solve directly, we use a relaxation method based on matrix operations to transform it into a convex optimization model, which can then be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  2. Imported Input Varieties and Product Innovation : Evidence from Five Developing Countries

    NARCIS (Netherlands)

    Bos, Marijke; Vannoorenberghe, Gonzague

    We examine how access to imported intermediate inputs affects firm-level product innovation in five developing countries. We combine trade data with survey data on innovation and develop a method to determine whether new inputs were essential for the product innovation. We find evidence that the

  3. F-18 High Alpha Research Vehicle (HARV) parameter identification flight test maneuvers for optimal input design validation and lateral control effectiveness

    Science.gov (United States)

    Morelli, Eugene A.

    1995-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.
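
    The actual OBES maneuver specifications are given in the report, not here; the sketch below only illustrates the generic step of turning a time/amplitude breakpoint list into a piecewise-constant (square-wave) input time history sampled at a fixed rate, using invented breakpoints.

```python
import numpy as np

def square_wave_input(breakpoints, dt=0.02, t_end=None):
    """Piecewise-constant input from (time, amplitude) breakpoints."""
    times, amps = zip(*breakpoints)
    t_end = t_end if t_end is not None else times[-1] + 1.0
    t = np.arange(0.0, t_end, dt)
    idx = np.searchsorted(times, t, side="right") - 1
    u = np.where(idx >= 0, np.asarray(amps)[np.clip(idx, 0, None)], 0.0)
    return t, u

# Invented doublet-style surface command (deg): 2 s each at +2, -2, then zero.
bp = [(1.0, 2.0), (3.0, -2.0), (5.0, 0.0)]
t, u = square_wave_input(bp, dt=0.02, t_end=8.0)
print(u[::50])   # u is 0 until t = 1 s, then +2, -2, and back to 0
```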

  4. Investigation, development and application of optimal output feedback theory. Volume 2: Development of an optimal, limited state feedback outer-loop digital flight control system for 3-D terminal area operation

    Science.gov (United States)

    Broussard, J. R.; Halyo, N.

    1984-01-01

    This report contains the development of a digital outer-loop three dimensional radio navigation (3-D RNAV) flight control system for a small commercial jet transport. The outer-loop control system is designed using optimal stochastic limited state feedback techniques. Options investigated using the optimal limited state feedback approach include integrated versus hierarchical control loop designs, 20 samples per second versus 5 samples per second outer-loop operation and alternative Type 1 integration command errors. Command generator tracking techniques used in the digital control design enable the jet transport to automatically track arbitrary curved flight paths generated by waypoints. The performance of the design is demonstrated using detailed nonlinear aircraft simulations in the terminal area, frequency domain multi-input sigma plots, frequency domain single-input Bode plots and closed-loop poles. The response of the system to a severe wind shear during a landing approach is also presented.

  5. Input vector optimization of feed-forward neural networks for fitting ab initio potential-energy databases

    Science.gov (United States)

    Malshe, M.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Komanduri, R.

    2010-05-01

    to permit error minimization with respect to n as well as the weights and biases of the NN, the optimum powers were all found to lie in the range of 1.625-2.38 for the four systems studied. No statistically significant increase in fitting accuracy was achieved for vinyl bromide when a different value of n was employed and optimized for each bond type. The rate of change in the fitting error with n is found to be very small when n is near its optimum value. Consequently, good fitting accuracy can be achieved by employing a value of n in the middle of the above range. The use of interparticle distances as elements of the input vector rather than the Z-matrix variables employed in the electronic structure calculations is found to reduce the rms fitting errors by factors of 8.86 and 1.67 for Si5 and vinyl bromide, respectively. If the interparticle distances are replaced with input elements of the form Rij^(-n) with n optimized, further reductions in the rms error by a factor of 1.31 to 2.83 for the four systems investigated are obtained. A major advantage of using this procedure to increase NN fitting accuracy rather than increasing the number of neurons or the size of the database is that the required increase in computational effort is very small.
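
    As a small, self-contained illustration of the input transformation described here (not the authors' network, data or fitting procedure), the sketch below builds a synthetic pairwise-potential dataset and compares a scikit-learn feed-forward network trained on raw interparticle distances with one trained on Rij^(-n) input elements, using n = 2 from the middle of the quoted range.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic "potential energy" built from pairwise terms in 3 interparticle
# distances; compare fitting on raw R_ij vs. transformed R_ij**(-n), n = 2.
rng = np.random.default_rng(0)
R = rng.uniform(1.0, 4.0, size=(2000, 3))
E = (R**-12 - R**-6).sum(axis=1)           # Lennard-Jones-like target

def fit_rmse(X):
    Xtr, Xte, ytr, yte = train_test_split(X, E, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(40, 40), max_iter=3000,
                       random_state=0).fit(Xtr, ytr)
    return np.sqrt(np.mean((net.predict(Xte) - yte) ** 2))

print("RMSE with raw distances      :", round(fit_rmse(R), 4))
print("RMSE with R**(-2) input terms:", round(fit_rmse(R**-2.0), 4))
```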

  6. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    Science.gov (United States)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

    r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to withdraw from discrete input values which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III) which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This
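
    A schematic of the nested subrange idea, assuming a placeholder in place of r.randomwalk: each parameter range is split into subranges, every combination of subranges is sampled, and the resulting impact scores are ranked against an observed impact mask with AUROC (ties are handled naively here). Function names and the sampling scheme are illustrative only; the model call is left unimplemented.

        import itertools
        import numpy as np

        def auroc(scores, labels):
            """Rank-based AUROC of continuous scores against a binary impact mask."""
            order = np.argsort(scores)
            ranks = np.empty_like(order, dtype=float)
            ranks[order] = np.arange(1, len(scores) + 1)
            pos = labels.astype(bool)
            n_pos, n_neg = pos.sum(), (~pos).sum()
            return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        def subranges(lo, hi, k):
            """Split [lo, hi] into k contiguous subranges."""
            edges = np.linspace(lo, hi, k + 1)
            return list(zip(edges[:-1], edges[1:]))

        def run_model(param_samples):
            """Placeholder for the mass flow model: should return an impact score per pixel."""
            raise NotImplementedError

        def evaluate(param_ranges, observed_mask, k=3, n_samples=20):
            """Score every combination of parameter subranges by AUROC."""
            grids = [subranges(lo, hi, k) for lo, hi in param_ranges]
            rng = np.random.default_rng(1)
            results = []
            for combo in itertools.product(*grids):
                samples = np.column_stack([rng.uniform(lo, hi, n_samples)
                                           for lo, hi in combo])
                iii = run_model(samples)   # fraction of runs impacting each pixel
                results.append((combo, auroc(iii.ravel(), observed_mask.ravel())))
            return sorted(results, key=lambda r: -r[1])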

  7. Development of NUPREP PC Version and Input Structures for NUCIRC Single Channel Analyses

    International Nuclear Information System (INIS)

    Yoon, Churl; Jun, Ji Su; Park, Joo Hwan

    2007-12-01

    The input file for the steady-state thermal-hydraulic code NUCIRC consists of common channel input data and specific channel input data in the case of a single channel analysis. Even when all the data are ready for the 380 channels' single channel analyses, it takes a long time and requires enormous effort to compose an input file by hand-editing. The automatic pre-processor for this tedious job is the NUPREP code. In this study, a NUPREP PC version has been developed from the source list in the program manual of NUCIRC-MOD2.000, which is imported in the form of an executable file. In this procedure, some errors found in PC executions and lost statements were fixed accordingly. It is confirmed that the developed NUPREP code produces the input file correctly for the CANDU-6 single channel analysis. Additionally, the NUCIRC input structure and data format are summarized for a single channel analysis, and the input CARDs required for the creep information of aged channels are listed

  8. Development of a fuzzy logic method to build objective functions in optimization problems: application to BWR fuel lattice design

    International Nuclear Information System (INIS)

    Martin-del-Campo, C.; Francois, J.L.; Barragan, A.M.; Palomera, M.A.

    2005-01-01

    In this paper we develop a methodology based on the use of the Fuzzy Logic technique to build multi-objective functions to be used in optimization processes applied to in-core nuclear fuel management. As an example, we selected the problem of determining optimal radial fuel enrichment and gadolinia distributions in a typical Boiling Water Reactor (BWR) fuel lattice. The methodology is based on the use of the mathematical capability of Fuzzy Logic to model nonlinear functions of arbitrary complexity. The utility of Fuzzy Logic is to map an input space into an output space, and the primary mechanism for doing this is a list of if-then statements called rules. The rules refer to variables and to adjectives that describe those variables; the Fuzzy Logic technique interprets the values in the input vectors and, based on the set of rules, assigns values to the output vector. The methodology was developed for the radial optimization of a BWR lattice, where the optimization algorithm employed is Tabu Search. The global objective is to find the optimal distribution of enrichments and burnable poison concentrations in a 10×10 BWR lattice. In order to do that, a fuzzy control inference system was developed using the Fuzzy Logic Toolbox of Matlab and linked to the Tabu Search optimization process. Results show that Tabu Search combined with Fuzzy Logic performs very well, obtaining lattices with optimal fuel utilization. (authors)
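
    A heavily simplified sketch of how if-then rules over lattice figures of merit can be collapsed into a single scalar objective for a search algorithm such as Tabu Search. The variables, membership functions, and rules below are hypothetical and do not reproduce the Matlab Fuzzy Logic Toolbox system of the paper.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with support [a, c] and peak at b."""
            return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

        def fuzzy_objective(k_inf, peaking):
            """Scalar fitness from two hypothetical lattice figures of merit.

            Illustrative rules only:
              IF k_inf is 'on target' AND peaking is 'low'  THEN fitness is high
              IF peaking is 'high'                          THEN fitness is low
            """
            k_on_target = tri(k_inf, 1.02, 1.05, 1.08)
            peak_low = tri(peaking, 1.0, 1.1, 1.3)
            peak_high = tri(peaking, 1.3, 1.6, 2.0)
            w_good = min(k_on_target, peak_low)   # rule firing strengths
            w_bad = peak_high
            # Weighted-average defuzzification of the two rule outputs (1.0 and 0.0).
            return (1.0 * w_good + 0.0 * w_bad) / (w_good + w_bad + 1e-9)

        # A Tabu Search loop would call fuzzy_objective(...) to rank candidate lattices.
        print(fuzzy_objective(k_inf=1.05, peaking=1.15))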

  9. Brain Emotional Learning Based Intelligent Decoupler for Nonlinear Multi-Input Multi-Output Distillation Columns

    Directory of Open Access Journals (Sweden)

    M. H. El-Saify

    2017-01-01

    The distillation process is vital in many fields of chemical industries, such as the two coupled distillation columns that are usually highly nonlinear Multi-Input Multi-Output (MIMO) coupled processes. The control of a MIMO process is usually implemented via a decentralized approach using a set of Single-Input Single-Output (SISO) loop controllers. Decoupling the MIMO process into a group of single loops requires proper input-output pairing and the development of a decoupling compensator unit. This paper proposes a novel intelligent decoupling approach for MIMO processes based on a new MIMO brain emotional learning architecture. A MIMO architecture of the Brain Emotional Learning Based Intelligent Controller (BELBIC) is developed and applied as a decoupler for a 4-input/4-output highly nonlinear coupled distillation column process. Moreover, the performance of the proposed Brain Emotional Learning Based Intelligent Decoupler (BELBID) is enhanced using the Particle Swarm Optimization (PSO) technique. The performance is compared with a PSO-optimized steady-state decoupling compensation matrix. Mathematical models of the distillation columns and the decouplers are built and tested in a simulation environment by applying the same inputs. The results prove the remarkable success of the BELBID in minimizing the loop interactions without degrading the output that every input has been paired with.

  10. Development of NUPREP PC Version and Input Structures for NUCIRC Single Channel Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Churl; Jun, Ji Su; Park, Joo Hwan

    2007-12-15

    The input file for the steady-state thermal-hydraulic code NUCIRC consists of common channel input data and specific channel input data in the case of a single channel analysis. Even when all the data are ready for the 380 channels' single channel analyses, it takes a long time and requires enormous effort to compose an input file by hand-editing. The automatic pre-processor for this tedious job is the NUPREP code. In this study, a NUPREP PC version has been developed from the source list in the program manual of NUCIRC-MOD2.000, which is imported in the form of an executable file. In this procedure, some errors found in PC executions and lost statements were fixed accordingly. It is confirmed that the developed NUPREP code produces the input file correctly for the CANDU-6 single channel analysis. Additionally, the NUCIRC input structure and data format are summarized for a single channel analysis, and the input CARDs required for the creep information of aged channels are listed.

  11. Development of the interface system of the master-slave manipulator and external input device on the graphic simulator

    International Nuclear Information System (INIS)

    Song, T. J.; Lee, J. Y.; Kim, S. H.; Yoon, J. S.

    2002-01-01

    The master-slave manipulator is generally used as a remote handling device in a hot cell, in which high-level radioactive materials such as spent fuels are handled. To analyze the motion of the remote handling device and to simulate remote handling operation tasks in the hot cell, a 3D graphic simulator in which the master-slave manipulator has been installed is established. Also, an interface program for an external input device with 6 DOF (degrees of freedom) is developed and connected to the graphic simulator via LLTI (Low Level Tele-operation Interface), which provides a uniquely optimized, high-speed, bidirectional communication interface to one or more systems and processes

  12. Development of the Controller Input Power of a Peripheral Interfacing Controller Using Another Microcontroller

    International Nuclear Information System (INIS)

    Syirrazie Che Soh; Harzawardi Hashim; Nor Arymaswati Abdullah; Nur Aira Abdul Rahman; Mohd Ashhar Khalid

    2011-01-01

    This controller for the input power of a Peripheral Interfacing Controller (PIC) was developed using another microcontroller. This paper discusses the switching technique, practiced using proper electronic devices to develop the controller, which enables control of the input power of a PIC in order to expand its interfacing capacity and control. This may allow the PIC to be used to acquire input signals and control output signals from electronic and electromechanical devices and instruments, as well as software, on a wide scale and in a wide range of applications. (author)

  13. User input in iterative design for prevention product development: leveraging interdisciplinary methods to optimize effectiveness.

    Science.gov (United States)

    Guthrie, Kate M; Rosen, Rochelle K; Vargas, Sara E; Guillen, Melissa; Steger, Arielle L; Getz, Melissa L; Smith, Kelley A; Ramirez, Jaime J; Kojic, Erna M

    2017-10-01

    The development of HIV-preventive topical vaginal microbicides has been challenged by a lack of sufficient adherence in later stage clinical trials to confidently evaluate effectiveness. This dilemma has highlighted the need to integrate translational research earlier in the drug development process, essentially applying behavioral science to facilitate the advances of basic science with respect to the uptake and use of biomedical prevention technologies. In the last several years, there has been an increasing recognition that the user experience, specifically the sensory experience, as well as the role of meaning-making elicited by those sensations, may play a more substantive role than previously thought. Importantly, the role of the user-their sensory perceptions, their judgements of those experiences, and their willingness to use a product-is critical in product uptake and consistent use post-marketing, ultimately realizing gains in global public health. Specifically, a successful prevention product requires an efficacious drug, an efficient drug delivery system, and an effective user. We present an integrated iterative drug development and user experience evaluation method to illustrate how user-centered formulation design can be iterated from the early stages of preclinical development to leverage the user experience. Integrating the user and their product experiences into the formulation design process may help optimize both the efficiency of drug delivery and the effectiveness of the user.

  14. Off-line learning from clustered input examples

    NARCIS (Netherlands)

    Marangi, Carmela; Solla, Sara A.; Biehl, Michael; Riegler, Peter; Marinaro, Maria; Tagliaferri, Roberto

    1996-01-01

    We analyze the generalization ability of a simple perceptron acting on a structured input distribution for the simple case of two clusters of input data and a linearly separable rule. The generalization ability computed for three learning scenarios: maximal stability, Gibbs, and optimal learning, is

  15. Development of the RETRAN input model for Ulchin 3/4 visual system analyzer

    International Nuclear Information System (INIS)

    Lee, S. W.; Kim, K. D.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.; Hwang, M. K.

    2004-01-01

    As a part of the Long-Term Nuclear R and D program, KAERI has developed the so-called Visual System Analyzer (ViSA) based on best-estimate codes. The MARS and RETRAN codes are used as the best-estimate codes for ViSA. Between these two codes, the RETRAN code is used for realistic analysis of Non-LOCA transients and small-break loss-of-coolant accidents, for which the break size is less than 3 inches in diameter. So it is necessary to develop the RETRAN input model for the Ulchin 3/4 plants (KSNP). In recognition of this, the RETRAN input model for the Ulchin 3/4 plants has been developed. This report includes the input model requirements and the calculation note for the input data generation (see the Appendix). In order to confirm the validity of the input data, the calculations are performed for a steady state at 100 % power operation condition, an inadvertent reactor trip and an RCP trip. The results of the steady-state calculation agree well with the design data. The results of the other transient calculations seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the RETRAN input data can be used as a base input deck for the RETRAN transient analyzer for Ulchin 3/4. Moreover, it is found that the Core Protection Calculator (CPC) module, which was modified by the Korea Electric Power Research Institute (KEPRI), is well adapted to ViSA

  16. A software development for establishing optimal production lots and its application in academic and business environments

    Directory of Open Access Journals (Sweden)

    Javier Valencia Mendez

    2014-09-01

    The recent global economic downturn has increased an already perceived need in organizations for cost savings. To cope with this need, companies can opt for different strategies. This paper focuses on optimizing processes and, more specifically, on determining the optimal production lot. To determine the optimal lot for a specific production process, new software was developed that not only incorporates various productive and logistical elements in its calculations but also affords users a practical way to manage the large number of input parameters required to determine the optimal batch. The developed software has not only been validated by several companies, both Spanish and Mexican, which achieved significant savings, but has also been used as a teaching tool in universities, with highly satisfactory results from the point of view of student learning. A special contribution of this work is that the developed tool can be sent to the interested reader free of charge upon request.
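
    The software described above accounts for many productive and logistical parameters; as a point of reference, here is a minimal sketch of the textbook economic production quantity (EPQ) lot size, which such a tool presumably generalizes. The numerical example is hypothetical.

        from math import sqrt

        def epq_lot_size(demand_rate, setup_cost, holding_cost, production_rate):
            """Classical economic production quantity for a finite production rate.

            Q* = sqrt(2*D*S / (H * (1 - D/P))); the paper's software adds further
            productive and logistical factors beyond this textbook baseline."""
            if production_rate <= demand_rate:
                raise ValueError("production rate must exceed demand rate")
            return sqrt(2 * demand_rate * setup_cost /
                        (holding_cost * (1 - demand_rate / production_rate)))

        # Example: 12000 units/yr demand, 150 per setup, 2.4 per unit-year holding,
        # 40000 units/yr capacity.
        print(epq_lot_size(12000, 150.0, 2.4, 40000))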

  17. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  18. Development and validation of gui based input file generation code for relap

    International Nuclear Information System (INIS)

    Anwar, M.M.; Khan, A.A.; Chughati, I.R.; Chaudri, K.S.; Inyat, M.H.; Hayat, T.

    2009-01-01

    The Reactor Excursion and Leak Analysis Program (RELAP) is a widely accepted computer code for thermal-hydraulic modeling of nuclear power plants. It calculates thermal-hydraulic transients in water-cooled nuclear reactors by solving approximations to the one-dimensional, two-phase equations of hydraulics in an arbitrarily connected system of nodes. However, the preparation of the input file and the subsequent analysis of results in this code is a tedious task. A Graphical User Interface (GUI) for preparation of the RELAP-5 input file has been developed, together with validation of the GUI-generated input file. The GUI is developed in Microsoft Visual Studio using Visual C Sharp (C#) as the programming language. The nodalization diagram is drawn graphically, and the program contains various component forms along with a starting data form, which are launched for property assignment to generate the input file cards, serving as a GUI for the user. The GUI is provided with Open/Save functions to store and recall the nodalization diagram along with the components' properties. The GUI-generated input file is validated for several case studies, and individual component cards are compared with the originally required format. The generated input file is found to be consistent with the requirements of RELAP. The GUI provided a useful platform for simulating complex hydrodynamic problems efficiently with RELAP. (author)

  19. Development of an optimized procedure bridging design and structural analysis codes for the automatized design of the SMART

    International Nuclear Information System (INIS)

    Kim, Tae Wan; Park, Keun Bae; Choi, Suhn; Kim, Kang Soo; Jeong, Kyeong Hoon; Lee, Gyu Mahn

    1998-09-01

    In this report, an optimized design and analysis procedure is established for application to the SMART (System-integrated Modular Advanced ReacTor) development. The aim of the optimized procedure is to minimize time consumption and engineering effort by streamlining the design and feedback interactions. To achieve this goal, the data and information generated through the design development should be directly transferred to the analysis program with minimal manual operation. The verification of the design concept requires considerable effort, since the communication between design and analysis involves a time-consuming stage for the conversion of input information. In this report, an optimized procedure bridging the design and analysis stages is established utilizing IDEAS, ABAQUS and ANSYS. (author). 3 refs., 2 tabs., 5 figs

  20. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    In the fields which require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to form an idea of the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus, 11th Edition, and the obtained results have been compared with classical optimization solutions.

  1. Phasing Out a Polluting Input

    OpenAIRE

    Eriksson, Clas

    2015-01-01

    This paper explores economic policies related to the potential conflict between economic growth and the environment. It applies a model with directed technological change and focuses on the case with low elasticity of substitution between clean and dirty inputs in production. New technology is substituted for the polluting input, which results in a gradual decline in pollution along the optimal long-run growth path. In contrast to some recent work, the era of pollution and environmental polic...

  2. Development of the MARS input model for Kori nuclear units 1 transient analyzer

    International Nuclear Information System (INIS)

    Hwang, M.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.

    2004-11-01

    KAERI has been developing the 'NSSS transient analyzer' based on best-estimate codes for the Kori Nuclear Unit 1 plant. The MARS and RETRAN codes have been used as the best-estimate codes for the NSSS transient analyzer. Among these codes, the MARS code is adopted for realistic analysis of small- and large-break loss-of-coolant accidents, for which the break size is greater than 2 inches in diameter. So it is necessary to develop the MARS input model for the Kori Nuclear Unit 1 plant. This report includes the input model (hydrodynamic component and heat structure models) requirements and the calculation note for the MARS input data generation for the Kori Nuclear Unit 1 plant analyzer (see the Appendix). In order to confirm the validity of the input data, we performed the calculations for a steady state at 100 % power operation condition and a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Kori Nuclear Unit 1

  3. Adjoint-based Mesh Optimization Method: The Development and Application for Nuclear Fuel Analysis

    International Nuclear Information System (INIS)

    Son, Seongmin; Lee, Jeong Ik

    2016-01-01

    In this research, a method for optimizing the mesh distribution is proposed. The proposed method uses an adjoint-based optimization method (adjoint method). The optimized result is obtained by applying this meshing technique to the existing code input deck and is compared to the results produced by the uniform meshing method. Numerical solutions are calculated from an in-house 1D Finite Difference Method code while neglecting the axial conduction. The fuel radial node optimization was first performed to best match the Fuel Centerline Temperature (FCT). This was followed by optimizing the axial nodes to best match the Peak Cladding Temperature (PCT). After obtaining the optimized radial and axial nodes, the nodalization is implemented into the system analysis code and transient analyses were performed to observe the performance of the optimum nodalization. The adjoint-based mesh optimization method developed in this study is applied to MARS-KS, which is a nuclear system analysis code. Results show that the newly established method yields better results than the uniform meshing method from the numerical point of view. It is again stressed that the mesh optimized for the steady state can also give better numerical results even during a transient analysis

  4. Design and development of cell queuing, processing, and scheduling modules for the iPOINT input-buffered ATM testbed

    Science.gov (United States)

    Duan, Haoran

    1997-12-01

    heuristic strategy that leads to 'socially optimal' solutions, yielding a maximum number of contention-free cells being scheduled. A novel mixed digital-analog circuit has been designed to implement the MUCS core functionality. The MUCS circuit maps the cell scheduling computation to the capacitor charging and discharging procedures that are conducted fully in parallel. The design has a uniform circuit structure, low interconnect counts, and low chip I/O counts. Using 2 μm CMOS technology, the design operates on a 100 MHz clock and finds a near-optimal solution within a linear processing time. The circuit has been verified at the transistor level by HSPICE simulation. During this research, a five-port IQ-based optoelectronic iPOINT ATM switch has been developed and demonstrated. It has been fully functional with an aggregate throughput of 800 Mb/s. The second-generation IQ-based switch is currently under development. Equipped with iiQueue modules and MUCS module, the new switch system will deliver a multi-gigabit aggregate throughput, eliminate HOL blocking, provide per-VC QoS, and achieve near-100% link bandwidth utilization. Complete documentation of input modules and trunk module for the existing testbed, and complete documentation of 3DQ, iiQueue, and MUCS for the second-generation testbed are given in this dissertation.

  5. Thermoeconomic optimization of small size central air conditioner

    International Nuclear Information System (INIS)

    Zhang, G.Q.; Wang, L.; Liu, L.; Wang, Z.

    2004-01-01

    The application of thermoeconomic optimization design to an air-conditioning system is important in achieving an economical life cycle cost. Previous work on thermoeconomic optimization mainly focused on directly calculating the exergy input into the system. However, it is usually difficult to do so because of the uncertainty of the input power of the fan on the air side of the heat exchanger and that of the pump in the system. This paper introduces a new concept: the exergy input into the system can be replaced by the sum of the exergy destruction and the exergy output from the system, according to the conservation of exergy. Although it is also difficult for a large-scale system to calculate the exergy destruction, it is feasible to do so for a small-scale system, for instance a villa air conditioner (VAC). In order to perform the thermoeconomic optimization, a program is first developed to evaluate the thermodynamic properties of HFC134a on the basis of the Martin-Hou equation of state. The authors develop thermodynamic and thermoeconomic objective functions based on second-law and thermoeconomic analyses of the VAC system. Two optimization results are obtained. A VAC design aimed only at decreasing energy consumption is not comprehensive. The life cycle cost at the thermoeconomic optimum is lower than that at the thermodynamic optimum

  6. Optimal control methods for rapidly time-varying Hamiltonians

    International Nuclear Information System (INIS)

    Motzoi, F.; Merkel, S. T.; Wilhelm, F. K.; Gambetta, J. M.

    2011-01-01

    In this article, we develop a numerical method to find optimal control pulses that accounts for the separation of timescales between the variation of the input control fields and the applied Hamiltonian. In traditional numerical optimization methods, these timescales are treated as being the same. While this approximation has had much success, in applications where the input controls are filtered substantially or mixed with a fast carrier, the resulting optimized pulses have little relation to the applied physical fields. Our technique remains numerically efficient in that the dimension of our search space is only dependent on the variation of the input control fields, while our simulation of the quantum evolution is accurate on the timescale of the fast variation in the applied Hamiltonian.

  7. Development of the MARS input model for Ulchin 1/2 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.

    2003-03-01

    KAERI has been developing the NSSS transient analyzer based on best-estimate codes for Ulchin 1/2 plants. The MARS and RETRAN codes are used as the best-estimate codes for the NSSS transient analyzer. Of the two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents, for which the break size is greater than 2 inches in diameter. This report includes the input model requirements and the calculation note for the Ulchin 1/2 MARS input data generation (see the Appendix). In order to confirm the validity of the input data, we performed the calculations for a steady state at 100 % power operation condition and a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 1/2

  8. Development of the MARS input model for Ulchin 3/4 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Hwang, M. G.

    2003-12-01

    KAERI has been developing the NSSS transient analyzer based on best-estimate codes. The MARS and RETRAN codes are adopted as the best-estimate codes for the NSSS transient analyzer. Of these two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents, for which the break size is greater than 2 inches in diameter. This report includes the MARS input model requirements and the calculation note for the MARS input data generation (see the Appendix) for the Ulchin 3/4 plant analyzer. In order to confirm the validity of the input data, we performed the calculations for a steady state at 100 % power operation condition and a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 3/4

  9. Fuzzy logic control and optimization system

    Science.gov (United States)

    Lou, Xinsheng [West Hartford, CT

    2012-04-17

    A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  10. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    Science.gov (United States)

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors decide their treatment from the aspect of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture oriented to clinical application. The whole system encompassed the following three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of the bone, parametric modeling of the fracture face, parametric modeling of the fixation screws and fixation positions, and input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting and batch processing operation. The post-processing module included extraction and display of the batch processing results, image generation for the batch processing operation, execution of the optimization program and display of the optimal result. The system implemented the whole workflow, from input of the fracture parameters to output of the optimal fixation plan according to the specific patient's real fracture parameters and the optimization rules, which demonstrated the effectiveness of the system. Meanwhile, the system had a friendly interface and simple operation, and its functions can be improved quickly by modifying a single module.

  11. Development of Input/Output System for the Reactor Transient Analysis System (RETAS)

    International Nuclear Information System (INIS)

    Suh, Jae Seung; Kang, Doo Hyuk; Cho, Yeon Sik; Ahn, Seung Hoon; Cho, Yong Jin

    2009-01-01

    The Korea Institute of Nuclear Safety Reactor Transient Analysis System (KINS-RETAS) aims at providing a realistic prediction of core and RCS response to potential or actual event scenarios in Korean nuclear power plants (NPPs). The thermal-hydraulic system code MARS is the pivot code of the RETAS and is used to predict thermal-hydraulic (TH) behaviors in the core and associated systems. MARS alone can be applied to many types of transients, but is sometimes coupled with other codes developed for different objectives. Many tools have been developed to aid users in preparing input and displaying the transient information and output data. These include output-file tools and Graphical User Interfaces (GUIs) that help prepare input decks, as seen in SNAP (Gitnick, 1998) and VISA (K.D. Kim, 2007), and display aids such as eFAST (KINS, 2007). The tools listed above are graphical interfaces. The input deck builders allow the user to create a functional diagram of the plant pictorially on the screen. The functional diagram, when annotated with control volume and junction numbers, is a nodalization diagram. Data required for an input deck are entered for volumes and junctions through a mouse-driven menu and pop-up dialogs; after the information is complete, an input deck is generated. Display GUIs show data from MARS calculations, either during or after the transient. The RETAS requires the user to first generate a set of 'input', two-dimensional pictures of the plant on which some of the data are displayed either numerically or with a color map. The RETAS can generate XY-plots of the data. Time histories of plant conditions can be seen via the plots or through the RETAS's replay mode. The user input was combined with design input from MARS developers and experts from both the GUI and ergonomics fields. A partial list of capabilities follows. - 3D display for neutronics. - Easier method (less user time and effort) to generate 'input' for the 3D displays. - Detailed view of data at volume or

  12. Development of Input/Output System for the Reactor Transient Analysis System (RETAS)

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Jae Seung; Kang, Doo Hyuk; Cho, Yeon Sik [ENESYS, Daejeon (Korea, Republic of); Ahn, Seung Hoon; Cho, Yong Jin [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2009-05-15

    The Korea Institute of Nuclear Safety Reactor Transient Analysis System (KINS-RETAS) aims at providing a realistic prediction of core and RCS response to potential or actual event scenarios in Korean nuclear power plants (NPPs). The thermal-hydraulic system code MARS is the pivot code of the RETAS and is used to predict thermal-hydraulic (TH) behaviors in the core and associated systems. MARS alone can be applied to many types of transients, but is sometimes coupled with other codes developed for different objectives. Many tools have been developed to aid users in preparing input and displaying the transient information and output data. These include output-file tools and Graphical User Interfaces (GUIs) that help prepare input decks, as seen in SNAP (Gitnick, 1998) and VISA (K.D. Kim, 2007), and display aids such as eFAST (KINS, 2007). The tools listed above are graphical interfaces. The input deck builders allow the user to create a functional diagram of the plant pictorially on the screen. The functional diagram, when annotated with control volume and junction numbers, is a nodalization diagram. Data required for an input deck are entered for volumes and junctions through a mouse-driven menu and pop-up dialogs; after the information is complete, an input deck is generated. Display GUIs show data from MARS calculations, either during or after the transient. The RETAS requires the user to first generate a set of 'input', two-dimensional pictures of the plant on which some of the data are displayed either numerically or with a color map. The RETAS can generate XY-plots of the data. Time histories of plant conditions can be seen via the plots or through the RETAS's replay mode. The user input was combined with design input from MARS developers and experts from both the GUI and ergonomics fields. A partial list of capabilities follows. - 3D display for neutronics. - Easier method (less user time and effort) to generate 'input' for the 3D displays. - Detailed view

  13. Development of the GUI environments of MIDAS code for convenient input and output processing

    International Nuclear Information System (INIS)

    Kim, K. L.; Kim, D. H.

    2003-01-01

    MIDAS is being developed at KAERI as an integrated severe accident analysis code with easy model modification and addition, achieved by restructuring the data transfer scheme. In this paper, the input file management system, IEDIT, and the graphic simulation system, SATS, are presented as the MIDAS input and output GUI systems. These two systems form the basis of the MIDAS GUI system for input and output processing, and they are expected to be useful tools for severe accident analysis and simulation

  14. Development and operation of K-URT data input system

    International Nuclear Information System (INIS)

    Kim, Yun Jae; Myoung, Noh Hoon; Kim, Jong Hyun; Han, Jae Jun

    2010-05-01

    Activities for the TSPA (Total System Performance Assessment) of the permanent disposal of high-level radioactive waste include the production of input data, safety assessment using the input data, the licensing procedure, and others. These activities are performed in five steps as follows: (1) adequate planning, (2) controlled execution, (3) complete documentation, (4) thorough review, (5) independent oversight. For confidence building, it is very important to record and manage the materials obtained from research work transparently. For the documentation of disposal research work from the planning stage to the data management stage, KAERI developed CYPRUS, named the CYBER R and D Platform for Radwaste Disposal in Underground System, with a QA (Quality Assurance) system. In CYPRUS, the QA system affects other functions such as data management, project management and others. This report analyzes the structure of CYPRUS and proposes to accumulate qualified data, to provide a convenient application and to promote access to and use of CYPRUS for a future-oriented system

  15. Parameter Optimization of MIMO Fuzzy Optimal Model Predictive Control By APSO

    Directory of Open Access Journals (Sweden)

    Adel Taieb

    2017-01-01

    This paper introduces a new development for designing a Multi-Input Multi-Output (MIMO) Fuzzy Optimal Model Predictive Control (FOMPC) using the Adaptive Particle Swarm Optimization (APSO) algorithm. The aim of this proposed control, called FOMPC-APSO, is to develop an efficient algorithm that is able to achieve good performance while guaranteeing a minimal control effort. This is done by determining the optimal weights of the objective function. Our method is formulated as an optimization problem based on the APSO algorithm. The MIMO system to be controlled is modeled by a Takagi-Sugeno (TS) fuzzy system whose parameters are identified using the weighted recursive least squares method. The utility of the proposed controller is demonstrated by applying it to two nonlinear processes, a Continuous Stirred Tank Reactor (CSTR) and a tank system, where the proposed approach provides better performance compared with other methods.
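
    As a minimal sketch of the weight-tuning idea, a basic (non-adaptive) particle swarm can search the MPC objective weights against a user-supplied closed-loop cost. The toy cost function below is a placeholder for running the TS-fuzzy MPC, and the swarm coefficients are fixed rather than adaptive as in APSO.

        import numpy as np

        def pso_minimize(cost, bounds, n_particles=20, iters=50, seed=0):
            """Basic particle swarm with fixed inertia and acceleration coefficients."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            dim = len(bounds)
            x = rng.uniform(lo, hi, size=(n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([cost(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, pbest_f.min()

        def closed_loop_cost(weights):
            """Placeholder: run the fuzzy MPC with these objective weights and
            return, e.g., integrated tracking error plus control effort."""
            q, r = weights
            return (q - 3.0) ** 2 + (r - 0.5) ** 2   # toy surrogate for illustration

        best_w, best_f = pso_minimize(closed_loop_cost, bounds=[(0.1, 10.0), (0.01, 2.0)])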

  16. Design of an X-band accelerating structure using a newly developed structural optimization procedure

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaoxia [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Fang, Wencheng; Gu, Qiang [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Zhao, Zhentang, E-mail: zhaozhentang@sinap.ac.cn [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); University of Chinese Academy of Sciences, Beijing 100049 (China)

    2017-05-11

    An X-band high gradient accelerating structure is a challenging technology for implementation in advanced electron linear accelerator facilities. The present work discusses the design of an X-band accelerating structure for dedicated application to a compact hard X-ray free electron laser facility at the Shanghai Institute of Applied Physics, and numerous design optimizations are conducted with consideration for radio frequency (RF) breakdown, RF efficiency, short-range wakefields, and dipole/quadrupole field modes, to ensure good beam quality and a high accelerating gradient. The designed X-band accelerating structure is a constant gradient structure with a 4π/5 operating mode and input and output dual-feed couplers in a racetrack shape. The design process employs a newly developed effective optimization procedure for optimization of the X-band accelerating structure. In addition, the specific design of couplers providing high beam quality by eliminating dipole field components and reducing quadrupole field components is discussed in detail.

  17. Optimal application of climate data to the development of design wind speeds

    DEFF Research Database (Denmark)

    Kruger, Andries C.; Retief, Johan V.; Goliger, Adam M.

    2014-01-01

    Africa (WASA project) focuses, amongst others, on the development of a Regional Extreme Wind Climate (REWC) for South Africa. Wind farms are planned for areas with relatively strong and sustained winds, with wind turbines classed according to their suitability for different wind conditions. The REWC statistics are used during the construction and design phase to make assumptions about the local strong wind climate that the wind turbines will be exposed to, with the local environment and topography as additional input. The simultaneous development of the REWC and revision of the extreme wind statistics of South Africa created an opportunity to bring together a range of expertise that could contribute to the optimal development of design wind speed information. These include the knowledge of the statistical extraction of extreme wind observations from reanalysis and model data, the quality control

  18. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation
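
    A toy version of the ADMM splitting for a box-constrained quadratic program of the kind solved at each MPC step is sketched below; the paper's implementation additionally exploits the LQCP structure through a Riccati recursion, which this sketch does not. The problem data are random placeholders.

        import numpy as np

        def admm_box_qp(H, g, lb, ub, rho=1.0, iters=200):
            """Minimize 0.5*u'Hu + g'u subject to lb <= u <= ub via ADMM.

            Splitting: min f(u) + I_box(z) subject to u = z, with scaled dual y."""
            n = len(g)
            u, z, y = np.zeros(n), np.zeros(n), np.zeros(n)
            M = H + rho * np.eye(n)   # factorize once in a real implementation
            for _ in range(iters):
                u = np.linalg.solve(M, rho * (z - y) - g)   # u-update (unconstrained QP)
                z = np.clip(u + y, lb, ub)                  # z-update (projection onto the box)
                y = y + u - z                               # dual update
            return z

        # Tiny example with a random positive-definite Hessian.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(3, 3))
        H = A @ A.T + np.eye(3)
        g = rng.normal(size=3)
        u_opt = admm_box_qp(H, g, lb=-np.ones(3), ub=np.ones(3))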

  19. Input-output analysis of high-speed axisymmetric isothermal jet noise

    Science.gov (United States)

    Jeun, Jinah; Nichols, Joseph W.; Jovanović, Mihailo R.

    2016-04-01

    We use input-output analysis to predict and understand the aeroacoustics of high-speed isothermal turbulent jets. We consider axisymmetric linear perturbations about Reynolds-averaged Navier-Stokes solutions of ideally expanded turbulent jets with jet Mach numbers 0.6 parabolized stability equations (PSE), and this mode dominates the response. For subsonic jets, however, the singular values indicate that the contributions of sub-optimal modes to noise generation are nearly equal to that of the optimal mode, explaining why the PSE do not fully capture the far-field sound in this case. Furthermore, high-fidelity large eddy simulation (LES) is used to assess the prevalence of sub-optimal modes in the unsteady data. By projecting LES source term data onto input modes and the LES acoustic far-field onto output modes, we demonstrate that sub-optimal modes of both types are physically relevant.

  20. CBM First-level Event Selector Input Interface Demonstrator

    Science.gov (United States)

    Hutter, Dirk; de Cuveland, Jan; Lindenstruth, Volker

    2017-10-01

    CBM is a heavy-ion experiment at the future FAIR facility in Darmstadt, Germany. Featuring self-triggered front-end electronics and free-streaming read-out, event selection will exclusively be done by the First Level Event Selector (FLES). Designed as an HPC cluster with several hundred nodes its task is an online analysis and selection of the physics data at a total input data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES performs timeslice building, which combines the data from all given input links to self-contained, potentially overlapping processing intervals and distributes them to compute nodes. Partitioning the input data streams into specialized containers allows performing this task very efficiently. The FLES Input Interface defines the linkage between the FEE and the FLES data transport framework. A custom FPGA PCIe board, the FLES Interface Board (FLIB), is used to receive data via optical links and transfer them via DMA to the host’s memory. The current prototype of the FLIB features a Kintex-7 FPGA and provides up to eight 10 GBit/s optical links. A custom FPGA design has been developed for this board. DMA transfers and data structures are optimized for subsequent timeslice building. Index tables generated by the FPGA enable fast random access to the written data containers. In addition the DMA target buffers can directly serve as InfiniBand RDMA source buffers without copying the data. The usage of POSIX shared memory for these buffers allows data access from multiple processes. An accompanying HDL module has been developed to integrate the FLES link into the front-end FPGA designs. It implements the front-end logic interface as well as the link protocol. Prototypes of all Input Interface components have been implemented and integrated into the FLES test framework. This allows the implementation and evaluation of the foreseen CBM read-out chain.

  1. An engineering optimization method with application to STOL-aircraft approach and landing trajectories

    Science.gov (United States)

    Jacob, H. G.

    1972-01-01

    An optimization method has been developed that computes the optimal open-loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as a series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and its application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
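
    A minimal sketch of the idea, assuming a placeholder second-order plant instead of the STOL aircraft model: the open-loop input is expressed as a short sine series, and a derivative-free optimizer adjusts the coefficients using only the simulated output.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        def input_series(coeffs, t, T=10.0):
            """Open-loop input as a truncated sine series; the coefficients are optimized."""
            return sum(c * np.sin((k + 1) * np.pi * t / T) for k, c in enumerate(coeffs))

        def simulate(coeffs):
            """Placeholder plant: a damped second-order system driven by u(t)."""
            def rhs(t, x):
                u = input_series(coeffs, t)
                return [x[1], -0.5 * x[1] - x[0] + u]
            return solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0])

        def cost(coeffs):
            """Output-only cost: terminal error plus a penalty on control energy."""
            sol = simulate(coeffs)
            x_end = sol.y[:, -1]
            t = np.linspace(0.0, 10.0, 200)
            effort = np.trapz(input_series(coeffs, t) ** 2, t)
            return (x_end[0] - 1.0) ** 2 + x_end[1] ** 2 + 0.01 * effort

        res = minimize(cost, x0=np.zeros(4), method="Nelder-Mead")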

  2. Development of a pump-turbine runner based on multiobjective optimization

    International Nuclear Information System (INIS)

    Xuhe, W; Baoshan, Z; Lei, T; Jie, Z; Shuliang, C

    2014-01-01

    As a key component of a reversible pump-turbine unit, the pump-turbine runner rotates in the pump or turbine direction according to the demand of the power grid, so higher efficiencies under both operating modes are of great importance for energy saving. In the present paper, a multiobjective optimization design strategy, which includes a 3D inverse design method, CFD calculations, the response surface method (RSM) and a multiobjective genetic algorithm (MOGA), is introduced to develop a model pump-turbine runner for a middle-high head pumped storage plant. Parameters controlling the blade shape, such as the blade loading and the blade lean angle at the high-pressure side, are chosen as input parameters, while the runner efficiencies under both pump and turbine modes are selected as objective functions. In order to validate the availability of the optimization design system, one runner configuration from the Pareto front is manufactured for experimental research. Test results show that the highest unit efficiency is 91.0% under turbine mode and 90.8% under pump mode for the designed runner, of which the prototype efficiencies are 93.88% and 93.27%, respectively. Viscous CFD calculations for the full passage model are also conducted, which aim at finding out the hydraulic improvement from internal flow analyses
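
    A small sketch of the multiobjective bookkeeping, assuming random placeholder data instead of CFD results: given pump-mode and turbine-mode efficiencies for a set of candidate runners, the non-dominated (Pareto-optimal) candidates can be extracted as follows.

        import numpy as np

        def pareto_mask(objs):
            """Boolean mask of non-dominated rows when all objectives are maximized."""
            n = objs.shape[0]
            mask = np.ones(n, dtype=bool)
            for i in range(n):
                dominated = np.all(objs >= objs[i], axis=1) & np.any(objs > objs[i], axis=1)
                if dominated.any():
                    mask[i] = False
            return mask

        # Placeholder candidates: (pump efficiency, turbine efficiency) from CFD runs.
        rng = np.random.default_rng(0)
        candidates = 0.85 + 0.06 * rng.random((50, 2))
        front = candidates[pareto_mask(candidates)]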

  3. Automation of Geometry Input for Building Code Compliance Check

    DEFF Research Database (Denmark)

    Petrova, Ekaterina Aleksandrova; Johansen, Peter Lind; Jensen, Rasmus Lund

    2017-01-01

    Documentation of compliance with the energy performance regulations at the end of the detailed design phase is mandatory for building owners in Denmark. Therefore, besides multidisciplinary input, the building design process requires various iterative analyses, so that the optimal solutions can.... That has left the industry in constant pursuit of possibilities for integration of the tool within the Building Information Modelling environment so that the potential provided by the latter can be harvested and the process can be optimized. This paper presents a solution for automated data extraction from building geometry created in Autodesk Revit and its translation to input for compliance check analysis.
  4. The Use of Input-Output Control System Analysis for Sustainable Development of Multivariable Environmental Systems

    Science.gov (United States)

    Koliopoulos, T. C.; Koliopoulou, G.

    2007-10-01

    We present an input-output solution for simulating the associated behavior and optimized physical needs of an environmental system. The simulations and numerical analysis determined the accurate boundary loads and areas that were required to interact for the proper physical operation of a complicated environmental system. A case study was conducted to simulate the optimum balance of an environmental system based on an artificial-intelligence multi-interacting input-output numerical scheme. The numerical results were focused on probable further environmental management techniques, with the objective of minimizing any risks and the associated environmental impact to protect the quality of public health and the environment. Our conclusions allowed us to minimize the associated risks, focusing on probable emergency cases to protect the surrounding anthropogenic or natural environment. Therefore, the lining magnitude could be determined for any useful associated technical works to support the environmental system under examination, taking into account its particular boundary necessities and constraints.

  5. Development of a Math Input Interface with Flick Operation for Mobile Devices

    Science.gov (United States)

    Nakamura, Yasuyuki; Nakahara, Takahiro

    2016-01-01

    Developing online test environments for e-learning for mobile devices will be useful to increase drill practice opportunities. In order to provide a drill practice environment for calculus using an online math test system, such as STACK, we develop a flickable math input interface that can be easily used on mobile devices. The number of taps…

  6. Aerospace engineering design by systematic decomposition and multilevel optimization

    Science.gov (United States)

    Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.

    1984-01-01

    A method for systematic analysis and optimization of large engineering systems, by decomposition of a large task into a set of smaller subtasks that are solved concurrently, is described. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by an analysis of its sensitivity to the inputs received from other subtasks, to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.

  7. Robust Optimal Adaptive Trajectory Tracking Control of Quadrotor Helicopter

    Directory of Open Access Journals (Sweden)

    M. Navabi

    This paper focuses on a robust optimal adaptive control strategy to deal with the tracking problem of a quadrotor unmanned aerial vehicle (UAV) in the presence of parametric uncertainties, actuator amplitude constraints, and unknown time-varying external disturbances. First, a Lyapunov-based indirect adaptive controller optimized by particle swarm optimization (PSO) is developed for the multi-input multi-output (MIMO) nonlinear quadrotor to prevent input constraint violation, and then the disturbance observer-based control (DOBC) technique is aggregated with the control system to attenuate the effects of disturbances generated by an exogenous system. The performance of the synthesized control method is evaluated by a new performance index function in the time domain, and the stability analysis is carried out using Lyapunov theory. Finally, illustrative numerical simulations are conducted to demonstrate the effectiveness of the presented approach in altitude and attitude tracking under several conditions, including large time-varying uncertainty, exogenous disturbance, and control input constraints.

  8. Optimizing Input/Output Using Adaptive File System Policies

    Science.gov (United States)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.

  9. Optimizing production with energy and GHG emission constraints in Greece: An input-output analysis

    International Nuclear Information System (INIS)

    Hristu-Varsakelis, D.; Karagianni, S.; Pempetzoglou, M.; Sfetsos, A.

    2010-01-01

    Under its Kyoto and EU obligations, Greece has committed to a greenhouse gas (GHG) emissions increase of at most 25% compared to 1990 levels, to be achieved during the period 2008-2012. Although this restriction was initially regarded as being realistic, information derived from GHG emissions inventories shows that an increase of approximately 28% has already taken place between 1990 and 2005, highlighting the need for immediate action. This paper explores the reallocation of production in Greece, on a sector-by-sector basis, in order to meet overall demand constraints and GHG emissions targets. We pose a constrained optimization problem, taking into account the Greek environmental input-output matrix for 2005, the amount of utilized energy and pollution reduction options. We examine two scenarios, limiting fluctuations in sectoral production to at most 10% and 15%, respectively, compared to baseline (2005) values. Our results indicate that (i) GHG emissions can be reduced significantly with relatively limited effects on GVP growth rates, and that (ii) greater cutbacks in GHG emissions can be achieved as more flexible production scenarios are allowed.
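
    A toy version of the sector-reallocation problem, with made-up coefficients rather than the 2005 Greek environmental input-output data: maximize total output subject to an aggregate GHG cap and a ±10% bound on each sector's production.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical data for 4 sectors (not the Greek 2005 tables).
        baseline = np.array([100.0, 80.0, 60.0, 40.0])   # baseline sectoral output
        emission = np.array([0.9, 0.4, 1.5, 0.2])        # GHG intensity per unit output
        ghg_cap = 0.95 * emission @ baseline             # e.g. 5% below baseline emissions

        # Maximize total output  <=>  minimize -1'x,  subject to  e'x <= cap,
        # with each sector allowed to move at most 10% from its baseline.
        res = linprog(c=-np.ones(4),
                      A_ub=emission[np.newaxis, :], b_ub=[ghg_cap],
                      bounds=list(zip(0.9 * baseline, 1.1 * baseline)),
                      method="highs")
        reallocated = res.x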

  10. Distribution Development for STORM Ingestion Input Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Fulton, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report described the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Meanwhile, Average Crop Yield changed from a constant value of 3.783 kg edible/m^2 to a normal distribution with a mean of 3.23 kg edible/m^2 and a standard deviation of 0.442 kg edible/m^2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean value of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq_crop/kg)/(Bq_soil/kg) to a lognormal distribution with a geometric mean value of 3.38e-4 (Bq_crop/kg)/(Bq_soil/kg) and a standard deviation value of 3.33.
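
    The constant-to-distribution conversion summarized above can be reproduced in a few lines of numpy; the parameter values below are taken from the report text, while the interpretation of 3.33 as a multiplicative (geometric) spread of the lognormal is an assumption made for this sketch.

      # Sample the four ingestion inputs from the distributions quoted above
      # (the lognormal treatment of the uptake factor spread is an assumption).
      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000

      consumption_rate = rng.normal(102.96, 2.65, n)          # kg/yr
      avg_crop_yield   = rng.normal(3.23, 0.442, n)           # kg edible/m^2
      cropland_ratio   = rng.normal(0.0312, 0.00292, n)       # fraction of land use
      uptake_factor    = rng.lognormal(mean=np.log(3.38e-4),  # geometric mean
                                       sigma=np.log(3.33), size=n)

      print(consumption_rate.mean(), uptake_factor.mean())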

  11. Effects of the input polarization on JET polarimeter horizontal channels

    International Nuclear Information System (INIS)

    Gaudio, P.; Gelfusa, M.; Murari, A.; Orsitto, F.; Boboc, A.

    2013-01-01

    In the past, the analysis of JET polarimetry measurements was carried out only for the vertical channels, using a polarimetry propagation code based on the Stokes vector formalism [1,2]. A new propagation code has therefore been developed for the horizontal chords to simulate and interpret the measurements of the Faraday rotation and Cotton–Mouton phase shift in JET. The code has been used to carry out a theoretical study of the effect of the input polarization on the eventual quality of the measurements. The results allow choosing the best polarization to optimize the polarimetric measurements for the various experiments

  12. Multi-Objective Optimization for Analysis of Changing Trade-Offs in the Nepalese Water-Energy-Food Nexus with Hydropower Development

    DEFF Research Database (Denmark)

    Dhaubanjar, Sanita; Davidsen, Claus; Bauer-Gottwein, Peter

    2017-01-01

    Well-established water and power system models are coupled to develop a decision support tool combining multiple nexus objectives in a linear objective function. To demonstrate our framework, we compare eight Nepalese power development scenarios based on five nexus objectives: minimization of power deficit, maintenance of water availability for irrigation to support food self-sufficiency, reduction in flood risk, maintenance of environmental flows, and maximization of power export. The deterministic multi-objective optimization model is spatially resolved to enable realistic representation of the nexus linkages and accounts for power transmission constraints using an optimal power flow approach. Basin inflows, hydropower plant specifications, reservoir characteristics, reservoir rules, irrigation water demand, environmental flow requirements, power demand, and transmission line properties are provided as model inputs.

  13. Development of an Input Suite for an Orthotropic Composite Material Model

    Science.gov (United States)

    Hoffarth, Canio; Shyamsunder, Loukham; Khaled, Bilal; Rajan, Subramaniam; Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Blankenhorn, Gunther

    2017-01-01

    An orthotropic three-dimensional material model suitable for use in modeling impact tests has been developed that has three major components: elastic and inelastic deformations, damage, and failure. The material model has been implemented as MAT213 into a special version of LS-DYNA and uses tabulated data obtained from experiments. The prominent features of the constitutive model are illustrated using a widely used aerospace composite, the T800S/3900-2B [P2352W-19] BMS8-276 Rev-H unitape fiber/resin unidirectional composite. The input for the deformation model consists of experimental data from 12 distinct experiments at a known temperature and strain rate: tension and compression along all three principal directions, shear in all three principal planes, and off-axis tension or compression tests in all three principal planes, along with other material constants. There are additional inputs associated with the damage and failure models. The steps in using this model are illustrated: composite characterization tests, verification tests, and a validation test. The results show that the developed and implemented model is stable and yields acceptably accurate results.

  14. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
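
    A minimal sketch of the calibration idea is shown below: a genetic algorithm searches the uncertain input parameters to minimize a weighted discrepancy between simulated and experimental SRQs. The 'simulate' function, parameter bounds, weights and GA settings are toy placeholders standing in for RELAP5 runs, not the authors' setup.

      # Toy genetic-algorithm calibration: find input parameters that minimize a
      # weighted discrepancy between "simulated" and experimental SRQs.
      import numpy as np

      rng = np.random.default_rng(0)
      exp_srq = np.array([4.2, 18.0])            # measured SRQs (placeholders)
      weights = np.array([1.0, 0.1])             # per-SRQ normalization/weighting
      lo, hi = np.array([0.0, 0.0]), np.array([5.0, 30.0])   # parameter bounds

      def simulate(p):                           # stand-in for a system code run
          return np.array([p[0] * 1.1, p[1] * 0.9 + 2.0])

      def fitness(p):                            # smaller is better
          return np.sum(weights * (simulate(p) - exp_srq) ** 2)

      pop = rng.uniform(lo, hi, size=(40, 2))
      for _ in range(100):
          scores = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(scores)[:20]]               # selection
          mates = parents[rng.integers(0, 20, size=(20, 2))]
          children = mates.mean(axis=1)                        # arithmetic crossover
          children += rng.normal(0.0, 0.05 * (hi - lo), children.shape)  # mutation
          pop = np.clip(np.vstack([parents, children]), lo, hi)

      best = pop[np.argmin([fitness(p) for p in pop])]
      print("calibrated parameters:", best, "fitness:", fitness(best))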

  15. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  16. Alternative input medium development for wheelchair user with severe spinal cord injury

    Science.gov (United States)

    Ihsan, Izzat Aqmar; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza

    2017-09-01

    Patients with quadriplegia (tetraplegia) have restricted movement of all four limbs as well as the torso, caused by severe spinal cord injury. Undoubtedly, these patients face difficulties when operating their powered electric wheelchair, since they are unable to control the wheelchair by means of a standard joystick. Due to the total loss of both sensory and motor function of the four limbs and torso, an alternative input medium for the wheelchair will be developed to assist the user in operating the wheelchair. In this framework, the direction of the wheelchair movement is determined by the user's conscious intent through a brain control interface (BCI) based on the electroencephalogram (EEG) signal. A laser range finder (LRF) is used to perceive environment information for determining a safe distance to the wheelchair's surroundings. A local path planning algorithm will be developed to provide a navigation planner that works along with the user's input to prevent collisions during control operation.

  17. Dynamic Heat Supply Prediction Using Support Vector Regression Optimized by Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Meiping Wang

    2016-01-01

    Full Text Available We developed an effective intelligent model to predict the dynamic heat supply of a heat source. A hybrid forecasting method was proposed based on a support vector regression (SVR) model optimized by the particle swarm optimization (PSO) algorithm. Due to the interaction of meteorological conditions and the heating parameters of the heating system, it is extremely difficult to forecast dynamic heat supply. Firstly, the correlations among heat supply and related influencing factors in the heating system were analyzed through the correlation analysis of statistical theory. Then, the SVR model was employed to forecast dynamic heat supply. In the model, the input variables were selected based on the correlation analysis, and three crucial parameters, including the penalty factor, the gamma of the RBF kernel, and the insensitive loss function, were optimized by the PSO algorithm. The optimized SVR model was compared with the basic SVR, the genetic algorithm-optimized SVR (GA-SVR), and an artificial neural network (ANN) through six groups of experiment data from two heat sources. The results of the correlation coefficient analysis revealed the relationship between the influencing factors and the forecasted heat supply and determined the input variables. The performance of the PSO-SVR model is superior to those of the other three models. The PSO-SVR method is statistically robust and can be applied to practical heating systems.
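
    A minimal sketch of the PSO-SVR idea is given below: a plain particle swarm searches the SVR penalty factor, RBF gamma and insensitive-loss epsilon by cross-validated error. It assumes scikit-learn is available, uses synthetic stand-in data rather than the heating-system records, and generic PSO settings rather than those of the paper.

      # Minimal PSO search over SVR hyperparameters (C, gamma, epsilon),
      # scored by cross-validated MSE on synthetic stand-in data.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, size=(200, 3))                  # e.g. outdoor temp, flow, hour
      y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 200)

      def cost(theta):                                       # theta = log10([C, gamma, eps])
          C, gamma, eps = 10.0 ** theta
          model = SVR(C=C, gamma=gamma, epsilon=eps)
          return -cross_val_score(model, X, y, cv=3,
                                  scoring="neg_mean_squared_error").mean()

      lo, hi = np.array([-1, -3, -3]), np.array([3, 1, 0])
      pos = rng.uniform(lo, hi, size=(15, 3))
      vel = np.zeros_like(pos)
      pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[pcost.argmin()]

      for _ in range(30):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          c = np.array([cost(p) for p in pos])
          improved = c < pcost
          pbest[improved], pcost[improved] = pos[improved], c[improved]
          gbest = pbest[pcost.argmin()]

      print("best log10(C, gamma, epsilon):", gbest)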

  18. Optimization of lift gas allocation in a gas lifted oil field as non-linear optimization problem

    Directory of Open Access Journals (Sweden)

    Roshan Sharma

    2012-01-01

    Full Text Available Proper allocation and distribution of lift gas is necessary for maximizing total oil production from a field with gas-lifted oil wells. When the supply of the lift gas is limited, the total available gas should be optimally distributed among the oil wells of the field such that the total production of oil from the field is maximized. This paper describes a non-linear optimization problem with constraints associated with the optimal distribution of the lift gas. A non-linear objective function is developed using a simple dynamic model of the oil field, where the decision variables represent the lift gas flow rate set points of each oil well of the field. The lift gas optimization problem is solved using the 'fmincon' solver found in MATLAB. As an alternative and for verification, a hill-climbing method is utilized for solving the optimization problem. Using both of these methods, it has been shown that after optimization the total oil production is increased by about 4%. For multiple oil wells sharing lift gas from a common source, a cascade control strategy along with a nonlinear steady state optimizer behaves as a self-optimizing control structure when the total supply of lift gas is assumed to be the only input disturbance present in the process. Simulation results show that repeated optimization performed after the first-time optimization under the presence of the input disturbance has no effect on the total oil production.
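
    The allocation problem can be sketched with scipy.optimize.minimize (SLSQP) acting as a stand-in for MATLAB's fmincon: maximize the summed well response subject to a total lift-gas budget. The concave well-response curves and the gas budget below are hypothetical, not the field model of the paper.

      # Allocate a limited lift-gas supply across wells to maximize total oil rate.
      # The concave well response curves are illustrative, not field data.
      import numpy as np
      from scipy.optimize import minimize

      a = np.array([10.0, 8.0, 12.0, 9.0])      # per-well response scale (hypothetical)
      b = np.array([0.8, 0.5, 1.0, 0.6])        # per-well curvature (hypothetical)
      q_total = 6.0                             # total lift gas available

      def neg_oil(q):                           # total oil production (negated)
          return -np.sum(a * (1.0 - np.exp(-b * q)))

      res = minimize(
          neg_oil,
          x0=np.full(4, q_total / 4),
          method="SLSQP",
          bounds=[(0.0, q_total)] * 4,
          constraints=[{"type": "ineq", "fun": lambda q: q_total - q.sum()}],
      )
      print("lift gas per well:", res.x, "total oil:", -res.fun)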

  19. User input verification and test driven development in the NJOY21 nuclear data processing code

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McCartney, Austin Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-21

    Before physically meaningful data can be used in nuclear simulation codes, the data must be interpreted and manipulated by a nuclear data processing code so as to extract the relevant quantities (e.g. cross sections and angular distributions). Perhaps the most popular and widely trusted of these processing codes is NJOY, which has been developed and improved over the course of 10 major releases since its creation at Los Alamos National Laboratory in the mid-1970s. The current phase of NJOY development is the creation of NJOY21, which will be a vast improvement over its predecessor, NJOY2016. Designed to be fast, intuitive, accessible, and capable of handling both established and modern formats of nuclear data, NJOY21 will address many issues that NJOY users face, while remaining functional for those who prefer the existing format. Although early in its development, NJOY21 already provides input validation to check user input. By providing rapid and helpful responses to users while they write input files, NJOY21 will prove to be more intuitive and easier to use than any of its predecessors. Furthermore, during its development, NJOY21 is subject to regular testing, such that its test coverage must strictly increase with the addition of any production code. This thorough testing will allow developers and NJOY users to establish confidence in NJOY21 as it gains functionality. This document serves as a discussion of the current state of input checking and testing practices in NJOY21.

  20. Adaptive RD Optimized Hybrid Sound Coding

    NARCIS (Netherlands)

    Schijndel, N.H. van; Bensa, J.; Christensen, M.G.; Colomes, C.; Edler, B.; Heusdens, R.; Jensen, J.; Jensen, S.H.; Kleijn, W.B.; Kot, V.; Kövesi, B.; Lindblom, J.; Massaloux, D.; Niamut, O.A.; Nordén, F.; Plasberg, J.H.; Vafin, R.; Virette, D.; Wübbolt, O.

    2008-01-01

    Traditionally, sound codecs have been developed with a particular application in mind, their performance being optimized for specific types of input signals, such as speech or audio (music), and application constraints, such as low bit rate, high quality, or low delay. There is, however, an

  1. Optimization of light quality from color mixing light-emitting diode systems for general lighting

    DEFF Research Database (Denmark)

    Thorseth, Anders

    2012-01-01

    To address the problem of spectral light quality from color mixing light-emitting diode systems, a method for optimizing the spectral output of a multicolor LED system with regard to standardized quality parameters has been developed. The composite spectral power distributions from the LEDs are simulated using radiometrically measured single-LED spectra. The method uses electrical input powers as input parameters and optimizes the resulting spectral power distribution with regard to color rendering index, correlated color temperature and chromaticity distance. The results indicate Pareto-optimal trade-offs between these quality parameters.

  2. Automation of Geometry Input for Building Code Compliance Check

    DEFF Research Database (Denmark)

    Petrova, Ekaterina Aleksandrova; Johansen, Peter Lind; Jensen, Rasmus Lund

    2017-01-01

    Documentation of compliance with the energy performance regulations at the end of the detailed design phase is mandatory for building owners in Denmark. Therefore, besides multidisciplinary input, the building design process requires various iterative analyses, so that the optimal solutions can be identified amongst multiple alternatives. However, meeting performance criteria is often associated with manual data inputs and retroactive modifications of the design. Due to poor interoperability between the authoring tools and the compliance check program, the processes are redundant and inefficient. This work therefore addresses the automated extraction of data from building geometry created in Autodesk Revit and its translation to input for the compliance check analysis.

  3. Performance optimization in electro- discharge machining using a suitable multiresponse optimization technique

    Directory of Open Access Journals (Sweden)

    I. Nayak

    2017-06-01

    Full Text Available In the present research work, four different multi-response optimization techniques, viz. the multiple response signal-to-noise (MRSN) ratio, the weighted signal-to-noise (WSN) ratio, grey relational analysis (GRA) and the VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje, in Serbian) method, have been used to optimize the electro-discharge machining (EDM) performance characteristics, namely material removal rate (MRR), tool wear rate (TWR) and surface roughness (SR), simultaneously. Experiments have been planned on a D2 steel specimen based on an L9 orthogonal array. Experimental results are analyzed using the standard procedure. The optimum level combinations of input process parameters such as voltage, current, pulse-on time and pulse-off time, and the percentage contributions of each process parameter using the ANOVA technique, have been determined. Different correlations have been developed between the various input process parameters and the output performance characteristics. Finally, the optimum performances of these four methods are compared and the results show that the WSN ratio method is the best multi-response optimization technique for this process. From the analysis, it is also found that the current has the maximum effect on the overall performance of the EDM operation as compared to the other process parameters.
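
    Of the four techniques compared above, grey relational analysis is easy to illustrate compactly: normalize each response according to whether it is to be maximized (MRR) or minimized (TWR, SR), compute grey relational coefficients, and average them into a grade per run. The L9-style response matrix below is a placeholder, not the paper's measured data.

      # Grey relational analysis over a (placeholder) L9-style response matrix:
      # columns are MRR (larger-is-better), TWR and SR (smaller-is-better).
      import numpy as np

      responses = np.array([
          [12.1, 0.42, 3.1], [14.3, 0.55, 2.8], [11.0, 0.39, 3.5],
          [15.2, 0.60, 2.6], [13.4, 0.48, 3.0], [12.8, 0.44, 3.2],
          [16.0, 0.62, 2.5], [13.9, 0.50, 2.9], [12.5, 0.46, 3.3],
      ])

      larger_is_better = np.array([True, False, False])
      norm = np.where(
          larger_is_better,
          (responses - responses.min(0)) / (responses.max(0) - responses.min(0)),
          (responses.max(0) - responses) / (responses.max(0) - responses.min(0)),
      )

      delta = 1.0 - norm                         # deviation from the ideal sequence
      zeta = 0.5                                 # distinguishing coefficient
      grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
      grade = grc.mean(axis=1)                   # grey relational grade per run
      print("best experimental run:", grade.argmax() + 1)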

  4. Optimizing Human Input in Social Network Analysis

    Science.gov (United States)

    2018-01-23

  5. Study and development of a generalised input-output system for data base management systems

    International Nuclear Information System (INIS)

    Zidi, Noureddine

    1975-01-01

    This thesis reports a study which aimed at designing and developing software for the management and execution of all input-output actions of database management systems. This software is also an interface between database management systems and the various operating systems. After a recall of the general characteristics of database management systems, the author presents the previously developed GRISBI system (rational management of information stored in an integrated database), and describes the difficulties faced in adapting this system to the new access method (VSAM, virtual sequential access method). This led to the search for a more general solution, the development of which is presented in the second part of this thesis: environment of the input-output generalised system, architecture, internal specifications. The last part presents flowcharts and statements of the various routines

  6. OPTIMIZATION OF THE RUSSIAN MACROECONOMIC POLICY FOR 2016-2020

    Directory of Open Access Journals (Sweden)

    Gilmundinov V. M.

    2016-12-01

    Full Text Available This paper is concerned with the methodological issues of economic policy elaboration and the optimization of economic policy instruments' parameters. The relevance of this research stems from the growing complexity of social and economic systems, the important role of the state in their functioning, and the multiple targets of economic policy pursued with a limited number of instruments. Considering the wide variety of internal and external restrictions on the social and economic development of modern Russia, the approach has a wide range of applications. Extension of the dynamic econometric general equilibrium input-output model of the Russian economy through the development of a sub-model of economic policy optimization is the key purpose of this study. The sub-model of economic policy optimization allows estimating the impact of economic policy measures on target indicators as well as defining optimal values of their parameters. For this purpose, we extend Robert Mundell's approach by considering dynamic optimization and a wider range of economic policy targets and measures. Use of the general equilibrium input-output model makes it possible to consider the impact of economic policy on different aggregate markets and sectors. Application of the suggested approach allows us to develop a multi-variant forecast for the Russian economy for 2016-2020, define optimal values of monetary policy parameters, and compare the considered variants by the resulting social losses. The obtained results could be further used in theoretical as well as applied research concerned with economic policy elaboration and the forecasting of social and economic development.

  7. Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints.

    Science.gov (United States)

    Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai

    2015-07-01

    The design of a stabilizing controller for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to accurately identify the uncertainties, motivates the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.

  8. Stochastic weather inputs for improved urban water demand forecasting: application of nonlinear input variable selection and machine learning methods

    Science.gov (United States)

    Quilty, J.; Adamowski, J. F.

    2015-12-01

    Urban water supply systems are often stressed during seasonal outdoor water use as water demands related to the climate are variable in nature making it difficult to optimize the operation of the water supply system. Urban water demand forecasts (UWD) failing to include meteorological conditions as inputs to the forecast model may produce poor forecasts as they cannot account for the increase/decrease in demand related to meteorological conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions and accounts for relevancy, conditional relevancy, and redundancy from a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
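
    The selection step can be approximated with scikit-learn's mutual_info_regression, a simpler, unconditional stand-in for the conditional-mutual-information criterion described above; the candidate inputs and the synthetic demand series are illustrative only.

      # Rank candidate (stochastic) weather inputs for a demand forecast by
      # mutual information with the target; a simpler stand-in for the
      # conditional-mutual-information selection described above.
      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(7)
      n = 500
      temp = rng.normal(20, 5, n)
      rain = rng.gamma(2.0, 1.5, n)
      wind = rng.normal(10, 3, n)
      lagged_demand = rng.normal(100, 10, n)
      demand = 0.6 * lagged_demand + 2.5 * temp - 3.0 * rain + rng.normal(0, 2, n)

      X = np.column_stack([temp, rain, wind, lagged_demand])
      names = ["temp", "rain", "wind", "lagged_demand"]
      mi = mutual_info_regression(X, demand, random_state=0)
      for name, score in sorted(zip(names, mi), key=lambda t: -t[1]):
          print(f"{name:>14s}  MI = {score:.3f}")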

  9. Genetic optimization of steam multi-turbines system

    International Nuclear Information System (INIS)

    Olszewski, Pawel

    2014-01-01

    Optimization analysis of partially loaded cogeneration, multiple-stages steam turbines system was numerically investigated by using own-developed code (C++). The system can be controlled by following variables: fresh steam temperature, pressure, and flow rates through all stages in steam turbines. Five various strategies, four thermodynamics and one economical, which quantify system operation, were defined and discussed as an optimization functions. Mathematical model of steam turbines calculates steam properties according to the formulation proposed by the International Association for the Properties of Water and Steam. Genetic algorithm GENOCOP was implemented as a solving engine for non–linear problem with handling constrains. Using formulated methodology, example solution for partially loaded system, composed of five steam turbines (30 input variables) with different characteristics, was obtained for five strategies. The genetic algorithm found multiple solutions (various input parameters sets) giving similar overall results. In real application it allows for appropriate scheduling of machine operation that would affect equable time load of every system compounds. Also based on these results three strategies where chosen as the most complex: the first thermodynamic law energy and exergy efficiency maximization and total equivalent energy minimization. These strategies can be successfully used in optimization of real cogeneration applications. - Highlights: • Genetic optimization model for a set of five various steam turbines was presented. • Four various thermodynamic optimization strategies were proposed and discussed. • Operational parameters (steam pressure, temperature, flow) influence was examined. • Genetic algorithm generated optimal solutions giving the best estimators values. • It has been found that similar energy effect can be obtained for various inputs

  10. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.
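
    The kind of translation step PIDG automates can be illustrated with pandas, reading generalized CSV inputs and writing an Excel workbook for import; the file names, column layout and sheet names below are hypothetical, not PIDG's actual schema.

      # Illustrative CSV-to-Excel translation of generator records for import
      # into a production-cost model; file names and columns are hypothetical.
      import pandas as pd

      generators = pd.read_csv("generators.csv")   # e.g. name, fuel, max_capacity, heat_rate
      fuels = pd.read_csv("fuels.csv")             # e.g. fuel, price

      merged = generators.merge(fuels, on="fuel", how="left")
      with pd.ExcelWriter("plexos_import.xlsx") as writer:
          merged.to_excel(writer, sheet_name="Generators", index=False)
          fuels.to_excel(writer, sheet_name="Fuels", index=False)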

  11. On optimal development and becoming an optimiser

    NARCIS (Netherlands)

    de Ruyter, D.J.

    2012-01-01

    The article aims to provide a justification for the claim that optimal development and becoming an optimiser are educational ideals that parents should pursue in raising their children. Optimal development is conceptualised as enabling children to grow into flourishing persons, that is persons who

  12. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the code MELCOR 1.8.5 for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Comparison and check of some primary system data were made against an O3 input file to the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor in its current state with an operating power of 3300 MWth. One aim of this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of the new Westinghouse Atom SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratory under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to STCP (Source Term Code Package) and thus has a long evolutionary history. The input described here is adapted to version 1.8.5, the latest available when the work began. It was released in the year 2000, but a new version 1.8.6 was distributed recently. Conversion to the new version is recommended. (During the writing of this report still another code version, MELCOR 2.0, has been announced for release shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package developed by Oak Ridge National Laboratory especially for BWRs. This is in addition to the ordinary MELCOR COR package. Since problems arose running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1.8.6 version.

  13. MDS MIC Catalog Inputs

    Science.gov (United States)

    Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette

    2006-01-01

    This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Item to other working groups and FSs to support the MITWG Process for developing MIC-DDs.

  14. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    Science.gov (United States)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures, and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis. One is for three adjustable inputs and one is for four. Also, two optimization searches for calculated piston motion are presented for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  15. Study and development of an input coupler for the future TESLA collider

    International Nuclear Information System (INIS)

    Dupery, C.

    1996-01-01

    The TESLA (TeV Superconducting Linear Accelerator) will operate with a high-frequency cavity resonator input coupler. Several technical constraints (thermal, mechanical, electrical, vacuum, and multipactor discharge phenomena) govern the development of this coupler. In order to address these problems, studies have been performed at the French Atomic Energy Commission (CEA) and are presented in this paper

  16. A controls engineering approach for analyzing airplane input-output characteristics

    Science.gov (United States)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
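
    The optimal steady-state response idea can be sketched in a few lines: form the steady-state gain matrix G = D - C A^{-1} B of the scaled linear model and solve a least-squares problem for the blend of control inputs that best achieves a target steady-state output. The matrices below are a toy stable system, not an airplane model.

      # Steady-state gain G = D - C A^{-1} B for dx/dt = Ax + Bu, y = Cx + Du,
      # then a least-squares blend of inputs u that best meets a target output.
      import numpy as np

      A = np.array([[-1.0, 0.2], [0.0, -0.5]])   # toy stable dynamics (not an airplane)
      B = np.array([[1.0, 0.0], [0.5, 1.0]])
      C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      D = np.zeros((3, 2))

      G = D - C @ np.linalg.solve(A, B)           # steady-state output per unit input
      y_target = np.array([1.0, 0.5, 1.2])        # desired steady-state outputs
      u, *_ = np.linalg.lstsq(G, y_target, rcond=None)
      print("input blend:", u, "achieved outputs:", G @ u)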

  17. Development and optimization of a diode laser for photodynamic therapy.

    Science.gov (United States)

    Lim, Hyun Soo

    2011-01-01

    This study demonstrated the development of a laser system for cancer treatment with photodynamic therapy (PDT) based on a 635 nm laser diode. In order to optimize efficacy in PDT, the ideal laser system should deliver a homogeneous nondivergent light energy with a variable spot size and specific wavelength at a stable output power. We developed a digital laser beam controller using the constant current method to protect the laser diode resonator from the current spikes and other fluctuations, and electrical faults. To improve the PDT effects, the laser system should deliver stable laser energy in continuous wave (CW), burst mode and super burst mode, with variable irradiation times depending on the tumor type and condition. The experimental results showed the diode laser system described herein was eminently suitable for PDT. The laser beam was homogeneous without diverging and the output power increased stably and in a linear manner from 10 mW to 1500 mW according to the increasing input current. Variation between the set and delivered output was less than 7%. The diode laser system developed by the author for use in PDT was compact, user-friendly, and delivered a stable and easily adjustable output power at a specific wavelength and user-set emission modes.

  18. Multiobjective optimization of low impact development stormwater controls

    Science.gov (United States)

    Eckart, Kyle; McPhee, Zach; Bolisetti, Tirupati

    2018-07-01

    Green infrastructure such as Low Impact Development (LID) controls are being employed to manage the urban stormwater and restore the predevelopment hydrological conditions besides improving the stormwater runoff water quality. Since runoff generation and infiltration processes are nonlinear, there is a need for identifying optimal combination of LID controls. A coupled optimization-simulation model was developed by linking the U.S. EPA Stormwater Management Model (SWMM) to the Borg Multiobjective Evolutionary Algorithm (Borg MOEA). The coupled model is capable of performing multiobjective optimization which uses SWMM simulations as a tool to evaluate potential solutions to the optimization problem. The optimization-simulation tool was used to evaluate low impact development (LID) stormwater controls. A SWMM model was developed, calibrated, and validated for a sewershed in Windsor, Ontario and LID stormwater controls were tested for three different return periods. LID implementation strategies were optimized using the optimization-simulation model for five different implementation scenarios for each of the three storm events with the objectives of minimizing peak flow in the stormsewers, reducing total runoff, and minimizing cost. For the sewershed in Windsor, Ontario, the peak run off and total volume of the runoff were found to reduce by 13% and 29%, respectively.

  19. Semidefinite Relaxation-Based Optimization of Multiple-Input Wireless Power Transfer Systems

    Science.gov (United States)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-11-01

    An optimization procedure for multi-transmitter (MISO) wireless power transfer (WPT) systems based on tight semidefinite relaxation (SDR) is presented. This method ensures physical realizability of MISO WPT systems designed via convex optimization -- a robust, semi-analytical and intuitive route to optimizing such systems. To that end, the nonconvex constraints requiring that power is fed into rather than drawn from the system via all transmitter ports are incorporated in a convex semidefinite relaxation, which is efficiently and reliably solvable by dedicated algorithms. A test of the solution then confirms that this modified problem is equivalent (tight relaxation) to the original (nonconvex) one and that the true global optimum has been found. This is a clear advantage over global optimization methods (e.g. genetic algorithms), where convergence to the true global optimum cannot be ensured or tested. Discussions of numerical results yielded by both the closed-form expressions and the refined technique illustrate the importance and practicability of the new method. It is shown that this technique offers a rigorous optimization framework for a broad range of current and emerging WPT applications.
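
    A small real-valued sketch of the semidefinite relaxation idea (assuming cvxpy is installed): lift the transmit vector x into X = x x^T, solve the relaxed SDP, and check tightness by inspecting the eigenvalues of X. The quadratic forms below are toy surrogates for the delivered-power and input-power expressions, not a physical WPT model.

      # Semidefinite relaxation sketch: maximize delivered power x^T B x subject
      # to an input-power budget x^T A x <= P, lifted to X = x x^T with X >= 0.
      import numpy as np
      import cvxpy as cp

      rng = np.random.default_rng(3)
      n, P = 3, 1.0
      M = rng.normal(size=(n, n))
      A = M @ M.T + n * np.eye(n)            # toy positive-definite input-power form
      b = rng.normal(size=n)
      B = np.outer(b, b)                     # toy rank-1 delivered-power form

      X = cp.Variable((n, n), PSD=True)      # relaxation of the rank-1 matrix x x^T
      prob = cp.Problem(cp.Maximize(cp.trace(B @ X)), [cp.trace(A @ X) <= P])
      prob.solve()

      eigvals, eigvecs = np.linalg.eigh(X.value)
      x_opt = np.sqrt(max(eigvals[-1], 0.0)) * eigvecs[:, -1]   # rank-1 recovery
      print("objective:", prob.value, "eigenvalues of X:", np.round(eigvals, 6))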

  20. Development of an Input Model to MELCOR 1.8.5 for the Ringhals 3 PWR

    International Nuclear Information System (INIS)

    Nilsson, Lars

    2004-12-01

    An input file to the severe accident code MELCOR 1.8.5 has been developed for the Swedish pressurized water reactor Ringhals 3. The aim was to produce a file that can be used for calculations of various postulated severe accident scenarios, although the first application is specifically on cases involving large hydrogen production. The input file is rather detailed, with individual modelling of all three cooling loops. The report describes the basis for the Ringhals 3 model and the input preparation step by step, and is illustrated by nodalization schemes of the different plant systems. The present version of the report is restricted to the fundamental MELCOR input preparation, and therefore most of the figures of Ringhals 3 measurements and operating parameters are excluded here. These are given in another, complete version of the report, for limited distribution, which includes tables for pertinent data of all components. That version contains appendices with a complete listing of the input files as well as tables of data compiled from a RELAP5 file that was a major basis for the MELCOR input for the cooling loops. The input was tested in steady-state calculations in order to simulate the initial conditions at current nominal operating conditions in Ringhals 3 for 2775 MW thermal power. The results of the steady-state calculations are presented in the report. Calculations with the MELCOR model will then be carried out for certain accident sequences for comparison with results from earlier MAAP4 calculations. That work will be reported separately

  1. Reexamination of optimal quantum state estimation of pure states

    International Nuclear Information System (INIS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-01-01

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition on POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M (>N) copies is universal if it is used for N copies as input.
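
    For reference, the closed-form result discussed above (restated from the optimal-estimation literature, not re-derived here) is

      \bar{F}_{\mathrm{opt}} = \frac{N+1}{N+d},

    which gives, for example, \bar{F}_{\mathrm{opt}} = 2/3 for a single copy (N = 1) of a qubit (d = 2).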

  2. Genetic Algorithm-Based Optimization for Surface Roughness in Cylindrically Grinding Process Using Helically Grooved Wheels

    Science.gov (United States)

    Çaydaş, Ulaş; Çelik, Mahmut

    The present work is focused on the optimization of process parameters in cylindrical surface grinding of AISI 1050 steel with grooved wheels. Response surface methodology (RSM) and genetic algorithm (GA) techniques were merged to optimize the input variable parameters of grinding. The revolution speed of the workpiece, the depth of cut and the number of grooves on the wheel were changed to explore their experimental effects on the surface roughness of the machined bars. Mathematical models were established between the input parameters and the response by using RSM. Then, the developed RSM model was used as the objective function in the GA to optimize the process parameters.
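
    A compact sketch of merging the two techniques: fit a quadratic response surface to (synthetic) roughness data with scikit-learn, then minimize the fitted surface with scipy's differential_evolution as an evolutionary stand-in for the GA; the parameter ranges and data are illustrative, not the experimental values.

      # Fit a quadratic response surface for surface roughness, then minimize it
      # with an evolutionary optimizer (a GA-like stand-in); data are synthetic.
      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(5)
      X = rng.uniform([100, 0.01, 2], [400, 0.05, 8], size=(27, 3))  # speed, depth, grooves
      Ra = (2.0 + 0.002 * X[:, 0] - 20 * X[:, 1] + 0.05 * X[:, 2] ** 2
            + rng.normal(0, 0.05, 27))

      rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, Ra)

      result = differential_evolution(
          lambda x: rsm.predict(x.reshape(1, -1))[0],
          bounds=[(100, 400), (0.01, 0.05), (2, 8)],
          seed=0,
      )
      print("optimal (speed, depth, grooves):", result.x, "predicted Ra:", result.fun)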

  3. A planning support system to optimize approval of private housing development projects

    Science.gov (United States)

    Hussnain, M. Q.; Wakil, K.; Waheed, A.; Tahir, A.

    2016-06-01

    Of Pakistan's population of 182 million, 38% reside in urban areas with an average growth rate of 1.6%, raising the urban housing demand significantly. A poor state response in fulfilling housing needs has resulted in a mushroom growth of private housing schemes (PHS) over the years. Consequently, in just five major cities of Punjab there are 383 legal and 150 illegal private housing development projects against 120 public sector housing schemes. A major factor behind the cancerous growth of unapproved PHS is the prolonged and delayed approval process in the concerned approval authorities, which requires 13 months on average. Currently, manual and paper-based approaches are used for vetting and for granting permission, which is highly subjective and non-transparent. This study aims to design a flexible planning support system (PSS) to optimize the vetting process of PHS projects under any development authority in Pakistan by reducing the time and cost required for site and document investigations. Relying on a review of regulatory documents and interviews with professional planners and land developers, this study describes the structure of a PSS developed using open-source geospatial tools such as OpenGeo Suite, PHP, and PostgreSQL. It highlights the development of a Knowledge Module (based on regulatory documents) containing equations related to scheme type, size (area), location, access road, components of the layout plan, planning standards and other related approval checks. Furthermore, it presents the architecture of the database module and the system data requirements, categorized as base datasets (a built-in part of the PSS) and input datasets (related to the housing project under approval). It is practically demonstrated that developing a customized PSS to optimize the PHS approval process in Pakistan is achievable with geospatial technology. With the provision of such a system, the approval process for private housing schemes not only becomes quicker and user-friendly but also

  4. Evolving a Method to Capture Science Stakeholder Inputs to Optimize Instrument, Payload, and Program Design

    Science.gov (United States)

    Clark, P. E.; Rilee, M. L.; Curtis, S. A.; Bailin, S.

    2012-03-01

    We are developing Frontier, a highly adaptable, stably reconfigurable, web-accessible intelligent decision engine capable of optimizing the design, as well as simulating the operation, of complex systems in response to evolving needs and environments.

  5. Input design for linear dynamic systems using maxmin criteria

    DEFF Research Database (Denmark)

    Sadegh, Payman; Hansen, Lars H.; Madsen, Henrik

    1998-01-01

    This paper considers the problem of input design for maximizing the smallest eigenvalue of the information matrix for linear dynamic systems. The optimization of the smallest eigenvalue is of interest in parameter estimation and parameter change detection problems. We describe a simple cutting...
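
    A brute-force illustration of the criterion (not the cutting-plane approach the abstract refers to): among random binary input sequences for a simple ARX-type regressor, keep the one whose information matrix has the largest smallest eigenvalue.

      # Brute-force max-min-eigenvalue input design for a simple ARX regressor
      # y(t) = a*y(t-1) + b*u(t-1): pick the binary input sequence whose
      # information matrix (Phi^T Phi) has the largest smallest eigenvalue.
      import numpy as np

      rng = np.random.default_rng(11)
      a_true, b_true, N = 0.8, 1.0, 50

      def min_eig(u):
          y = np.zeros(N)
          for t in range(1, N):
              y[t] = a_true * y[t - 1] + b_true * u[t - 1]
          phi = np.column_stack([y[:-1], u[:-1]])          # regressor matrix
          return np.linalg.eigvalsh(phi.T @ phi).min()

      candidates = rng.choice([-1.0, 1.0], size=(200, N))  # random binary inputs
      scores = np.array([min_eig(u) for u in candidates])
      best = candidates[scores.argmax()]
      print("best min eigenvalue:", scores.max())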

  6. Automation of Geometry Input for Building Code Compliance Check

    DEFF Research Database (Denmark)

    Petrova, Ekaterina Aleksandrova; Johansen, Peter Lind; Jensen, Rasmus Lund

    2017-01-01

    Documentation of compliance with the energy performance regulations at the end of the detailed design phase is mandatory for building owners in Denmark. Therefore, besides multidisciplinary input, the building design process requires various iterative analyses, so that the optimal solutions can be identified amongst multiple alternatives.

  7. Modeling and optimization of laser cutting operations

    Directory of Open Access Journals (Sweden)

    Gadallah Mohamed Hassan

    2015-01-01

    Full Text Available Laser beam cutting is one important nontraditional machining process. This paper optimizes the laser beam cutting parameters for stainless steel (316L), considering the effect of input parameters such as power, oxygen pressure, frequency and cutting speed. A statistical design of experiments is carried out at three different levels, and process responses such as average kerf taper (Ta), surface roughness (Ra) and heat affected zones are measured accordingly. A response surface model is developed as a function of the process parameters. Responses predicted by the models (as per Taguchi's L27 orthogonal array) are employed to search for an optimal combination to achieve the desired process yield. Response Surface Models (RSMs) are developed for mean responses, S/N ratio, and standard deviation of responses. Optimization models are formulated as a single-objective optimization problem subject to process constraints. Models are formulated based on Analysis of Variance (ANOVA) and optimized using a MATLAB-developed environment. Optimum solutions are compared with Taguchi methodology results. As such, practicing engineers have means to model, analyze and optimize nontraditional machining processes. Validation experiments are carried out to verify the developed models with success.

  8. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    Science.gov (United States)

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.

  9. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Cho

    2017-01-01

    Full Text Available Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.

  10. Statistical identification of effective input variables

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications
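
    A minimal stand-in for the screening idea, ranking code inputs by the magnitude of their correlation with the output over sampled input combinations; the toy 'code' function replaces an actual safety analysis code, and the full SCREEN procedure adds stagewise regression and threshold tests on top of this.

      # Rank code inputs by |correlation| between sampled input values and the
      # output of a toy "code"; a minimal stand-in for the stagewise procedure.
      import numpy as np

      rng = np.random.default_rng(13)
      n_runs, n_inputs = 60, 8
      X = rng.uniform(0, 1, size=(n_runs, n_inputs))       # sampled input combinations

      def code(x):                                         # stand-in for the safety code
          return 3.0 * x[0] - 2.0 * x[3] + 0.5 * x[5] ** 2 + rng.normal(0, 0.05)

      y = np.array([code(x) for x in X])
      corr = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(n_inputs)])
      ranking = np.argsort(-np.abs(corr))
      print("inputs ranked by sensitivity-importance:", ranking)
      print("correlations:", np.round(corr[ranking], 2))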

  11. Optimization modeling of U.S. renewable electricity deployment using local input variables

    Science.gov (United States)

    Bernstein, Adam

    For the past five years, state Renewable Portfolio Standard (RPS) laws have been a primary driver of renewable electricity (RE) deployments in the United States. However, four key trends currently developing: (i) lower natural gas prices, (ii) slower growth in electricity demand, (iii) challenges of system balancing intermittent RE within the U.S. transmission regions, and (iv) fewer economical sites for RE development, may limit the efficacy of RPS laws over the remainder of the current RPS statutes' lifetime. An outsized proportion of U.S. RE build occurs in a small number of favorable locations, increasing the effects of these variables on marginal RE capacity additions. A state-by-state analysis is necessary to study the U.S. electric sector and to generate technology-specific generation forecasts. We used LP optimization modeling similar to the National Renewable Energy Laboratory (NREL) Regional Energy Deployment System (ReEDS) to forecast RE deployment across the 8 U.S. states with the largest electricity load, and found state-level RE projections to Year 2031 significantly lower than those implied in the Energy Information Administration (EIA) 2013 Annual Energy Outlook forecast. Additionally, the majority of states do not achieve their RPS targets in our forecast. Combined with the tendency of prior research and RE forecasts to focus on larger national and global scale models, we posit that further bottom-up state and local analysis is needed for more accurate policy assessment, forecasting, and ongoing revision of variables as parameter values evolve through time. Current optimization software eliminates much of the need for algorithm coding and programming, allowing for rapid model construction and updating across many customized state and local RE parameters. Further, our results can be tested against the empirical outcomes that will be observed over the coming years, and the forecast deviation from the actuals can be attributed to discrete parameter

  12. Development of a compact and cost effective multi-input digital signal processing system

    Science.gov (United States)

    Darvish-Molla, Sahar; Chin, Kenrick; Prestwich, William V.; Byun, Soo Hyun

    2018-01-01

    A prototype digital signal processing system (DSP) was developed using a microcontroller interfaced with a 12-bit sampling ADC, which offers a considerably inexpensive solution for processing multiple detectors with high throughput. After digitization of the incoming pulses, in order to maximize the output counting rate, a simple algorithm was employed for pulse height analysis. Moreover, an algorithm aiming at real-time pulse pile-up deconvolution was implemented. The system was tested using a NaI(Tl) detector in comparison with a traditional analogue system and a commercial digital system for a variety of count rates. The performance of the prototype system was consistently superior to the analogue and commercial digital systems up to an input count rate of 61 kcps, while it was slightly inferior to the commercial digital system, but still superior to the analogue system, at the higher input rates. Considering overall cost, size and flexibility, this custom-made multi-input digital signal processing system (MMI-DSP) was the best, reliable choice for the purpose of 2D microdosimetric data collection, or for any measurement in which simultaneous multi-data collection is required.

  13. Contingency Contractor Optimization Phase 3 Sustainment Database Design Document - Contingency Contractor Optimization Tool - Prototype

    Energy Technology Data Exchange (ETDEWEB)

    Frazier, Christopher Rawls; Durfee, Justin David; Bandlow, Alisa; Gearhart, Jared Lee; Jones, Katherine A

    2016-05-01

    The Contingency Contractor Optimization Tool – Prototype (CCOT-P) database is used to store input and output data for the linear program model described in [1]. The database supports queries to retrieve these data, as well as updates and inserts of new input data.
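
    The CCOT-P schema itself is not reproduced in the record; the snippet below is only a generic illustration of how such a store of LP inputs and outputs could be organized and queried, with hypothetical table and column names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real database file
conn.executescript("""
CREATE TABLE lp_input  (scenario TEXT, parameter TEXT, value REAL);
CREATE TABLE lp_output (scenario TEXT, variable  TEXT, value REAL);
""")

# Insert new input data for a scenario (hypothetical parameter names).
conn.executemany(
    "INSERT INTO lp_input (scenario, parameter, value) VALUES (?, ?, ?)",
    [("baseline", "contractor_day_rate", 950.0),
     ("baseline", "max_headcount", 120.0)],
)

# Store a solver result and query it back.
conn.execute("INSERT INTO lp_output VALUES (?, ?, ?)", ("baseline", "total_cost", 1.24e6))
for row in conn.execute("SELECT variable, value FROM lp_output WHERE scenario = ?", ("baseline",)):
    print(row)
```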

  14. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme

  15. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    The article considers the allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves fixed assets management policy. Developing an algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise is particularly relevant, since such enterprises have a significant amount of fixed assets. Provided the algorithm is adequate, it allows the appropriateness of investments in fixed assets to be evaluated and the final financial results of the enterprise to be studied as a function of management decisions in depreciation policy. It should be noted that the model in question is always degenerate for such an enterprise, owing to the presence of zero rows in the matrix of capital expenditures for structural elements that are unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs in this model. The algorithm was developed by the authors and served as the basis for a flowchart for subsequent software implementation. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This allows us to consider the solutions discussed in the article to be of interest to economists at various industrial enterprises.

  16. Mars 2.2 code manual: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Lee, Won Jae; Jeong, Jae Jun; Lee, Young Jin; Hwang, Moon Kyu; Kim, Kyung Doo; Lee, Seung Wook; Bae, Sung Won

    2003-07-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This input manual provides a complete list of the input required to run MARS. The manual is divided largely into two parts, namely, the one-dimensional part and the multi-dimensional part. The inputs for auxiliary parts such as minor edit requests and graph formatting inputs are shared by the two parts, and as such mixed input is possible. The overall structure of the input is modeled on that of RELAP5, and as such the layout of the manual is very similar to that of the RELAP5 manual. This similarity to the RELAP5 input is intentional, as this input scheme allows minimum modification between the inputs of RELAP5 and MARS. The MARS development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  17. MARS code manual volume II: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu

    2010-02-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This input manual provides a complete list of the input required to run MARS. The manual is divided largely into two parts, namely, the one-dimensional part and the multi-dimensional part. The inputs for auxiliary parts such as minor edit requests and graph formatting inputs are shared by the two parts, and as such mixed input is possible. The overall structure of the input is modeled on that of RELAP5, and as such the layout of the manual is very similar to that of the RELAP5 manual. This similarity to the RELAP5 input is intentional, as this input scheme allows minimum modification between the inputs of RELAP5 and MARS 3.1. The MARS 3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  18. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
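
    A minimal sketch of the Pareto screening idea, assuming each candidate input set has already been scored against every calibration target and that lower scores mean better fit: a set is kept only if no other set matches or beats it on all targets and strictly beats it on at least one.

```python
import numpy as np

def pareto_frontier(scores):
    """Return indices of non-dominated rows.

    scores : (n_sets, n_targets) array of target-specific goodness-of-fit
             values, where lower is better for every target.
    """
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        others = scores[keep]
        dominated = np.any(np.all(others <= scores[i], axis=1) &
                           np.any(others < scores[i], axis=1))
        if dominated:
            keep[i] = False
    return np.flatnonzero(keep)

# Toy example: 5 candidate input sets scored against 2 calibration targets.
gof = np.array([[1.0, 5.0],
                [2.0, 2.0],
                [3.0, 1.0],
                [2.5, 2.5],   # dominated by [2.0, 2.0]
                [4.0, 4.0]])  # dominated
print(pareto_frontier(gof))   # -> [0 1 2]
```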

  19. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope

    Directory of Open Access Journals (Sweden)

    Cheng-Yang Chang

    2017-10-01

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the “open loop sensitivity” of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  20. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope.

    Science.gov (United States)

    Chang, Cheng-Yang; Chen, Tsung-Lin

    2017-10-31

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the "open loop sensitivity" of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  1. Optimization of light quality from color mixing light-emitting diode systems for general lighting

    Science.gov (United States)

    Thorseth, Anders

    2012-03-01

    Given the problem of metamerism inherent in color mixing in light-emitting diode (LED) systems with more than three distinct colors, a method for optimizing the spectral output of a multicolor LED system with regard to standardized light quality parameters has been developed. The composite spectral power distribution of the LEDs is simulated using spectral radiometric measurements of single commercially available LEDs at varying input power, to account for the efficiency droop and other non-linear effects in electrical power vs. light output. The method uses the electrical input powers as input parameters in a randomized steepest descent optimization. The resulting spectral power distributions are evaluated with regard to light quality using the standard characteristics: CIE color rendering index, correlated color temperature, and chromaticity distance. The results indicate Pareto optimal boundaries for each system, mapping the capabilities of the simulated lighting systems with regard to the light quality characteristics.
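
    The optimization loop can be pictured with a much-simplified stand-in: the toy objective below matches a target spectrum instead of the CIE colour-rendering, colour-temperature, and chromaticity metrics used in the study, and the measured single-LED spectra are replaced by synthetic Gaussians. Only the randomized descent structure (perturb the electrical input powers, keep the change if the objective improves) is the point being illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(380, 780, 401)                      # wavelength grid [nm]

def led_spectrum(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Stand-ins for radiometrically measured single-LED spectra (4-channel system).
basis = np.array([led_spectrum(c, 20) for c in (450, 520, 590, 630)])
target = led_spectrum(560, 80)                       # hypothetical target SPD

def objective(powers):
    mix = powers @ basis                             # composite SPD (linear toy model)
    return np.sum((mix - target) ** 2)

powers = np.full(4, 0.5)                             # initial relative drive powers
best = objective(powers)
for _ in range(5000):
    trial = np.clip(powers + rng.normal(0, 0.02, 4), 0.0, 1.0)
    value = objective(trial)
    if value < best:                                 # keep only improving steps
        powers, best = trial, value

print("optimized relative powers:", np.round(powers, 3))
```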

  2. Observer-Based Perturbation Extremum Seeking Control with Input Constraints for Direct-Contact Membrane Distillation Process

    KAUST Repository

    Eleiwi, Fadi; Laleg-Kirati, Taous-Meriem

    2017-01-01

    has pump flow rates as process inputs. The objective of the controller is to optimize the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered

  3. Econometric models for biohydrogen development.

    Science.gov (United States)

    Lee, Duu-Hwa; Lee, Duu-Jong; Veziroglu, Ayfer

    2011-09-01

    Biohydrogen is considered an attractive clean energy source due to its high energy content and environmentally friendly conversion. Analyzing various economic scenarios can help decision makers optimize development strategies for the biohydrogen sector. This study surveys econometric models of biohydrogen development, including input-output models, the life-cycle assessment approach, computable general equilibrium models, linear programming models, and the impact pathway approach. The fundamentals of each model were briefly reviewed to highlight their advantages and disadvantages. The input-output model and the simplified economic input-output life-cycle assessment model proved most suitable for economic analysis of biohydrogen energy development. A sample analysis using the input-output model for forecasting biohydrogen development in the United States is given. Copyright © 2011 Elsevier Ltd. All rights reserved.
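
    The input-output calculation at the heart of such models is the Leontief relation x = (I - A)^(-1) d, with A the technical-coefficient matrix and d the final demand. The numbers below are invented purely to show the mechanics; an EIO-LCA-style variant additionally attaches an intensity vector to the sector outputs.

```python
import numpy as np

# Hypothetical 3-sector technical coefficient matrix A (inputs per unit of output);
# sectors: [energy, agriculture/feedstock, manufacturing].
A = np.array([[0.10, 0.15, 0.20],
              [0.05, 0.10, 0.10],
              [0.20, 0.25, 0.15]])

d = np.array([50.0, 20.0, 80.0])          # final demand by sector (arbitrary units)

# Total output required across the economy to satisfy that demand: x = (I - A)^-1 d.
x = np.linalg.solve(np.eye(3) - A, d)
print("total sector outputs:", np.round(x, 2))

# EIO-LCA-style extension: multiply by (assumed) emission intensities per unit output.
intensity = np.array([0.9, 0.3, 0.5])     # e.g. kg CO2-eq per unit of output (illustrative)
print("embodied emissions:", np.round(intensity @ x, 2))
```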

  4. Westinghouse corporate development of a decision software program for Radiological Evaluation Decision Input (REDI)

    International Nuclear Information System (INIS)

    Bush, T.S.

    1995-01-01

    In December 1992, the Department of Energy (DOE) implemented the DOE Radiological Control Manual (RCM). Westinghouse Idaho Nuclear Company, Inc. (WINCO) submitted an implementation plan showing how compliance with the manual would be achieved. This implementation plan was approved by DOE in November 1992. Although WINCO had already been working under a similar Westinghouse RCM, the DOE RCM brought some new and challenging requirements. One such requirement was that of having procedure writers and job planners create the radiological input in work control procedures. Until this time, that information was being provided by radiological engineering or a radiation safety representative. As a result of this requirement, Westinghouse developed the Radiological Evaluation Decision Input (REDI) program

  5. Westinghouse corporate development of a decision software program for Radiological Evaluation Decision Input (REDI)

    Energy Technology Data Exchange (ETDEWEB)

    Bush, T.S. [Westinghosue Idaho Nuclear Co., Inc., Idaho Falls, ID (United States)

    1995-03-01

    In December 1992, the Department of Energy (DOE) implemented the DOE Radiological Control Manual (RCM). Westinghouse Idaho Nuclear Company, Inc. (WINCO) submitted an implementation plan showing how compliance with the manual would be achieved. This implementation plan was approved by DOE in November 1992. Although WINCO had already been working under a similar Westinghouse RCM, the DOE RCM brought some new and challenging requirements. One such requirement was that of having procedure writers and job planners create the radiological input in work control procedures. Until this time, that information was being provided by radiological engineering or a radiation safety representative. As a result of this requirement, Westinghouse developed the Radiological Evaluation Decision Input (REDI) program.

  6. Look Who's Talking: Speech Style and Social Context in Language Input to Infants Are Linked to Concurrent and Future Speech Development

    Science.gov (United States)

    Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.

    2014-01-01

    Language input is necessary for language learning, yet little is known about whether, in natural environments, the speech style and social context of language input to children impacts language development. In the present study we investigated the relationship between language input and language development, examining both the style of parental…

  7. Optimal Control Development System for Electrical Drives

    Directory of Open Access Journals (Sweden)

    Marian GAICEANU

    2008-08-01

    In this paper the optimal electrical drive development system is presented. It consists of both electrical drive types: DC and AC. In order to implement the optimal control for the AC drive system, an Altivar 71 inverter, a Frato magnetic particle brake (as load), a three-phase induction machine, and a dSpace 1104 controller have been used. The on-line solution of the matrix Riccati differential equation (MRDE) is computed by the dSpace 1104 controller, based on the corresponding feedback signals, generating the optimal speed reference for the AC drive system. The optimal speed reference is tracked by the Altivar 71 inverter, leading to energy reduction in the AC drive. The classical control (consisting of rotor field oriented control with PI controllers) and the optimal one have been implemented by designing an adequate ControlDesk interface. The three-phase induction machine (IM) is controlled at constant flux; therefore, the linear dynamic mathematical model of the IM has been obtained. The optimal control law provides transient regimes with minimal energy consumption. The solution obtained by integration of the MRDE is oriented towards numerical implementation by using a zero-order hold. The development system is very useful for researchers, doctoral students, or experts training in electrical drives. The experimental results are shown.

  8. Development of a 6 DOF force-reflecting master input device

    International Nuclear Information System (INIS)

    Yoon, Ji Sup; Yoon, Ho Sik

    1999-05-01

    The teleoperator is a very effective tool for various tasks of nuclear application in that it can reduce the operators' exposure to radiation. For the utmost performance of the teleoperator, the force reflection capability is essential. This capability is a function of transmitting the contact force between the teleoperator and the object to the human operator. With this function, the human operator in the remote area can effectively guide the motion of the teleoperator so that it follows a safety-guaranteed path. In this research a fully force-reflecting input device (6 axis) is developed. To develop the force-reflecting device, the state of the art is surveyed. Based on this survey, a 6 DOF manipulator which controls a power manipulator is fabricated and its performance is investigated. Also, various force reflection algorithms are analyzed and an enhanced algorithm is proposed. (author). 18 refs., 4 tabs., 26 figs

  9. Development of a 6 DOF force-reflecting master input device

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Ji Sup; Yoon, Ho Sik

    1999-05-01

    The teleoperator is a very effective tool for various tasks of nuclear application in that it can reduce the operators' exposure to radiation. For the utmost performance of the teleoperator, the force reflection capability is essential. This capability is a function of transmitting the contact force between the teleoperator and the object to the human operator. With this function, the human operator in the remote area can effectively guide the motion of the teleoperator so that it follows a safety-guaranteed path. In this research a fully force-reflecting input device (6 axis) is developed. To develop the force-reflecting device, the state of the art is surveyed. Based on this survey, a 6 DOF manipulator which controls a power manipulator is fabricated and its performance is investigated. Also, various force reflection algorithms are analyzed and an enhanced algorithm is proposed. (author). 18 refs., 4 tabs., 26 figs.

  11. Gestures and multimodal input

    OpenAIRE

    Keates, Simeon; Robinson, Peter

    1999-01-01

    For users with motion impairments, the standard keyboard and mouse arrangement for computer access often presents problems. Other approaches have to be adopted to overcome this. In this paper, we will describe the development of a prototype multimodal input system based on two gestural input channels. Results from extensive user trials of this system are presented. These trials showed that the physical and cognitive loads on the user can quickly become excessive and detrimental to the interac...

  12. Characteristic features of determining the labor input and estimated cost of the development and manufacture of equipment

    Science.gov (United States)

    Kurmanaliyev, T. I.; Breslavets, A. V.

    1974-01-01

    The difficulties in obtaining exact calculation data for the labor input and estimated cost are noted. A method of calculating the labor cost of design work using provisional normative indexes for individual types of operations is proposed. Values of certain coefficients recommended for use in practical calculations of the labor input for the development of new scientific equipment for space research are presented.

  13. Implementing the cost-optimal methodology in EU countries

    DEFF Research Database (Denmark)

    Atanasiu, Bogdan; Kouloumpi, Ilektra; Thomsen, Kirsten Engelund

    This study presents three cost-optimal calculations. The overall aim is to provide a deeper analysis and to provide additional guidance on how to properly implement the cost-optimality methodology in Member States. Without proper guidance and lessons from exemplary case studies using realistic input data (reflecting the likely future development), there is a risk that the cost-optimal methodology may be implemented at sub-optimal levels. This could lead to a misalignment between the defined cost-optimal levels and the long-term goals, leaving a significant energy saving potential unexploited. Therefore, this study provides more evidence on the implementation of the cost-optimal methodology and highlights the implications of choosing different values for key factors (e.g. discount rates, simulation variants/packages, costs, energy prices) at national levels. The study demonstrates how existing

  14. Research on magnetorheological damper suspension with permanent magnet and magnetic valve based on developed FOA-optimal control algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Ping; Gao, Hong [Anhui Polytechnic University, Wuhu (China); Niu, Limin [Anhui University of Technology, Maanshan (China)

    2017-07-15

    Due to the fail-safe problem, it was difficult for the existing magnetorheological damper (MD) to be widely applied in automotive suspensions. Therefore, permanent magnets and magnetic valves were introduced into existing MDs so that the fail-safe problem could be solved by the magnets and the damping force could be adjusted easily by the magnetic valve. Thus, a new magnetorheological damper with permanent magnet and magnetic valve (MDPMMV) was developed and the MDPMMV suspension was studied. First of all, the mechanical structure of the existing magnetorheological damper used in automobile suspensions was redesigned to comprise a permanent magnet and a magnetic valve. In addition, a prediction model of the damping force was built based on electromagnetics theory and the Bingham model. Experimental research was conducted on the newly designed damper, and the goodness of fit between the experimental results and the model simulations was high. On this basis, a quarter-car suspension model was built. Then, a fruit fly optimization algorithm (FOA)-optimal control algorithm suitable for automobile suspensions was designed by extending the standard FOA. Finally, simulation experiments and bench tests with pulse-road and B-road surface inputs were carried out, and the results indicated that the working performance of the MDPMMV suspension based on the FOA-optimal control algorithm was good.
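
    The record does not give the damping-force expressions, so the sketch below uses the generic Bingham form F = c·v + F_y·sgn(v), with a yield force that is assumed, for illustration only, to vary linearly with coil current around a permanent-magnet bias; all coefficients are placeholders.

```python
import numpy as np

def bingham_damping_force(velocity, current, c_visc=1200.0, fy_bias=300.0, fy_gain=250.0):
    """Illustrative Bingham-model force for an MR damper with a permanent-magnet bias.

    velocity : piston velocity [m/s]
    current  : coil current [A]; positive current adds to the magnet's bias field,
               negative current weakens it (the magnetic-valve idea, simplified)
    c_visc   : viscous coefficient [N·s/m]           (assumed value)
    fy_bias  : yield force from the magnet alone [N] (assumed value)
    fy_gain  : yield-force change per ampere [N/A]   (assumed value)
    """
    f_yield = np.clip(fy_bias + fy_gain * current, 0.0, None)
    return c_visc * velocity + f_yield * np.sign(velocity)

# Fail-safe check: the damping force stays finite and useful at zero coil current.
for i in (-1.0, 0.0, 1.0):
    print(f"I = {i:+.1f} A -> F = {bingham_damping_force(0.2, i):7.1f} N")
```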

  15. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design, which amounts to a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  16. Development of Input Function Measurement System for Small Animal PET Study

    International Nuclear Information System (INIS)

    Kim, Jong Guk; Kim, Byung Su; Kim, Jin Su

    2010-01-01

    For quantitative measurement of radioactivity concentration in tissue with a validated tracer kinetic model, a highly sensitive detection system is required for blood sampling. Accurate measurement of the time activity curves (TACs) of labeled compounds in blood (plasma) makes it possible to provide quantitative information on biological parameters of interest in local tissue. In particular, the development of new tracers for PET imaging requires knowledge of the kinetics of the tracer in the body and in arterial blood and plasma. The conventional approach to obtaining an input function is to sample arterial blood sequentially by hand as a function of time. Several continuous blood sampling systems have been developed and used in the nuclear medicine research field to overcome the limited temporal resolution of sampling by the conventional method. In this work, we developed a highly sensitive GSO detector with a unique geometric design for small-animal blood activity measurement

  17. Synaptic inputs compete during rapid formation of the calyx of Held: a new model system for neural development.

    Science.gov (United States)

    Holcomb, Paul S; Hoffpauir, Brian K; Hoyson, Mitchell C; Jackson, Dakota R; Deerinck, Thomas J; Marrs, Glenn S; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H; Spirou, George A

    2013-08-07

    Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, are models to study neural development in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA)>200 μm2/d centered on a single age at postnatal day 3 in mice and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 d. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms to select "winning" inputs by regional reinforcement of postsynaptic membrane to mediate size and strength of competing synaptic inputs.

  18. Effects of Textual Enhancement and Input Enrichment on L2 Development

    Science.gov (United States)

    Rassaei, Ehsan

    2015-01-01

    Research on second language (L2) acquisition has recently sought to include formal instruction into second and foreign language classrooms in a more unobtrusive and implicit manner. Textual enhancement and input enrichment are two techniques which are aimed at drawing learners' attention to specific linguistic features in input and at the same…

  19. Parameter optimization via cuckoo optimization algorithm of fuzzy controller for energy management of a hybrid power system

    International Nuclear Information System (INIS)

    Berrazouane, S.; Mohammedi, K.

    2014-01-01

    Highlights: • An optimized fuzzy logic controller (FLC) for operating a standalone hybrid power system, based on the cuckoo search algorithm. • Comparison between the FLC optimized by cuckoo search and one optimized by swarm intelligence. • Loss of power supply probability and levelized energy cost are introduced. - Abstract: This paper presents the development of an optimized fuzzy logic controller (FLC) for operating a standalone hybrid power system based on the cuckoo search algorithm. The FLC inputs are the batteries' state of charge (SOC) and the net power flow; the FLC outputs are the power rates of the batteries, the photovoltaic system, and the diesel generator. Data for weekly solar irradiation, ambient temperature, and load profile are used to tune the proposed controller by using the cuckoo search algorithm. The optimized FLC is able to minimize the loss of power supply probability (LPSP), excess energy (EE), and levelized energy cost (LEC). Moreover, the results of cuckoo search (CS) optimization are better than those of particle swarm optimization (PSO) for the fuzzy system controller.
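
    A compact cuckoo search loop, shown on a generic benchmark objective rather than the paper's fuzzy-controller tuning problem: the FLC membership parameters would replace the decision vector and a weighted LPSP/EE/LEC cost would replace the sphere function. Step sizes use the usual Mantegna approximation of Lévy flights.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-flight step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=500, lb=-5.0, ub=5.0, alpha=0.01):
    nests = rng.uniform(lb, ub, (n_nests, dim))
    fitness = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fitness.argmin()].copy()
        # New solutions by Levy flights around the current nests, compared with random nests.
        for i in range(n_nests):
            trial = np.clip(nests[i] + alpha * levy_step(dim) * (nests[i] - best), lb, ub)
            j = rng.integers(n_nests)
            if f(trial) < fitness[j]:
                nests[j], fitness[j] = trial, f(trial)
        # Abandon a fraction pa of the worst nests and rebuild them at random.
        n_abandon = max(1, int(pa * n_nests))
        worst = np.argsort(fitness)[-n_abandon:]
        nests[worst] = rng.uniform(lb, ub, (n_abandon, dim))
        fitness[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fitness.argmin()], fitness.min()

sphere = lambda x: float(np.sum(x ** 2))     # placeholder for a weighted LPSP/EE/LEC cost
x_best, f_best = cuckoo_search(sphere, dim=4)
print("best solution:", np.round(x_best, 4), " objective:", round(f_best, 6))
```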

  20. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    Science.gov (United States)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in
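
    The response-surface-plus-Monte-Carlo pattern can be sketched as follows: fit a quadratic surface with a two-factor interaction to synthetic DOE data, then propagate assumed input uncertainty through the fitted surface by sampling. The variables, coefficients, and uncertainty levels are invented and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for DOE mixture data: two mixture variables x1, x2 and a
# measured response r (e.g. a fuel regression rate); values are illustrative only.
X = rng.uniform(0.1, 0.6, size=(30, 2))
r = (1.2 + 2.0 * X[:, 0] + 1.1 * X[:, 1] - 1.5 * X[:, 0] * X[:, 1]
     - 0.8 * X[:, 0] ** 2 + rng.normal(0, 0.05, 30))

# Quadratic response surface with single and two-factor interaction terms:
# r ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
def design(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(design(X[:, 0], X[:, 1]), r, rcond=None)

# Monte Carlo propagation of (assumed) composition uncertainty at one design point.
x1_s = rng.normal(0.40, 0.02, 100_000)
x2_s = rng.normal(0.30, 0.02, 100_000)
r_s = design(x1_s, x2_s) @ coef

print(f"mean response {r_s.mean():.3f}, 95% interval "
      f"[{np.percentile(r_s, 2.5):.3f}, {np.percentile(r_s, 97.5):.3f}]")
```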

  1. Original Framework for Optimizing Hybrid Energy Supply

    Directory of Open Access Journals (Sweden)

    Amevi Acakpovi

    2016-01-01

    This paper proposes an original framework for optimizing hybrid energy systems. The recent growth of hybrid energy systems in remote areas across the world, added to the increasing cost of renewable energy, has made the development of hybrid energy systems inevitable. Hybrid energy systems always pose a cost optimization problem, which has been approached from different perspectives in the recent past. This paper proposes a framework to guide techniques for optimizing hybrid energy systems in general. The proposed framework comprises four stages: identification of input variables for energy generation, establishment of models of energy generation for individual sources, development of artificial intelligence, and finally summation of the selected sources. A case study of a solar, wind, and hydro hybrid system was undertaken with a linear programming approach. Substantial results were obtained with regard to how load requests were consistently satisfied while minimizing the cost of electricity. The originality of the developed framework lies in its inclusion of models of the individual energy sources, which makes the optimization problem more complex. This paper also has implications for the development of policies that will encourage the integration and development of renewable energies.
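
    The linear-programming stage of such a framework might look like the sketch below: dispatch three sources so that every hour's load is met at minimum cost, subject to per-source capacity limits. The costs, capacities, and load profile are made-up numbers, and the real framework adds resource models and an AI selection stage on top.

```python
import numpy as np
from scipy.optimize import linprog

hours = 4
load = np.array([30.0, 45.0, 60.0, 40.0])            # demand per hour [kW] (illustrative)
cap  = {"solar": [20, 35, 40, 25], "wind": [15, 10, 20, 30], "hydro": [50, 50, 50, 50]}
cost = {"solar": 0.04, "wind": 0.05, "hydro": 0.09}  # $/kWh (illustrative)
sources = list(cap)

# Decision variables: generation of each source in each hour, ordered source-major.
c = np.repeat([cost[s] for s in sources], hours)
bounds = [(0, cap[s][h]) for s in sources for h in range(hours)]

# Equality constraints: for every hour, the three sources together meet the load.
A_eq = np.zeros((hours, len(c)))
for h in range(hours):
    for k, s in enumerate(sources):
        A_eq[h, k * hours + h] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=load, bounds=bounds, method="highs")
print("total cost: $%.2f" % res.fun)
print("dispatch (kW):", np.round(res.x.reshape(len(sources), hours), 1))
```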

  2. Development of a computer code to couple PWR-GALE output and PC-CREAM input

    Science.gov (United States)

    Kuntjoro, S.; Budi Setiawan, M.; Nursinta Adi, W.; Deswandri; Sunaryo, G. R.

    2018-02-01

    Radionuclide dispersion analysis is an important part of reactor safety analysis. From this analysis, the doses received by radiation workers and by communities around a nuclear reactor can be obtained. Radionuclide dispersion analysis under normal operating conditions is carried out using the PC-CREAM code, which requires input data such as the source term and the population distribution. The source term is derived from the output of another program, PWR-GALE, and the population distribution data must be written in a specific format. Compiling PC-CREAM inputs manually requires high accuracy, since it involves large amounts of data in specific formats, and manual compilation often introduces errors. To minimize errors in input generation, a coupling program for PWR-GALE and PC-CREAM and a program for writing the population distribution in the PC-CREAM input format were created. This work was therefore conducted to develop the coupling between PWR-GALE output and PC-CREAM input and the code for writing population data in the required format. The programming was done in the Python language, which has the advantages of being multiplatform, object-oriented, and interactive. The result of this work is software for coupling the source term data and for writing the population distribution data, so that PC-CREAM inputs can be prepared easily and formatting errors avoided. The source term coupling program for PWR-GALE and PC-CREAM has been completed, so that PC-CREAM source term and population distribution inputs can be generated easily and in the required format.
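
    The parse-then-reformat pattern described above can be illustrated as follows; the file layouts, nuclide labels, and column widths are hypothetical placeholders, since the actual PWR-GALE and PC-CREAM formats are not reproduced in the record.

```python
import re

def read_source_term(gale_output_path):
    """Parse 'nuclide  release-rate' pairs from a (hypothetical) PWR-GALE output listing."""
    source_term = {}
    with open(gale_output_path) as fh:
        for line in fh:
            m = re.match(r"\s*([A-Z][a-z]?-\d+m?)\s+([0-9.Ee+-]+)\s*$", line)
            if m:
                source_term[m.group(1)] = float(m.group(2))
    return source_term

def write_pc_cream_source(source_term, out_path):
    """Write the source term in a fixed-width layout (illustrative, not the real PC-CREAM format)."""
    with open(out_path, "w") as fh:
        fh.write("* source term generated from PWR-GALE output\n")
        for nuclide, rate in sorted(source_term.items()):
            fh.write(f"{nuclide:<8s}{rate:12.4E}\n")

def write_population_grid(grid, out_path):
    """Write a population distribution, one distance band per line (illustrative layout)."""
    with open(out_path, "w") as fh:
        for band in grid:
            fh.write(" ".join(f"{int(n):8d}" for n in band) + "\n")

if __name__ == "__main__":
    # Create a tiny fake PWR-GALE listing so the example is self-contained.
    with open("pwr_gale_output.txt", "w") as fh:
        fh.write("  I-131    3.40E-02\n  Cs-137   1.10E-03\n  header or comment line\n")
    st = read_source_term("pwr_gale_output.txt")
    write_pc_cream_source(st, "pc_cream_source.inp")
    write_population_grid([[1200, 800, 350], [400, 150, 90]], "population.inp")
    print("converted nuclides:", list(st))
```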

  3. Maintenance Optimization of High Voltage Substation Model

    Directory of Open Access Journals (Sweden)

    Radim Bris

    2008-01-01

    A real system from practice is selected for optimization purposes in this paper. We describe the actual scheme of a high voltage (HV) substation in its different working states. A model scheme of the 22 kV HV substation is presented in the paper and serves as the input scheme for the maintenance optimization. The input reliability and cost parameters of all components are given: the preventive and corrective maintenance costs, the actual maintenance period (the quantity being optimized), the failure rate, and the mean time to repair (MTTR).
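
    The record does not include the optimization itself; as a generic illustration of the trade-off being solved, the sketch below sweeps the preventive maintenance period T for a component with an increasing (Weibull) failure rate and minimizes the classic age-replacement cost rate (Cp·R(T) + Cc·F(T)) / integral of R(t) from 0 to T, using assumed cost and reliability parameters.

```python
import numpy as np

beta, eta = 2.5, 8.0      # assumed Weibull shape and scale [years]
C_p, C_c = 1.0, 12.0      # preventive vs. corrective maintenance cost (relative units)

def cost_rate(T, n=2000):
    t = np.linspace(0.0, T, n)
    R = np.exp(-(t / eta) ** beta)                        # reliability R(t)
    # Expected cycle length under age replacement: integral of R(t) over [0, T].
    expected_cycle_length = np.sum(0.5 * (R[:-1] + R[1:]) * np.diff(t))
    F_T = 1.0 - R[-1]                                     # probability of failing before T
    return (C_p * R[-1] + C_c * F_T) / expected_cycle_length

periods = np.linspace(0.5, 15.0, 200)
rates = np.array([cost_rate(T) for T in periods])
T_opt = periods[rates.argmin()]
print(f"optimal preventive maintenance period ~ {T_opt:.2f} years, "
      f"cost rate {rates.min():.3f} per year")
```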

  4. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    Science.gov (United States)

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick Prescott filter is discussed. Numerical results involving actual patient data are presented.
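
    As a loose finite-dimensional analogue of the non-blind step (the paper's scheme is an infinite-dimensional, PDE-based blind deconvolution), the snippet below recovers an input signal from a convolved, noisy observation by Tikhonov-regularized least squares, with an assumed impulse response standing in for the skin-transport kernel; all signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 200, 0.05                                   # time grid (hours, illustrative)
t = np.arange(n) * dt

# Assumed impulse response of the blood -> skin -> sensor pathway (not the fitted PDE kernel).
h = np.exp(-t / 0.8)
h /= h.sum()

# Hypothetical "true" BrAC episode and the TAC it would produce.
brac = np.exp(-0.5 * ((t - 2.0) / 0.6) ** 2)
A = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
tac = A @ brac + rng.normal(0, 0.01, n)             # noisy transdermal measurement

# Tikhonov-regularized deconvolution: minimize ||A x - tac||^2 + lam^2 ||x||^2.
lam = 0.05
x_hat = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ tac)
print("peak true BrAC %.2f at t=%.1f h, recovered %.2f at t=%.1f h"
      % (brac.max(), t[brac.argmax()], x_hat.max(), t[x_hat.argmax()]))
```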

  5. Cortical correlations support optimal sequence memory

    OpenAIRE

    Helias, Moritz; Schuecker, Jannis; Dahmen, David; Goedeke, Sven

    2017-01-01

    The brain processes time-varying input, but it is not known whether its dynamical state is optimal for this task. Indeed, recurrent and randomly coupled networks of rate neurons display rich internal dynamics near the transition to chaos [1], which has been associated with optimal information processing capabilities [2, 3, 4]. In particular, the dynamics becomes arbitrarily slow at the onset of chaos, similar to ‘critical slowing down’. The interplay between time-dependent input signals, network ...

  6. Integrated design optimization research and development in an industrial environment

    Science.gov (United States)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-01-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system, DESIGN-OPT, has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module, SHAPE-OPT, has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.

  7. Developing optimized prioritizing road maintenance

    Directory of Open Access Journals (Sweden)

    Ewadh Hussein Ali

    2018-01-01

    Increasing demand for efficient maintenance of the existing roadway system requires optimal use of the allocated funds. The paper demonstrates optimized methods for prioritizing maintenance implementation projects. A selected zone of the roadway system in Kerbala city serves as the study area to demonstrate the application of the developed prioritization process. The PAVER pavement management system, integrated with GIS, is used to estimate and display the pavement condition index (PCI) and thereby establish a maintenance priority. In addition to the simple ranking by PCI produced as PAVER output, the paper introduces a PCI measure for each roadway section. The paper also introduces ranking by multiple measures, investigated through expert knowledge of the measures that affect prioritization and their respective weights, obtained through a predesigned questionnaire. The maintenance priority index (MPI) is related to the cost of the proposed maintenance, the ease of the proposed maintenance, the average daily traffic, and the functional classification of the roadway, in addition to PCI. Further, ranking by incremental benefit-cost analysis provides an optimized process based on the benefit and cost of maintenance. The paper presents an efficient display of the layout and ranking for the selected zone based on the MPI and the incremental benefit-cost ratio (BCR) method. Although the two developed methods produce different priority layouts, a statistical test shows no significant difference between the rankings of the prioritization methods.
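
    A weighted multi-measure priority index of the kind described can be computed in a few lines. The weights and section data below are placeholders (the paper derives its weights from an expert questionnaire): each measure is normalized to 0-1 in its preferred direction, combined into an MPI, and the sections are ranked.

```python
import numpy as np

# Columns: PCI, maintenance cost, ease of maintenance, average daily traffic, functional class
sections = ["A1", "A2", "B1", "B2"]
data = np.array([
    [45., 120., 0.8,  9000., 3.],
    [70.,  60., 0.6,  4000., 2.],
    [30., 200., 0.4, 12000., 3.],
    [55.,  90., 0.9,  6000., 1.],
])

# +1 if a larger value should raise priority, -1 if it should lower it (assumed directions).
direction = np.array([-1, -1, +1, +1, +1])            # e.g. low PCI -> higher priority
weights = np.array([0.35, 0.15, 0.10, 0.25, 0.15])    # placeholder questionnaire weights

lo, hi = data.min(axis=0), data.max(axis=0)
norm = (data - lo) / (hi - lo)                        # 0-1 scaling per measure
norm[:, direction < 0] = 1.0 - norm[:, direction < 0]

mpi = norm @ weights
for rank, i in enumerate(np.argsort(-mpi), start=1):
    print(f"{rank}. section {sections[i]}  MPI = {mpi[i]:.3f}")
```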

  8. Factorization properties of the optimal signaling distribution of multi-dimensional QAM constellations

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov; Forchhammer, Søren; Larsen, Knud J.

    2014-01-01

    In this work we study the properties of the optimal Probability Mass Function (PMF) of a discrete input to a general Multiple Input Multiple Output (MIMO) channel. We prove that when the input constellation is constructed as a Cartesian product of 1-dimensional constellations, the optimal PMF factorizes into the product of the marginal 1D PMFs. This confirms the conjecture made in [1], which allows for optimizing the input PMF efficiently when the rank of the MIMO channel grows. The proof is built upon the iterative Blahut-Arimoto algorithm. We show that if the initial PMF is factorized, the PMF
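
    The Blahut-Arimoto iteration the proof builds on is short enough to state directly. The sketch below computes the capacity-achieving input PMF of an arbitrary discrete memoryless channel from its transition matrix; in the paper's setting the transition matrix would be the conditional law of the output given the multidimensional constellation point.

```python
import numpy as np

def blahut_arimoto(P_y_given_x, n_iter=200):
    """Capacity-achieving input PMF of a discrete memoryless channel.

    P_y_given_x : (n_x, n_y) row-stochastic matrix of transition probabilities.
    Returns (optimal input PMF, capacity in bits).
    """
    n_x = P_y_given_x.shape[0]
    p = np.full(n_x, 1.0 / n_x)                  # start from the uniform input PMF
    for _ in range(n_iter):
        q_y = p @ P_y_given_x                    # induced output distribution
        # D(x) = sum_y P(y|x) * log(P(y|x) / q(y)), with the convention 0*log0 = 0
        ratio = np.where(P_y_given_x > 0, P_y_given_x / q_y, 1.0)
        D = np.sum(P_y_given_x * np.log(ratio), axis=1)
        p = p * np.exp(D)
        p /= p.sum()
    # Mutual information of the final iterate (equals capacity at convergence).
    q_y = p @ P_y_given_x
    ratio = np.where(P_y_given_x > 0, P_y_given_x / q_y, 1.0)
    D = np.sum(P_y_given_x * np.log(ratio), axis=1)
    return p, float(p @ D) / np.log(2.0)

# Example: binary symmetric channel with crossover probability 0.1 (capacity ~ 0.531 bit).
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
pmf, cap = blahut_arimoto(bsc)
print("optimal input PMF:", np.round(pmf, 4), " capacity:", round(cap, 4), "bits")
```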

  9. Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu [Texas Tech University (United States); Jablonowski, Christopher [Shell Exploration and Production Company (United States); Lake, Larry [University of Texas at Austin (United States)

    2017-04-15

    Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.
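
    The Monte Carlo side of the comparison can be caricatured as follows: enumerate a few development configurations, sample the uncertain parameters, evaluate an economic objective per scenario, and rank designs by expected value while keeping the full outcome distribution (the extra information the MC route provides). All numbers and the one-line NPV proxy are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_scenarios = 20_000

# Uncertain parameters (illustrative distributions).
reserves = rng.lognormal(mean=np.log(50), sigma=0.35, size=n_scenarios)   # MMbbl
price = rng.normal(70, 15, n_scenarios).clip(min=20)                      # $/bbl

# Candidate development configurations: (capex in $MM, fraction of reserves recovered).
designs = {"small": (300, 0.25), "medium": (550, 0.40), "large": (900, 0.50)}

results = {}
for name, (capex, recovery) in designs.items():
    npv = 0.6 * recovery * reserves * price - capex      # crude one-period NPV proxy
    results[name] = npv

for name, npv in results.items():
    print(f"{name:6s}  E[NPV] = {npv.mean():7.1f} $MM   P10/P90 = "
          f"{np.percentile(npv, 10):7.1f} / {np.percentile(npv, 90):7.1f}")

best = max(results, key=lambda k: results[k].mean())
print("design with highest expected NPV:", best)
```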

  10. Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs

    International Nuclear Information System (INIS)

    Ettehadtavakkol, Amin; Jablonowski, Christopher; Lake, Larry

    2017-01-01

    Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.

  11. Predictors of Morphosyntactic Growth in Typically Developing Toddlers: Contributions of Parent Input and Child Sex

    Science.gov (United States)

    Hadley, Pamela A.; Rispoli, Matthew; Fitzgerald, Colleen; Bahnsen, Alison

    2011-01-01

    Purpose: Theories of morphosyntactic development must account for between-child differences in morphosyntactic growth rates. This study extends Legate and Yang's (2007) theoretically motivated cross-linguistic approach to determine if variation in properties of parent input accounts for differences in the growth of tense productivity. Method:…

  12. Optimization of Heat Exchangers

    International Nuclear Information System (INIS)

    Catton, Ivan

    2010-01-01

    The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for intermediate loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that allows multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory (VAT) is used. VAT allows the conservation equations for mass, momentum, and energy to be solved point by point in a 3-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model heat exchanger characteristics (pumping power, temperatures, and cost) more quickly than traditional CFD or experiment, every geometric parameter can be optimized simultaneously. Using design of experiments (DOE) and genetic algorithms (GA) to optimize the results of the computer code will improve heat exchanger design.

  13. On Input Vector Representation for the SVR model of Reactor Core Loading Pattern Critical Parameters

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2008-01-01

    Determination and optimization of the reactor core loading pattern is an important factor in nuclear power plant operation. The goal is to minimize the amount of enriched uranium (fresh fuel) and burnable absorbers placed in the core, while maintaining nuclear power plant operational and safety characteristics. The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an optimization algorithm, and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. Recently, we proposed a new method for fast loading pattern evaluation based on a general robust regression model relying on state-of-the-art research in the field of machine learning. We employed the Support Vector Regression (SVR) technique. SVR is a supervised learning method in which model parameters are automatically determined by solving a quadratic optimization problem. The preliminary tests revealed a good potential of the SVR method for fast and accurate reactor core loading pattern evaluation. However, some aspects of model development are still unresolved. The main objective of the work reported in this paper was to conduct the additional tests and analyses required for full clarification of the SVR applicability for loading pattern evaluation. We focused our attention on the parameters defining the input vector, primarily its structure and complexity, and the parameters defining the kernel functions. All the tests were conducted on the NPP Krsko reactor core, using the MCRAC code for the calculation of reactor core loading pattern critical parameters. The tested input vector structures did not influence the accuracy of the models, suggesting that the initially tested input vector, consisting of the number of IFBAs and the k-inf at the beginning of the cycle, is adequate. The influence of kernel function specific parameters (σ for RBF kernel
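
    With the input vector fixed to the two features named above (number of IFBAs and k-inf at the beginning of the cycle), the regression step itself is a standard RBF-kernel SVR fit; the training data below are synthetic placeholders for MCRAC-generated loading-pattern evaluations.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for evaluated loading patterns: features = [number of IFBAs,
# k-inf at BOC], target = a critical core parameter proxy; values are illustrative.
n = 300
n_ifba = rng.integers(0, 2000, n)
kinf_boc = rng.uniform(1.00, 1.12, n)
target = 350 + 0.02 * n_ifba + 900 * (kinf_boc - 1.0) + rng.normal(0, 3, n)

X = np.column_stack([n_ifba, kinf_boc])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5, gamma="scale"))
model.fit(X, target)

query = np.array([[800, 1.06]])          # candidate loading pattern description
print("predicted critical parameter:", round(float(model.predict(query)[0]), 1))
```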

  14. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    Science.gov (United States)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H2(15)O or C(15)O2, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H2(15)O PET image as a completely non-invasive approach. Our technique consists of a formula that expresses the input in terms of a tissue curve and a rate constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences between the inputs reproduced from the extracted tissue curves. The estimated rates were then used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 29) and compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs reproduced the measured ones well. The difference between the CBF values calculated using the two methods was small. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.

  15. A nonlinear optimal control approach to stabilization of a macroeconomic development model

    Science.gov (United States)

    Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.

    2017-11-01

    A nonlinear optimal (H-infinity) control approach is proposed for the problem of stabilization of the dynamics of a macroeconomic development model that is known as the Grossman-Helpman model of endogenous product cycles. The dynamics of the macroeconomic development model is divided in two parts. The first one describes economic activities in a developed country and the second part describes variation of economic activities in a country under development which tries to modify its production so as to serve the needs of the developed country. The article shows that through control of the macroeconomic model of the developed country, one can finally control the dynamics of the economy in the country under development. The control method through which this is achieved is the nonlinear H-infinity control. The macroeconomic model for the country under development undergoes approximate linearization round a temporary operating point. This is defined at each time instant by the present value of the system's state vector and the last value of the control input vector that was exerted on it. The linearization is based on Taylor series expansion and the computation of the associated Jacobian matrices. For the linearized model an H-infinity feedback controller is computed. The controller's gain is calculated by solving an algebraic Riccati equation at each iteration of the control method. The asymptotic stability of the control approach is proven through Lyapunov analysis. This assures that the state variables of the macroeconomic model of the country under development will finally converge to the designated reference values.
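
    At each linearization point the controller gain comes from a Riccati equation; the snippet below shows that step for a generic linearized pair (A, B) using SciPy's algebraic Riccati solver in the LQR-like form K = R^(-1) B' P. The full H-infinity version adds the disturbance channel and an attenuation level, which are omitted here, and the matrices are placeholders rather than the Grossman-Helpman linearization.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder Jacobians of the linearized dynamics at the current operating point
# (state: two aggregate activity variables; one policy input).
A = np.array([[0.2, -0.6],
              [0.4, -0.1]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([2.0, 1.0])      # state weighting
R = np.array([[0.5]])        # control-effort weighting

P = solve_continuous_are(A, B, Q, R)       # solves A'P + PA - P B R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)            # feedback gain, u = -K (x - x_ref)

closed_loop_poles = np.linalg.eigvals(A - B @ K)
print("gain K =", np.round(K, 3))
print("closed-loop eigenvalues:", np.round(closed_loop_poles, 3))
```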

  16. An Optimal Lower Eigenvalue System

    Directory of Open Access Journals (Sweden)

    Yingfan Liu

    2011-01-01

    Full Text Available An optimal lower eigenvalue system is studied, and main theorems including a series of necessary and sufficient conditions concerning existence and a Lipschitz continuity result concerning stability are obtained. As applications, solvability results for some von-Neumann-type input-output inequalities, growth, and optimal growth factors, as well as Leontief-type balanced and optimal balanced growth paths, are also obtained.

  17. Dynamic Output Feedback Robust MPC with Input Saturation Based on Zonotopic Set-Membership Estimation

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2016-01-01

    Full Text Available For quasi-linear parameter varying (quasi-LPV) systems with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) with the consideration of input saturation is investigated. The saturated dynamic output feedback controller is represented by a convex hull involving the actual dynamic output controller and an introduced auxiliary controller. By taking both the actual output feedback controller and the auxiliary controller with a parameter-dependent form, the main optimization problem can be formulated as convex optimization. The consideration of input saturation in the main optimization problem reduces the conservatism of dynamic output feedback controller design. The estimation error set and bounded disturbance are represented by zonotopes and refreshed by zonotopic set-membership estimation. Compared with the previous results, the proposed algorithm can not only guarantee the recursive feasibility of the optimization problem, but also improve the control performance at the cost of higher computational burden. A nonlinear continuous stirred tank reactor (CSTR) example is given to illustrate the effectiveness of the approach.

  18. Data-Driven Zero-Sum Neuro-Optimal Control for a Class of Continuous-Time Unknown Nonlinear Systems With Disturbance Using ADP.

    Science.gov (United States)

    Wei, Qinglai; Song, Ruizhuo; Yan, Pengfei

    2016-02-01

    This paper is concerned with a new data-driven zero-sum neuro-optimal control problem for continuous-time unknown nonlinear systems with disturbance. According to the input-output data of the nonlinear system, an effective recurrent neural network is introduced to reconstruct the dynamics of the nonlinear system. Considering the system disturbance as a control input, a two-player zero-sum optimal control problem is established. Adaptive dynamic programming (ADP) is developed to obtain the optimal control under the worst case of the disturbance. Three single-layer neural networks, including one critic and two action networks, are employed to approximate the performance index function, the optimal control law, and the disturbance, respectively, for facilitating the implementation of the ADP method. Convergence properties of the ADP method are developed to show that the system state will converge to a finite neighborhood of the equilibrium. The weight matrices of the critic and the two action networks are also convergent to finite neighborhoods of their optimal ones. Finally, the simulation results will show the effectiveness of the developed data-driven ADP methods.

  19. Short-Term Wind Speed Forecasting Using Support Vector Regression Optimized by Cuckoo Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2015-01-01

    Full Text Available This paper develops an effective intelligent model to forecast short-term wind speed series. A hybrid forecasting technique is proposed based on the recurrence plot (RP) and optimized support vector regression (SVR). Wind, caused by the interaction of meteorological systems, is extremely unsteady and difficult to forecast. To understand the wind system, the wind speed series is analyzed using the RP. Then, the SVR model is employed to forecast wind speed, in which the input variables are selected by the RP, and two crucial parameters, the penalty factor and the gamma of the RBF kernel function, are optimized by various optimization algorithms. The optimization algorithms are the genetic algorithm (GA), the particle swarm optimization algorithm (PSO), and the cuckoo optimization algorithm (COA). Finally, the optimized SVR models, including COA-SVR, PSO-SVR, and GA-SVR, are evaluated based on several criteria and a hypothesis test. The experimental results show that (1) analysis of the RP reveals that wind speed has predictability on a short-term time scale, (2) the performance of the COA-SVR model is superior to that of the PSO-SVR and GA-SVR methods, especially for the jumping samples, and (3) the COA-SVR method is statistically robust in multi-step-ahead prediction and can be applied to practical wind farm applications.
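
    The core of the hybrid model, an RBF-kernel SVR whose penalty factor and kernel width are tuned against a validation error, can be sketched as follows. A plain random search is used here purely as a stand-in for the cuckoo, particle swarm and genetic algorithms compared in the record, and the parameter ranges and data split are arbitrary assumptions.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics import mean_squared_error

        def tune_svr(x_train, y_train, x_val, y_val, n_trials=200, seed=0):
            # Search the penalty factor C and the RBF width gamma by minimizing
            # the validation RMSE; any metaheuristic (COA, PSO, GA) could take
            # the place of this simple random search.
            rng = np.random.default_rng(seed)
            best_model, best_rmse = None, np.inf
            for _ in range(n_trials):
                C = 10 ** rng.uniform(-1, 3)        # 0.1 ... 1000 (assumed range)
                gamma = 10 ** rng.uniform(-3, 1)    # 0.001 ... 10 (assumed range)
                model = SVR(kernel="rbf", C=C, gamma=gamma).fit(x_train, y_train)
                rmse = mean_squared_error(y_val, model.predict(x_val)) ** 0.5
                if rmse < best_rmse:
                    best_model, best_rmse = model, rmse
            return best_model, best_rmse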

  20. Fuzzy portfolio optimization advances in hybrid multi-criteria methodologies

    CERN Document Server

    Gupta, Pankaj; Inuiguchi, Masahiro; Chandra, Suresh

    2014-01-01

    This monograph presents a comprehensive study of portfolio optimization, an important area of quantitative finance. Considering that the information available in financial markets is incomplete and that the markets are affected by vagueness and ambiguity, the monograph deals with fuzzy portfolio optimization models. At first, the book makes the reader familiar with basic concepts, including the classical mean–variance portfolio analysis. Then, it introduces advanced optimization techniques and applies them for the development of various multi-criteria portfolio optimization models in an uncertain environment. The models are developed considering both the financial and non-financial criteria of investment decision making, and the inputs from the investment experts. The utility of these models in practice is then demonstrated using numerical illustrations based on real-world data, which were collected from one of the premier stock exchanges in India. The book addresses both academics and professionals pursuin...

  1. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    Science.gov (United States)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology, and a practical example, for the application of an optimization process to select earthquake scenarios which best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of the region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach yields a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between the hazard curves driven by the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could substantially reduce the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment: the reduced set is representative of the contributions of all possible earthquakes, yet it requires far less computational effort.
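
    A minimal sketch of the scenario-selection idea is given below, assuming the hazard curves are discretized into a set of site/return-period points and that each candidate scenario's contribution to each point (per unit annual rate) is known. It uses the PuLP modelling library; the variable names, the big-M bound, the error weights and the cap on the number of selected scenarios are illustrative assumptions, not the paper's mixed-integer formulation.

        import pulp

        def reduce_scenarios(contrib, target, max_scenarios, weights=None):
            # contrib[s][p]: contribution of candidate scenario s (per unit annual
            #                rate) to hazard-curve point p
            # target[p]:     hazard value to be reproduced at point p
            S, P = range(len(contrib)), range(len(target))
            w = weights or [1.0] * len(target)

            prob = pulp.LpProblem("scenario_reduction", pulp.LpMinimize)
            pick = pulp.LpVariable.dicts("pick", S, cat="Binary")   # scenario selected?
            rate = pulp.LpVariable.dicts("rate", S, lowBound=0)     # assigned annual rate
            dev = pulp.LpVariable.dicts("dev", P, lowBound=0)       # |error| at each point

            prob += pulp.lpSum(w[p] * dev[p] for p in P)            # weighted total error
            for p in P:
                fit = pulp.lpSum(rate[s] * contrib[s][p] for s in S)
                prob += fit - target[p] <= dev[p]
                prob += target[p] - fit <= dev[p]
            for s in S:
                prob += rate[s] <= 10.0 * pick[s]   # rate allowed only if scenario is picked
            prob += pulp.lpSum(pick[s] for s in S) <= max_scenarios

            prob.solve(pulp.PULP_CBC_CMD(msg=False))
            return [s for s in S if pick[s].value() and pick[s].value() > 0.5]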

  2. Optimizing Fuzzy Rule Base for Illumination Compensation in Face Recognition using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Bima Sena Bayu Dewantara

    2014-12-01

    Full Text Available Fuzzy rule optimization is a challenging step in the development of a fuzzy model. A simple two-input fuzzy model may have thousands of combinations of fuzzy rules when it deals with a large number of input variations, and determining the fuzzy rules intuitively or by trial and error is very difficult. This paper addresses the problem of optimizing a fuzzy rule base using a genetic algorithm to compensate for illumination effects in face recognition. Since uneven illumination degrades the performance of face recognition, its effects must be compensated. We have developed a novel algorithm based on a reflectance model to compensate for the effect of illumination in human face recognition. We build a pair of models from a single image and reason over those models using fuzzy logic; the fuzzy rules are then optimized using the genetic algorithm. This approach reduces the computation cost while still maintaining high performance. Based on the experimental results, we show that our algorithm is feasible for recognizing the desired person under variable lighting conditions with a faster computation time. Keywords: Face recognition, harsh illumination, reflectance model, fuzzy, genetic algorithm

  3. Tourism and Economic Development in Romania: Input-Output Analysis Perspective

    Directory of Open Access Journals (Sweden)

    MARIUS SURUGIU

    2010-12-01

    Full Text Available Tourism provides many opportunities for sustainable economic development. At the local level, through its triggering effect, it can act as a factor of economic recovery by putting local material and human potential to good use. Because it is predominantly a final-demand branch, tourism exerts a large impact on the national economy through the vector of final demand, for which the possible and/or desirable future variant is an economic-social demand to be satisfied by variants of total output. Using the input-output (IO) model, the matrix of direct technical coefficients (aij) was compared with the matrix of total requirement coefficients (bij), with the help of which the direct and propagated effects of this activity were determined through the indicators defining the dimensions of the national economy.
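
    The comparison of direct and total coefficients described in this record rests on the standard Leontief relation B = (I - A)^-1, under which the total (direct plus propagated) output needed to satisfy a final-demand vector f is x = B f. The numpy sketch below illustrates that calculation with made-up toy coefficients; the numbers are not taken from the Romanian IO tables.

        import numpy as np

        def total_requirements(A):
            # Leontief inverse: b_ij gives the total (direct + propagated) output
            # of sector i needed per unit of final demand for sector j.
            I = np.eye(A.shape[0])
            return np.linalg.inv(I - A)

        # Toy 3-sector example (coefficients are illustrative only).
        A = np.array([[0.10, 0.05, 0.20],   # a_ij: direct input of sector i per unit output of j
                      [0.15, 0.10, 0.05],
                      [0.05, 0.20, 0.10]])
        f = np.array([100.0, 50.0, 80.0])   # final demand, e.g. a tourism expenditure vector
        B = total_requirements(A)
        x = B @ f                           # total output required to satisfy f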

  4. A new approach of optimal control for a class of continuous-time chaotic systems by an online ADP algorithm

    Science.gov (United States)

    Song, Rui-Zhuo; Xiao, Wen-Dong; Wei, Qing-Lai

    2014-05-01

    We develop an online adaptive dynamic programming (ADP) based optimal control scheme for continuous-time chaotic systems. The idea is to use the ADP algorithm to obtain the optimal control input that makes the performance index function reach an optimum. The expression of the performance index function for the chaotic system is first presented. The online ADP algorithm is presented to achieve optimal control. In the ADP structure, neural networks are used to construct a critic network and an action network, which can obtain an approximate performance index function and the control input, respectively. It is proven that the critic parameter error dynamics and the closed-loop chaotic systems are uniformly ultimately bounded exponentially. Our simulation results illustrate the performance of the established optimal control method.

  5. Comparative evaluation of various optimization methods and the development of an optimization code system SCOOP

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1979-11-01

    Thirty-two programs for linear and nonlinear optimization problems with or without constraints have been developed or incorporated, and their stability, convergence and efficiency have been examined. On the basis of these evaluations, the first version of the optimization code system SCOOP-I has been completed. SCOOP-I is designed to be an efficient, reliable, useful and flexible system for general applications. The system enables one to find the global optimum for a wide class of problems by selecting the most appropriate of the optimization methods built into it. (author)

  6. Coil Optimization for HTS Machines

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Abrahamsen, Asger Bech

    An optimization approach of HTS coils in HTS synchronous machines (SM) is presented. The optimization is aimed at high power SM suitable for direct driven wind turbines applications. The optimization process was applied to a general radial flux machine with a peak air gap flux density of ~3T...... is suitable for which coil segment is presented. Thus, the performed study gives valuable input for the coil design of HTS machines ensuring optimal usage of HTS tapes....

  7. Optimization in underground mine planning - developments and opportunities

    OpenAIRE

    Musingwini, C.

    2016-01-01

    The application of mining-specific and generic optimization techniques in the mining industry is deeply rooted in the discipline of operations research (OR). OR has its origins in the British Royal Air Force and Army around the early 1930s. Its development continued during and after World War II. The application of OR techniques to optimization in the mining industry started to emerge in the early 1960s. Since then, optimization techniques have been applied to solve widely different mine plan...

  8. Nitrogen input inventory in the Nooksack-Abbotsford-Sumas ...

    Science.gov (United States)

    Nitrogen (N) is an essential biological element, so optimizing N use for food production while minimizing the release of N and co-pollutants to the environment is an important challenge. The Nooksack-Abbotsford-Sumas Transboundary (NAS) Region, spanning a portion of the western interface of British Columbia, Washington state, and the Lummi Nation and the Nooksack Tribe, supports agriculture, fisheries, diverse wildlife, and vibrant urban areas. Groundwater nitrate contamination affects thousands of households in this region. Fisheries and air quality are also affected, including periodic closures of shellfish harvesting. To reduce the release of N to the environment, successful approaches are needed that partner all stakeholders with appropriate institutions to integrate science, outreach and management efforts. Our goal is to determine the distribution and quantities of N inventories of the watershed. This work synthesizes publicly available data on N sources including deposition, sewage and septic inputs, fertilizer and manure applications, marine-derived N from salmon, and more. The information on cross-boundary N inputs to the landscape will be coupled with stream monitoring data and existing knowledge about N inputs and exports from the watershed to estimate the N residual and inform N management in the search for environmentally and economically viable and effective solutions. We will estimate the N inputs into the NAS region and transfers within the region.

  9. Input-output analysis of high-speed turbulent jet noise

    Science.gov (United States)

    Jeun, Jinah; Nichols, Joseph W.

    2015-11-01

    We apply input-output analysis to predict and understand the aeroacoustics of high-speed isothermal turbulent jets. We consider axisymmetric linear perturbations about Reynolds-averaged Navier-Stokes solutions of ideally expanded turbulent jets with Mach numbers 0.6 parabolized stability equations (PSE), and this mode dominates the response. For subsonic jets, however, the singular values indicate that the contributions of suboptimal modes to noise generation are nearly equal to that of the optimal mode, explaining why PSE misses some of the farfield sound in this case. Finally, high-fidelity large eddy simulation (LES) is used to assess the prevalence of suboptimal modes in the unsteady data. By projecting LES data onto the corresponding input modes, the weighted gain of each mode is examined.

  10. Three Essays on Robust Optimization of Efficient Portfolios

    OpenAIRE

    Liu, Hao

    2013-01-01

    The mean-variance approach was first proposed by Markowitz (1952), and laid the foundation of modern portfolio theory. Despite its theoretical appeal, the practical implementation of optimized portfolios is strongly restricted by the fact that the two inputs, the means and the covariance matrix of asset returns, are unknown and have to be estimated from available historical information. Due to the estimation risk inherited from the inputs, desired properties of estimated optimal portfolios are ...

  11. Optimization Models and Methods Developed at the Energy Systems Institute

    OpenAIRE

    N.I. Voropai; V.I. Zorkaltsev

    2013-01-01

    The paper briefly presents some optimization models of energy system operation and expansion that have been created at the Energy Systems Institute of the Siberian Branch of the Russian Academy of Sciences. Consideration is given to the optimization models of energy development in Russia, a software package intended for the analysis of power system reliability, and a model of flow distribution in hydraulic systems. A general idea of the optimization methods developed at the Energy Systems Institute...

  12. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Murarasu, Alin

    2012-12-01

    The well-known power wall, which led to multi-core processors, requires special techniques for speeding up applications. In this sense, parallelization plays a crucial role. Besides standard serial optimizations, techniques such as input specialization can also bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation is an inherently hierarchical method of interpolation employed for example in computational steering applications for decompressing high-dimensional simulation data. In this context, improving the speedup is essential for real-time visualization. Using input specialization, we report a speedup of up to 9x over the non-specialized version. The paper covers the steps we took to reach this speedup by means of input adaptivity. Our algorithms will be integrated into fastsg, a library for fast sparse grid interpolation. © 2012 IEEE.

  13. Design of LQG Controller for Active Suspension without Considering Road Input Signals

    Directory of Open Access Journals (Sweden)

    Hui Pang

    2017-01-01

    Full Text Available As the road conditions are completely unknown when designing a suspension controller, an improved linear quadratic Gaussian (LQG) controller is proposed for an active suspension system without considering road input signals. The main purpose is to comprehensively optimize the vehicle body acceleration, the pitching angular acceleration, the suspension displacement, and the tire dynamic deflection, and at the same time to extend the applicability of the LQG controller. Firstly, the half-vehicle and road-input mathematical models of an active suspension system are established, with the weight coefficients of each evaluation indicator optimized using a genetic algorithm (GA). Then, a simulation model is built in the Matlab/Simulink environment. Finally, a simulation comparison is conducted to illustrate that the proposed LQG controller obtains better comprehensive performance of the vehicle suspension system and improves riding comfort and handling safety compared to the conventional one.

  14. On the design of compliant mechanisms using topology optimization

    DEFF Research Database (Denmark)

    Sigmund, Ole

    1997-01-01

    This paper presents a method for optimal design of compliant mechanism topologies. The method is based on continuum-type topology optimization techniques and finds the optimal compliant mechanism topology within a given design domain and a given position and direction of input and output forces. By constraining the allowed displacement at the input port, it is possible to control the maximum stress level in the compliant mechanism. The ability of the design method to find a mechanism with complex output behavior is demonstrated by several examples. Some of the optimal mechanism topologies have been manufactured, both in macroscale (hand-size) made in Nylon, and in microscale (

  15. Approximating the constellation constrained capacity of the MIMO channel with discrete input

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov; Forchhammer, Søren; Larsen, Knud J.

    2015-01-01

    In this paper the capacity of a Multiple Input Multiple Output (MIMO) channel is considered, subject to average power constraint, for multi-dimensional discrete input, in the case when no channel state information is available at the transmitter. We prove that when the constellation size grows, t...... for the equivalent orthogonal channel, obtained by the singular value decomposition. Furthermore, lower bounds on the constrained capacity are derived for the cases of square and tall MIMO matrix, by optimizing the constellation for the equivalent channel, obtained by QR decomposition....

  16. A special role for binocular visual input during development and as a component of occlusion therapy for treatment of amblyopia.

    Science.gov (United States)

    Mitchell, Donald E

    2008-01-01

    To review work on animal models of deprivation amblyopia that points to a special role for binocular visual input in the development of spatial vision and as a component of occlusion (patching) therapy for amblyopia. The studies reviewed employ behavioural methods to measure the effects of various early experiential manipulations on the development of the visual acuity of the two eyes. Short periods of concordant binocular input, if continuous, can offset much longer daily periods of monocular deprivation to allow the development of normal visual acuity in both eyes. It appears that the visual system does not weigh all visual input equally in terms of its ability to impact on the development of vision but instead places greater weight on concordant binocular exposure. Experimental models of patching therapy for amblyopia imposed on animals in which amblyopia had been induced by a prior period of early monocular deprivation, indicate that the benefits of patching therapy may be only temporary and decline rapidly after patching is discontinued. However, when combined with critical amounts of binocular visual input each day, the benefits of patching can be both heightened and made permanent. Taken together with demonstrations of retained binocular connections in the visual cortex of monocularly deprived animals, a strong argument is made for inclusion of specific training of stereoscopic vision for part of the daily periods of binocular exposure that should be incorporated as part of any patching protocol for amblyopia.

  17. Optimal control of stretching process of flexible solar arrays on spacecraft based on a hybrid optimization strategy

    Directory of Open Access Journals (Sweden)

    Qijia Yao

    2017-07-01

    Full Text Available The optimal control of a multibody spacecraft during the stretching process of its solar arrays is investigated, and a hybrid optimization strategy based on the Gauss pseudospectral method (GPM) and the direct shooting method (DSM) is presented. First, the elastic deformation of the flexible solar arrays is described approximately by the assumed mode method, and a dynamic model is established from the second Lagrangian equation. Then, the nonholonomic motion planning problem is transformed into a nonlinear programming problem by using the GPM. By using a small number of Legendre-Gauss (LG) points, initial values of the state and control variables are obtained. A serial optimization framework is adopted to obtain the approximate optimal solution from a feasible solution. Finally, the control variables are discretized at the LG points, and the precise optimal control inputs are obtained by the DSM. The optimal trajectory of the system can then be obtained through numerical integration. The numerical simulations show that the stretching process of the solar arrays is stable and free of detours, and that the control inputs satisfy the various constraints of actual operating conditions. The results indicate that the method is effective and robust. Keywords: Motion planning, Multibody spacecraft, Optimal control, Gauss pseudospectral method, Direct shooting method

  18. The Optimal Steering Control System using Imperialist Competitive Algorithm on Vehicles with Steer-by-Wire System

    Directory of Open Access Journals (Sweden)

    F. Hunaini

    2015-03-01

    Full Text Available Steer-by-wire is an electrical steering system for vehicles which, with the development of an optimal control system, is expected to improve the dynamic performance of the vehicle. This paper aims to optimize the control systems, namely a Fuzzy Logic Control (FLC) and a Proportional, Integral and Derivative (PID) control, on the vehicle steering system using the Imperialist Competitive Algorithm (ICA). The control systems are built in cascade: the FLC suppresses errors in the lateral motion and the PID control minimizes the error in the yaw motion of the vehicle. The FLC has two inputs (error and delta error) and a single output. Each input and output consists of three membership functions (MF), a triangular one for the linguistic term "zero" and two trapezoidal ones for the linguistic terms "negative" and "positive". In order for the controller to work optimally, each MF is optimized using the ICA to obtain the most appropriate position and width. Likewise, in the PID control, the constants of the proportional, integral and derivative terms are also optimized using the ICA, so that six parameters of the control system are simultaneously optimized. Simulations are performed on a vehicle model with 10 degrees of freedom (DOF); the plant input is the steering variable expressed as the desired trajectory, and the plant outputs are the lateral and yaw motions. The simulation results show that the FLC-PID control system optimized using the ICA can keep the vehicle on the desired trajectory with lower error and at higher speed limits than the same system optimized with Particle Swarm Optimization (PSO).

  19. Patient input into the development and enhancement of ED discharge instructions: a focus group study.

    Science.gov (United States)

    Buckley, Barbara A; McCarthy, Danielle M; Forth, Victoria E; Tanabe, Paula; Schmidt, Michael J; Adams, James G; Engel, Kirsten G

    2013-11-01

    Previous research indicates that patients have difficulty understanding ED discharge instructions; these findings have important implications for adherence and outcomes. The objective of this study was to obtain direct patient input to inform specific revisions to discharge documents created through a literacy-guided approach and to identify common themes within patient feedback that can serve as a framework for the creation of discharge documents in the future. Based on extensive literature review and input from ED providers, subspecialists, and health literacy and communication experts, discharge instructions were created for 5 common ED diagnoses. Participants were recruited from a federally qualified health center to participate in a series of 5 focus group sessions. Demographic information was obtained and a Rapid Estimate of Adult Literacy in Medicine (REALM) assessment was performed. During each of the 1-hour focus group sessions, participants reviewed discharge instructions for 1 of 5 diagnoses. Participants were asked to provide input into the content, organization, and presentation of the documents. Using qualitative techniques, latent and manifest content analysis was performed to code for emergent themes across all 5 diagnoses. Fifty-seven percent of participants were female and the average age was 32 years. The average REALM score was 57.3. Through qualitative analysis, 8 emergent themes were identified from the focus groups. Patient input provides meaningful guidance in the development of diagnosis-specific discharge instructions. Several themes and patterns were identified, with broad significance for the design of ED discharge instructions. Copyright © 2013 Emergency Nurses Association. Published by Mosby, Inc. All rights reserved.

  20. Wireless Power Transmission via Sheet Medium Using Automatic Phase Adjustment of Multiple Inputs

    Science.gov (United States)

    Matsuda, Takashi; Oota, Toshifumi; Kado, Youiti; Zhang, Bing

    Wireless power transmission via a sheet medium is a novel physical form of communication that utilizes a surface as the medium to provide both data and power transmission services. To efficiently transmit a relatively large amount of electric power (several watts), we have developed a wireless power transmission system via a sheet medium that concentrates the electric power on a specific spot by using phase control of multiple inputs. However, finding the optimal phases of the multiple inputs that make the microwave converge on a specific spot in the sheet medium requires prior knowledge of the device's position and a pre-experiment measuring the output power. In the wireless communication area, it is known that the retrodirective array scheme can efficiently transmit power in a self-phasing manner, using the pilot signals sent by the client devices. In this paper, we apply the retrodirective array scheme to the wireless power transmission system via the sheet medium, and propose a power transmission scheme using automatic phase adjustment of the multiple inputs. To confirm the effectiveness of the proposed scheme, we evaluate its performance by computer simulation and by measurement in a realistic setting. Both results show that the proposed scheme achieves retrodirectivity for wireless power transmission via the sheet medium.
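
    The self-phasing idea can be illustrated with the small numpy sketch below: the phase of the pilot signal received at each input port is measured, and each port then transmits with the conjugate phase so that the contributions add coherently at the device that sent the pilot. This is only a schematic of the retrodirective principle as described in the record, not the authors' sheet-medium implementation.

        import numpy as np

        def conjugate_phases(pilot_at_ports):
            # pilot_at_ports: complex pilot-signal samples measured at each input port.
            # Transmitting with the conjugate phase makes all paths add in phase
            # back at the device that sent the pilot.
            return np.exp(-1j * np.angle(pilot_at_ports))

        # Illustrative check: random propagation phases from 4 ports to the device.
        rng = np.random.default_rng(0)
        h = np.exp(1j * rng.uniform(0, 2 * np.pi, size=4))   # port-to-device channel phases
        weights = conjugate_phases(h)                        # pilot seen at ports ~ h (reciprocity)
        power_at_device = abs(np.sum(weights * h)) ** 2      # coherent combining, = 16 here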

  1. China’s language input system in the digital age affects children’s reading development

    OpenAIRE

    Tan, Li Hai; Xu, Min; Chang, Chun Qi; Siok, Wai Ting

    2012-01-01

    Written Chinese as a logographic system was developed over 3,000 y ago. Historically, Chinese children have learned to read by learning to associate the visuo-graphic properties of Chinese characters with lexical meaning, typically through handwriting. In recent years, however, many Chinese children have learned to use electronic communication devices based on the pinyin input method, which associates phonemes and English letters with characters. When children use pinyin to key in letters, th...

  2. Experimental reversion of the optimal quantum cloning and flipping processes

    International Nuclear Information System (INIS)

    Sciarrino, Fabio; Secondi, Veronica; De Martini, Francesco

    2006-01-01

    The quantum cloner machine maps an unknown arbitrary input qubit into two optimal clones and one optimal flipped qubit. By combining linear and nonlinear optical methods we experimentally implement a scheme that, after the cloning transformation, restores the original input qubit in one of the output channels, by using local measurements, classical communication, and feedforward. This nonlocal method demonstrates how the information on the input qubit can be restored after the cloning process. The realization of the reversion process is expected to find useful applications in the field of modern multipartite quantum cryptography

  3. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    Science.gov (United States)

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  4. Adaptive Actor-Critic Design-Based Integral Sliding-Mode Control for Partially Unknown Nonlinear Systems With Input Disturbances.

    Science.gov (United States)

    Fan, Quan-Yong; Yang, Guang-Hong

    2016-01-01

    This paper is concerned with the problem of integral sliding-mode control for a class of nonlinear systems with input disturbances and unknown nonlinear terms through the adaptive actor-critic (AC) control method. The main objective is to design a sliding-mode control methodology based on the adaptive dynamic programming (ADP) method, so that the closed-loop system with time-varying disturbances is stable and the nearly optimal performance of the sliding-mode dynamics can be guaranteed. In the first step, a neural network (NN)-based observer and a disturbance observer are designed to approximate the unknown nonlinear terms and estimate the input disturbances, respectively. Based on the NN approximations and disturbance estimations, the discontinuous part of the sliding-mode control is constructed to eliminate the effect of the disturbances and attain the expected equivalent sliding-mode dynamics. Then, the ADP method with AC structure is presented to learn the optimal control for the sliding-mode dynamics online. Reconstructed tuning laws are developed to guarantee the stability of the sliding-mode dynamics and the convergence of the weights of critic and actor NNs. Finally, the simulation results are presented to illustrate the effectiveness of the proposed method.

  5. DOG -II input generator program for DOT3.5 code

    International Nuclear Information System (INIS)

    Hayashi, Katsumi; Handa, Hiroyuki; Yamada, Koubun; Kamogawa, Susumu; Takatsu, Hideyuki; Koizumi, Kouichi; Seki, Yasushi

    1992-01-01

    DOT3.5 is widely used for radiation transport analysis of fission reactors, fusion experimental facilities and particle accelerators. We developed an input generator program for the DOT3.5 code with the aim of preparing input data effectively. The former program, DOG, was developed and used internally in Hitachi Engineering Company. In this new version, DOG-II, the limitation on R-Θ geometry has been removed. All input data are created interactively on a color display without consulting the DOT3.5 manual, and the geometry-related input is created easily without calculating precise curved mesh points. By using DOG-II, reliable input data for the DOT3.5 code are obtained easily and quickly

  6. ANN-GA based optimization of a high ash coal-fired supercritical power plant

    International Nuclear Information System (INIS)

    Suresh, M.V.J.J.; Reddy, K.S.; Kolar, Ajit Kumar

    2011-01-01

    Highlights: → Neuro-genetic power plant optimization is found to be an efficient methodology. → Advantage of neuro-genetic algorithm is the possibility of on-line optimization. → Exergy loss in combustor indicates the effect of coal composition on efficiency. -- Abstract: The efficiency of coal-fired power plant depends on various operating parameters such as main steam/reheat steam pressures and temperatures, turbine extraction pressures, and excess air ratio for a given fuel. However, simultaneous optimization of all these operating parameters to achieve the maximum plant efficiency is a challenging task. This study deals with the coupled ANN and GA based (neuro-genetic) optimization of a high ash coal-fired supercritical power plant in Indian climatic condition to determine the maximum possible plant efficiency. The power plant simulation data obtained from a flow-sheet program, 'Cycle-Tempo' is used to train the artificial neural network (ANN) to predict the energy input through fuel (coal). The optimum set of various operating parameters that result in the minimum energy input to the power plant is then determined by coupling the trained ANN model as a fitness function with the genetic algorithm (GA). A unit size of 800 MWe currently under development in India is considered to carry out the thermodynamic analysis based on energy and exergy. Apart from optimizing the design parameters, the developed model can also be used for on-line optimization when quick response is required. Furthermore, the effect of various coals on the thermodynamic performance of the optimized power plant is also determined.
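
    The coupling of a trained surrogate with a genetic search can be pictured with the sketch below: an MLP regressor (standing in for the ANN trained on 'Cycle-Tempo' simulation data) predicts the fuel energy input from the operating parameters, and a very small genetic algorithm searches within the parameter bounds for the setting that minimizes that prediction. The population size, mutation scheme and the commented-out bounds are arbitrary assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def ga_minimize(surrogate, bounds, pop=40, gens=100, seed=0):
            # Minimize the surrogate's predicted energy input over the operating
            # parameters (steam pressures/temperatures, extraction pressures,
            # excess air ratio, ...), each bounded by (low, high).
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            X = rng.uniform(lo, hi, size=(pop, len(bounds)))
            for _ in range(gens):
                fit = surrogate.predict(X)
                parents = X[np.argsort(fit)[: pop // 2]]                 # selection
                children = parents + rng.normal(0, 0.05 * (hi - lo),
                                                size=parents.shape)      # mutation
                X = np.clip(np.vstack([parents, children]), lo, hi)
            return X[np.argmin(surrogate.predict(X))]

        # Hypothetical usage with a trained surrogate (names and bounds assumed):
        # surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000)
        # surrogate.fit(simulation_inputs, simulation_energy_input)   # Cycle-Tempo data
        # optimum = ga_minimize(surrogate, bounds=[(16e6, 30e6), (800, 900)])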

  7. Optimal Airport Surface Traffic Planning Using Mixed-Integer Linear Programming

    Directory of Open Access Journals (Sweden)

    P. C. Roling

    2008-01-01

    Full Text Available We describe an ongoing research effort pertaining to the development of a surface traffic automation system that will help controllers to better coordinate surface traffic movements related to arrival and departure traffic. More specifically, we describe the concept for a taxi-planning support tool that aims to optimize the routing and scheduling of airport surface traffic in such a way as to deconflict the taxi plans while optimizing delay, total taxi-time, or some other airport efficiency metric. Certain input parameters related to resource demand, such as the expected landing times and the expected pushback times, are rather difficult to predict accurately. Due to uncertainty in the input data driving the taxi-planning process, the taxi-planning tool is designed such that it produces solutions that are robust to uncertainty. The taxi-planning concept presented herein, which is based on mixed-integer linear programming, is designed such that it is able to adapt to perturbations in these input conditions, as well as to account for failure in the actual execution of surface trajectories. The capabilities of the tool are illustrated in a simple hypothetical airport.

  8. Remote media vision-based computer input device

    Science.gov (United States)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

    In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  9. Experience of web-complex development of NPP thermophysical optimization

    International Nuclear Information System (INIS)

    Nikolaev, M.A.

    2014-01-01

    The current state of development of a computation web complex (CWC) for thermophysical optimization of nuclear power plants is described. The main databases of the CWC are implemented on the MySQL platform. The CWC information architecture, its functionality, the optimization algorithms and the CWC user interface are discussed.

  10. Iterative algorithms for the input and state recovery from the approximate inverse of strictly proper multivariable systems

    Science.gov (United States)

    Chen, Liwen; Xu, Qiang

    2018-02-01

    This paper proposes new iterative algorithms for recovering the unknown input and state from the system outputs using an approximate inverse of a strictly proper linear time-invariant (LTI) multivariable system. A unique advantage over previous system-inverse algorithms is that output differentiation is not required. The approximate system inverse is stable thanks to the systematic optimal design of a dummy feedthrough D matrix in the state-space model via feedback stabilization. The optimal design procedure avoids trial and error in identifying such a D matrix, which saves a tremendous amount of effort. From the derived and proved convergence criteria, such an optimal D matrix also guarantees the convergence of the algorithms. Illustrative examples show significant improvement in reference input signal tracking by the algorithms and the optimal D design over non-iterative counterparts on controllable or stabilizable LTI systems, respectively. Case studies of two Boeing-767 aircraft aerodynamic models further demonstrate the capability of the proposed methods.

  11. A robust hybrid model integrating enhanced inputs based extreme learning machine with PLSR (PLSR-EIELM) and its application to intelligent measurement.

    Science.gov (United States)

    He, Yan-Lin; Geng, Zhi-Qiang; Xu, Yuan; Zhu, Qun-Xiong

    2015-09-01

    In this paper, a robust hybrid model integrating an enhanced-inputs-based extreme learning machine with partial least squares regression (PLSR-EIELM) is proposed. The proposed PLSR-EIELM model can overcome two main flaws of the extreme learning machine (ELM), namely the intractable problem of determining the optimal number of hidden layer neurons and the over-fitting phenomenon. First, a traditional ELM is selected. Second, the weights between the input layer and the hidden layer are assigned randomly, and the nonlinear transformation of the independent variables is obtained from the output of the hidden layer neurons. In particular, the original input variables are regarded as enhanced inputs; the enhanced inputs and the nonlinearly transformed variables are then tied together as the whole set of independent variables. In this way, PLSR can be carried out to identify the PLS components not only from the nonlinearly transformed variables but also from the original input variables, which can remove the correlation among the whole set of independent variables and the expected outputs. Finally, the optimal relationship model between the whole set of independent variables and the expected outputs is obtained using PLSR. Thus, the PLSR-EIELM model is developed. The PLSR-EIELM model then serves as an intelligent measurement tool for the key variables of the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. The experimental results show that the predictive accuracy of PLSR-EIELM is stable, which indicates that PLSR-EIELM is robust. Moreover, compared with ELM, PLSR, hierarchical ELM (HELM), and PLSR-ELM, PLSR-EIELM achieves much smaller predicted relative errors in these two applications. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
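
    The structure of the PLSR-EIELM model described here, a randomly weighted hidden layer whose outputs are concatenated with the original ('enhanced') inputs and then regressed with partial least squares, can be sketched in a few lines of Python. The hidden-layer size, the tanh activation and the number of PLS components are placeholders; the paper's tuning and validation are not reproduced.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        class PLSREIELM:
            def __init__(self, n_hidden=50, n_components=5, seed=0):
                self.n_hidden, self.n_components, self.seed = n_hidden, n_components, seed

            def fit(self, X, y):
                rng = np.random.default_rng(self.seed)
                # Randomly assigned input-to-hidden weights and biases (ELM step).
                self.W = rng.standard_normal((X.shape[1], self.n_hidden))
                self.b = rng.standard_normal(self.n_hidden)
                H = np.tanh(X @ self.W + self.b)        # nonlinear transformation
                Z = np.hstack([X, H])                   # enhanced inputs + hidden outputs
                self.pls = PLSRegression(n_components=self.n_components).fit(Z, y)
                return self

            def predict(self, X):
                H = np.tanh(X @ self.W + self.b)
                return self.pls.predict(np.hstack([X, H]))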

  12. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku kose

    2017-01-01

    In fields which require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. After a while, however, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an...

  13. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    Science.gov (United States)

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), whereas root mean square error of prediction (RMSEP) was used as a fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, whereas GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set out of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
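
    Descriptor-subset selection with a GA and an RMSEP fitness, as compared in this record, can be sketched as a binary-mask search over the descriptor matrix. The sketch below is an assumption-laden illustration: the population size, mutation rate, number of PLS components and hold-out split are arbitrary, and the authors' GA settings are not reproduced.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error

        def rmsep(mask, X_tr, y_tr, X_te, y_te, n_comp=3):
            # Fitness: prediction error of a PLS model built on the selected descriptors.
            idx = np.flatnonzero(mask)
            if idx.size < n_comp:
                return np.inf
            pls = PLSRegression(n_components=n_comp).fit(X_tr[:, idx], y_tr)
            return mean_squared_error(y_te, pls.predict(X_te[:, idx])) ** 0.5

        def ga_select(X_tr, y_tr, X_te, y_te, pop=30, gens=50, p_mut=0.02, seed=0):
            rng = np.random.default_rng(seed)
            n = X_tr.shape[1]
            P = rng.random((pop, n)) < 0.1                # sparse initial descriptor masks
            for _ in range(gens):
                fit = np.array([rmsep(m, X_tr, y_tr, X_te, y_te) for m in P])
                keep = P[np.argsort(fit)[: pop // 2]]     # selection
                flips = rng.random(keep.shape) < p_mut    # bit-flip mutation
                P = np.vstack([keep, keep ^ flips])
            fit = np.array([rmsep(m, X_tr, y_tr, X_te, y_te) for m in P])
            return P[np.argmin(fit)]                      # best descriptor mask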

  14. Constrained Fuzzy Predictive Control Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Oussama Ait Sahed

    2015-01-01

    Full Text Available A fuzzy predictive controller using a particle swarm optimization (PSO) approach is proposed. The aim is to develop an efficient algorithm that is able to handle the relatively complex optimization problem with minimal computational time. This can be achieved using a reduced population size and a small number of iterations. In this algorithm, instead of using the uniform distribution as in the conventional PSO algorithm, the initial particle positions are distributed according to the normal distribution law within the area around the best position. The radius limiting this area is adaptively changed according to the tracking error values. Moreover, the choice of the initial best position is based on prior knowledge about the search space landscape and on the fact that in most practical applications the changes in the dynamic optimization problem are gradual. The efficiency of the proposed control algorithm is evaluated on the control of a model of a 4 × 4 Multi-Input Multi-Output industrial boiler. This model is characterized by being nonlinear with high interactions between its inputs and outputs, having nonminimum phase behaviour, and containing instabilities and time delays. The obtained results are compared with those of control algorithms based on the conventional PSO and on the linear approach.
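
    The record's key modification of PSO, drawing the initial particle positions from a normal distribution around the best known position with a radius adapted to the tracking error, can be sketched as follows. The inertia and acceleration constants are standard textbook values, and the radius-adaptation rule itself is left to the caller; both are assumptions rather than the paper's exact settings.

        import numpy as np

        def pso(cost, x_best_prior, radius, n_particles=15, iters=30,
                w=0.6, c1=1.5, c2=1.5, seed=0):
            # Initialize particles around the previous best position (normal law)
            # instead of uniformly over the whole search space; 'radius' would be
            # adapted from the tracking error between successive optimization calls.
            rng = np.random.default_rng(seed)
            dim = len(x_best_prior)
            X = x_best_prior + rng.normal(0.0, radius, size=(n_particles, dim))
            V = np.zeros_like(X)
            P = X.copy()                                  # personal bests
            pcost = np.array([cost(x) for x in X])
            g = P[np.argmin(pcost)]                       # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
                X = X + V
                c = np.array([cost(x) for x in X])
                better = c < pcost
                P[better], pcost[better] = X[better], c[better]
                g = P[np.argmin(pcost)]
            return g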

  15. Conceptual Design of GRIG (GUI Based RETRAN Input Generator)

    International Nuclear Information System (INIS)

    Lee, Gyung Jin; Hwang, Su Hyun; Hong, Soon Joon; Lee, Byung Chul; Jang, Chan Su; Um, Kil Sup

    2007-01-01

    For the development of a high performance methodology using an advanced transient analysis code, it is essential to generate the basic input of the transient analysis code under rigorous QA procedures. There are various types of operating NPPs (Nuclear Power Plants) in Korea, such as Westinghouse plants, KSNP (Korea Standard Nuclear Power Plant), APR1400 (Advanced Power Reactor), etc., so there are some difficulties in systematically generating and managing the input of the transient analysis code while reflecting the inherent characteristics of the various types of NPPs. To minimize user faults and invested manpower, and to generate the basic inputs of the transient analysis code effectively and accurately for all domestic NPPs, a program is needed that can automatically generate the basic input, directly applicable to the transient analysis, from the NPP design material. ViRRE (Visual RETRAN Running Environment), developed by KEPCO (Korea Electric Power Corporation) and KAERI (Korea Atomic Energy Research Institute), provides a convenient working environment for Kori Unit 1/2. ViRRE shows the calculated results through an on-line display, but its capability is limited to the convenient execution of RETRAN, so it cannot be used as an input generator. ViSA (Visual System Analyzer), developed by KAERI, is an NPA (Nuclear Plant Analyzer) using the RETRAN and MARS codes as its thermal-hydraulic engines. ViSA contains both pre-processing and post-processing functions. In the pre-processing, only the trip data cards and boundary conditions can be changed through the GUI mode, based on a pre-prepared text input, so the capability for input generation is very limited. SNAP (Symbolic Nuclear Analysis Package), developed by Applied Programming Technology, Inc. and the NRC (Nuclear Regulatory Commission), provides an efficient working environment for the use of nuclear safety analysis codes such as the RELAP5 and TRAC-M codes. SNAP covers wide aspects of thermal-hydraulic analysis from model creation through data analysis

  16. Going beyond Input Quantity: "Wh"-Questions Matter for Toddlers' Language and Cognitive Development

    Science.gov (United States)

    Rowe, Meredith L.; Leech, Kathryn A.; Cabrera, Natasha

    2017-01-01

    There are clear associations between the overall quantity of input children are exposed to and their vocabulary acquisition. However, by uncovering specific features of the input that matter, we can better understand the mechanisms involved in vocabulary learning. We examine whether exposure to "wh"-questions, a challenging quality of…

  17. Recent developments of discrete material optimization of laminated composite structures

    DEFF Research Database (Denmark)

    Lund, Erik; Sørensen, Rene

    2015-01-01

    This work will give a quick summary of recent developments of the Discrete Material Optimization approach for structural optimization of laminated composite structures. This approach can be seen as a multi-material topology optimization approach for selecting the best ply material and number of plies in a laminated composite structure. The conceptual combinatorial design problem is relaxed to a continuous problem such that well-established gradient based optimization techniques can be applied, and the optimization problem is solved on the basis of interpolation schemes with penalization...

  18. Optimal allocation of testing resources for statistical simulations

    Science.gov (United States)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.

  19. Developing an optimal electricity generation mix for the UK 2050 future

    International Nuclear Information System (INIS)

    Sithole, H.; Cockerill, T.T.; Hughes, K.J.; Ingham, D.B.; Ma, L.; Porter, R.T.J.; Pourkashanian, M.

    2016-01-01

    The UK electricity sector is undergoing a transition driven by domestic and regional climate change and environmental policies. Aging electricity generating infrastructure is set to affect capacity margins after 2015. These developments, coupled with the increased proportion of inflexible and variable generation technologies, will affect the security of electricity supply. Investment in low-carbon technologies is central to the UK meeting its energy policy objectives. The complexity of these challenges over the future development of the UK electricity generation sector has motivated this study, which aims to develop a policy-informed optimal electricity generation scenario to assess the sector's transition to 2050. The study analyses the level of deployment of electricity generating technologies in line with the 80% by 2050 emission target. This is achieved by using an Excel-based “Energy Optimisation Calculator” which captures the interaction of various inputs to produce a least-cost generation mix. The key results focus on the least-cost electricity generation portfolio, emission intensity, and total investment required to assemble a sustainable electricity generation mix. A carbon neutral electricity sector is feasible if low-carbon technologies are deployed on a large scale. This requires a robust policy framework that supports the development and deployment of mature and emerging technologies. - Highlights: • Electricity generation decarbonised in 2030 and nearly carbon neutral in 2050. • Nuclear, CCS and offshore wind are central in decarbonising electricity generation. • Uncertainty over future fuel and investment cost has no impact on decarbonisation. • Unabated fossil fuel generation is limited unless combined with Carbon Capture and Storage. • Decarbonising electricity generation could cost about £213.4 billion by 2030.

  20. Wireless Sensor Network Optimization: Multi-Objective Paradigm.

    Science.gov (United States)

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-07-20

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Depending on the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the different natures of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, a plethora of optimization solution types exist. We review and analyze different desirable objectives to show whether they conflict with each other, support each other, or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks, which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints which are considered while formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of this article relating to multi-objective optimization, it will open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks.

  2. Uncertainty of input data for room acoustic simulations

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Marbjerg, Gerd; Brunskog, Jonas

    2016-01-01

    Although many room acoustic simulation models have been well established, simulation results will never be accurate with inaccurate and uncertain input data. This study addresses inappropriateness and uncertainty of input data for room acoustic simulations. Firstly, the random incidence absorption...... and scattering coefficients are insufficient when simulating highly non-diffuse rooms. More detailed information, such as the phase and angle dependence, can greatly improve the simulation results of pressure-based geometrical and wave-based models at frequencies well below the Schroeder frequency. Phase...... summarizes potential advanced absorption measurement techniques that can improve the quality of input data for room acoustic simulations. Lastly, plenty of uncertain input data are copied from unreliable sources. Software developers and users should be careful when spreading such uncertain input data. More...

  3. Computer-Mediated Input, Output and Feedback in the Development of L2 Word Recognition from Speech

    Science.gov (United States)

    Matthews, Joshua; Cheng, Junyu; O'Toole, John Mitchell

    2015-01-01

    This paper reports on the impact of computer-mediated input, output and feedback on the development of second language (L2) word recognition from speech (WRS). A quasi-experimental pre-test/treatment/post-test research design was used involving three intact tertiary level English as a Second Language (ESL) classes. Classes were either assigned to…

  4. A strategy for integrated low-input potato production

    NARCIS (Netherlands)

    Vereijken, P.H.; Loon, van C.D.

    1991-01-01

    Current systems of potato growing use large amounts of pesticides and fertilizers; these inputs are costly and cause environmental problems. In this paper a strategy for integrated low-input potato production is developed with the aim of reducing costs, improving product quality and reducing

  5. Progress on DART code optimization

    International Nuclear Information System (INIS)

    Taboada, Horacio; Solis, Diego; Rest, Jeffrey

    1999-01-01

    This work describes the progress made on the design and development of a new optimized version of the DART code (DART-P), a mechanistic computer model for the performance calculation and assessment of aluminum dispersion fuel. It is part of a collaboration agreement between CNEA and ANL in the area of Low Enriched Uranium Advanced Fuels, conducted under the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy, signed on October 16, 1997 between the US DOE and the National Atomic Energy Commission of the Argentine Republic. DART optimization is a biannual program; it has been operative since February 8, 1999 and has the following goals: 1. Design and develop a new DART calculation kernel for implementation within a parallel processing architecture. 2. Design and develop new user-friendly I/O routines to be resident on a Personal Computer (PC)/Workstation (WS) platform. 2.1. The new input interface will be designed and developed by means of a visual interface, able to guide the user in the construction of the problem to be analyzed with the aid of a new database (described in item 3, below). The new I/O interface will include input data checks in order to avoid corrupted input data. 2.2. The new output interface will be designed and developed by means of graphical tools, able to translate numeric data output into 'on line' graphic information. 3. Design and develop a new irradiated materials database, to be resident on the PC/WS platform, so as to facilitate the analysis of the behavior of different fuel and meat compositions with DART-P. Currently, a different version of DART is used for oxide, silicide, and advanced alloy fuels. 4. Develop rigorous general inspection algorithms in order to provide valuable DART-P benchmarks. 5. Design and develop new models, such as superplasticity, elastoplastic feedback, improved models for the calculation of fuel deformation and the evolution of the fuel microstructure for

  6. Optimization Model for Machinery Selection of Multi-Crop Farms in Elsuki Agricultural Scheme

    Directory of Open Access Journals (Sweden)

    Mysara Ahmed Mohamed

    2017-07-01

    Full Text Available The machinery optimization model was developed to aid decision-makers and farm machinery managers in determining the optimal number of tractors, scheduling the agricultural operations and minimizing total machinery costs. For the purposes of model verification, validation and application, input data were collected from primary and secondary sources at the Elsuki agricultural scheme for two seasons, namely 2011-2012 and 2013-2014. Model verification was made by comparing the number of tractors at the Elsuki agricultural scheme for season 2011-2012 with that estimated by the model. The model succeeded in reducing the number of tractors and the operation total cost by 23%. The effect of the optimization model on the elements of direct cost saving indicated that the highest cost saving is reached with depreciation, repair and maintenance (23%) and the minimum cost saving is attained with fuel cost (22%). Sensitivity analysis of changes in model inputs (cultivated area and operation total cost) showed the following: increasing the operation total cost by 10% decreased the total number of tractors after optimization by 23%, and the total cost of operations was also decreased by 23%; increasing the cultivated area by 10% decreased the total number of tractors after optimization by 12%, and the total cost of operations was also decreased by 12%, from 16,669,206 SDG (1,111,280 $) to 14,636,376 SDG (975,758 $). For the case of multiple inputs, the combined effect of area and operation total cost decreased the maximum number of tractors by 12%, and the total cost of operations also decreased by 12%. It is recommended to apply the optimization model as a pre-requisite for improving machinery management during implementation of machinery scheduling.
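
    The core of such a machinery model is the arithmetic of matching required tractor-hours per operation window to fleet size and cost. The sketch below shows only that calculation; the operations, areas, hour rates and cost per hour are hypothetical placeholders, not Elsuki scheme data.

      import math

      # Hypothetical season: (operation, area in feddan, tractor-hours per feddan).
      operations = [("ploughing", 8000, 1.2), ("harrowing", 8000, 0.8), ("ridging", 8000, 0.6)]
      window_hours = 300            # workable tractor-hours per machine in each operation window
      cost_per_hour = 450           # operating cost per tractor-hour, SDG (assumed)

      fleet, total_cost = 0, 0.0
      for name, area, hours_per_feddan in operations:
          required_hours = area * hours_per_feddan
          tractors = math.ceil(required_hours / window_hours)   # minimum fleet for this window
          fleet = max(fleet, tractors)
          total_cost += required_hours * cost_per_hour
          print(f"{name:10s} needs {required_hours:7.0f} h -> {tractors} tractors")
      print("fleet size:", fleet, "| season operating cost (SDG):", round(total_cost))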

  7. TART input manual

    International Nuclear Information System (INIS)

    Kimlinger, J.R.; Plechaty, E.F.

    1982-01-01

    The TART code is a Monte Carlo neutron/photon transport code that runs only on the CRAY computer. All the input cards for the TART code are listed, and definitions for all input parameters are given. The execution and limitations of the code are described, and inputs for two sample problems are given.

  8. The long-term development of the energy input in transportation, 1970-2020

    Energy Technology Data Exchange (ETDEWEB)

    Meiren, P B [E.F.C.E.E., Mechelen (Belgium)

    1996-12-01

    This paper is a modest statistical and economic analysis of the energy input in the transportation sector over the past twenty-five years (1970-1995) and an attempt at looking ahead over the next twenty-five years (1995-2020). After World War II, passenger cars and trucks became the means of transportation par excellence and are still the main means of moving both people and freight. Energy input statistics were born. Let us see what they teach us. (EG)

  9. FLUTAN input specifications

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Baumann, W.; Willerding, G.

    1991-05-01

    FLUTAN is a highly vectorized computer code for 3-D fluid-dynamic and thermal-hydraulic analyses in Cartesian and cylindrical coordinates. It is related to the family of COMMIX codes originally developed at Argonne National Laboratory, USA. To a large extent, FLUTAN relies on basic concepts and structures imported from COMMIX-1B and COMMIX-2, which were made available to KfK in the framework of cooperation contracts in the fast reactor safety field. While on the one hand not all features of the original COMMIX versions have been implemented in FLUTAN, the code on the other hand includes some essential innovative options like the CRESOR solution algorithm, a general 3-dimensional rebalancing scheme for solving the pressure equation, and LECUSSO-QUICK-FRAM techniques suitable for reducing 'numerical diffusion' in both the enthalpy and momentum equations. This report provides users with detailed input instructions, presents formulations of the various model options, and explains by means of comprehensive sample input how to use the code. (orig.) [de

  10. On Generating Optimal Signal Probabilities for Random Tests: A Genetic Approach

    Directory of Open Access Journals (Sweden)

    M. Srinivas

    1996-01-01

    Full Text Available Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm-based approach for determining the optimal input distributions for generating random test vectors is proposed in the paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger.
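
    A minimal genetic algorithm over per-input signal probabilities might look like the sketch below. The fitness function is a toy surrogate that merely rewards probabilities away from 0 and 1; it stands in for the COP-based cost function and ISCAS-85 fault simulation used in the paper, which are not reproduced here.

      import random

      N_INPUTS, POP, GENS = 8, 30, 60

      def fitness(probs):
          # Toy surrogate for test quality, standing in for the COP-based cost function.
          return sum(p * (1.0 - p) for p in probs)

      def mutate(ind, rate=0.2):
          return [min(1.0, max(0.0, p + random.gauss(0, 0.1))) if random.random() < rate else p
                  for p in ind]

      def crossover(a, b):
          cut = random.randrange(1, N_INPUTS)
          return a[:cut] + b[cut:]

      pop = [[random.random() for _ in range(N_INPUTS)] for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=fitness, reverse=True)
          parents = pop[:POP // 2]                      # keep the fitter half
          children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP - len(parents))]
          pop = parents + children

      best = max(pop, key=fitness)
      print("best input probabilities:", [round(p, 2) for p in best])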

  11. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima

    2017-07-10

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.

  12. Optimal design of tests for heat exchanger fouling identification

    International Nuclear Information System (INIS)

    Palmer, Kyle A.; Hale, William T.; Such, Kyle D.; Shea, Brian R.; Bollas, George M.

    2016-01-01

    Highlights: • Built-in test design that optimizes the information extractable from that test. • Method minimizes the covariance of a fault with system uncertainty. • Method applied for the identification and quantification of heat exchanger fouling. • Heat exchanger fouling is identifiable despite the uncertainty in inputs and states. - Abstract: Particulate fouling in plate fin heat exchangers of aircraft environmental control systems is a recurring issue in environments rich in foreign object debris. Heat exchanger fouling detection, in terms of quantification of its severity, is critical for aircraft maintenance scheduling and safe operation. In this work, we focus on methods for offline fouling detection during aircraft ground handling, where the allowable variability range of admissible inputs is wider. We explore methods of optimal experimental design to estimate heat exchanger inputs and input trajectories that maximize the identifiability of fouling. In particular, we present a methodology in which D-optimality is used as a criterion for statistically significant inference of heat exchanger fouling in uncertain environments. The optimal tests are designed on the basis of a heat exchanger model of the inherent mass, energy and momentum balances, validated against literature data. The model is then used to infer sensitivities of the heat exchanger outputs with respect to fouling metrics and maximize them by manipulating input trajectories, thus enhancing the accuracy in quantifying the fouling extent. The proposed methodology is evaluated with statistical indices of the confidence in estimating thermal fouling resistance at uncertain operating conditions, explored in a series of case studies.
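
    D-optimal test design of this kind amounts to choosing the input condition that maximises the determinant of the Fisher information built from output sensitivities with respect to the fouling parameters. The sketch below shows that criterion on a two-output, two-parameter toy model with finite-difference sensitivities; the model and the candidate grid are assumptions, not the validated heat exchanger model of the paper.

      import numpy as np
      from itertools import product

      def model(u, theta):
          # Toy heat exchanger surrogate: two outputs, two uncertain fouling parameters.
          flow, t_in = u
          r_foul, h = theta
          return np.array([t_in + h * flow / (1.0 + r_foul * flow),   # outlet temperature proxy
                           h * flow ** 0.8 / (1.0 + r_foul)])         # heat duty proxy

      def fisher_det(u, theta=(0.5, 2.0), eps=1e-4):
          # Finite-difference sensitivities of outputs w.r.t. parameters -> det(S^T S).
          S = []
          for i in range(len(theta)):
              tp, tm = list(theta), list(theta)
              tp[i] += eps; tm[i] -= eps
              S.append((model(u, tp) - model(u, tm)) / (2 * eps))
          S = np.array(S).T                      # rows: outputs, cols: parameters
          return np.linalg.det(S.T @ S)          # D-optimality criterion

      candidates = list(product(np.linspace(0.2, 2.0, 10), np.linspace(280, 320, 9)))
      best = max(candidates, key=fisher_det)
      print("D-optimal test condition (flow, inlet T):", best)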

  13. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    Science.gov (United States)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
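
    A common way to realise such GA-based input selection is to encode each candidate input set as a binary mask and score it by the accuracy of a cheap approximator fitted on the selected inputs. In the sketch below a linear least-squares fit stands in for the neural network and the data are synthetic; both are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 12))                      # candidate input parameters
      y = 2 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=200)

      def fitness(mask):
          # Fit a least-squares model on the selected inputs (stand-in for the NN)
          # and penalise large input sets.
          if not mask.any():
              return -1e9
          Xs = X[:, mask]
          coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
          err = np.mean((y - Xs @ coef) ** 2)
          return -err - 0.01 * mask.sum()

      pop = rng.integers(0, 2, size=(40, X.shape[1])).astype(bool)
      for _ in range(80):
          scores = np.array([fitness(m) for m in pop])
          parents = pop[np.argsort(scores)[-20:]]          # keep the best half
          kids = []
          for _ in range(20):
              a, b = parents[rng.integers(20)], parents[rng.integers(20)]
              cut = rng.integers(1, X.shape[1])
              child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
              flip = rng.random(X.shape[1]) < 0.05         # bit-flip mutation
              kids.append(child ^ flip)
          pop = np.vstack([parents, np.array(kids)])

      best = max(pop, key=fitness)
      print("selected inputs:", np.flatnonzero(best))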

  14. Multiple constant multiplication optimizations for field programmable gate arrays

    CERN Document Server

    Kumm, Martin

    2016-01-01

    This work covers field programmable gate array (FPGA)-specific optimizations of circuits computing the multiplication of a variable by several constants, commonly denoted as multiple constant multiplication (MCM). These optimizations focus on low resource usage but high performance. They comprise the use of fast carry-chains in adder-based constant multiplications including ternary (3-input) adders as well as the integration of look-up table-based constant multipliers and embedded multipliers to get the optimal mapping to modern FPGAs. The proposed methods can be used for the efficient implementation of digital filters, discrete transforms and many other circuits in the domain of digital signal processing, communication and image processing. Contents: Heuristic and ILP-Based Optimal Solutions for the Pipelined Multiple Constant Multiplication Problem; Methods to Integrate Embedded Multipliers, LUT-Based Constant Multipliers and Ternary (3-Input) Adders; An Optimized Multiple Constant Multiplication Architecture ...

  15. Optimally combined regional geoid models for the realization of height systems in developing countries - ORG4heights

    Science.gov (United States)

    Lieb, Verena; Schmidt, Michael; Willberg, Martin; Pail, Roland

    2017-04-01

    Precise height systems require high-resolution and high-quality gravity data. However, such data sets are sparse especially in developing or newly industrializing countries. Thus, we initiated the DFG-project "ORG4heights" for the formulation of a general scientific concept how to (1) optimally combine all available data sets and (2) estimate realistic errors. The resulting regional gravity field models then deliver the fundamental basis for (3) establishing physical national height systems. The innovative key aspects of the project incorporate the development of a method which links (low- up to mid-resolution) gravity satellite mission data and (high- down to low-quality) terrestrial data. Hereby, an optimal combination of the data utilizing their highest measure of information including uncertainty quantification and analyzing systematic omission errors is pursued. Regional gravity field modeling via Multi-Resolution Representation (MRR) and Least Squares Collocation (LSC) are studied in detail and compared based on their theoretical fundamentals. From the findings, MRR shall be further developed towards implementing a pyramid algorithm. Within the project, we investigate comprehensive case studies in Saudi Arabia and South America, i. e. regions with varying topography, by means of simulated data with heterogeneous distribution, resolution, quality and altitude. GPS and tide gauge records serve as complementary input or validation data. The resulting products include error propagation, internal and external validation. A generalized concept then is derived in order to establish physical height systems in developing countries. The recommendations may serve as guidelines for sciences and administration. We present the ideas and strategies of the project, which combines methodical development and practical applications with high socio-economic impact.

  16. Optimal control of evaporator and washer plants

    International Nuclear Information System (INIS)

    Niemi, A.J.

    1989-01-01

    Tests with radioactive tracers were used for experimental analysis of a multiple-effect evaporator plant. The residence time distribution of the liquor in each evaporator was described by one or two perfect mixers with time delay and by-pass flow terms. The theoretical model of a single evaporator unit was set up on the basis of its instantaneous heat and mass balances and such models were fitted to the test data. The results were interpreted in terms of physical structures of the evaporators. Further model parameters were evaluated by conventional step tests and by measurements of process variables at one or more steady states. Computer simulation and comparison with the experimental results showed that the model produces a satisfactory response to solids concentration input and could be extended to cover the steam feed and liquor flow inputs. An optimal feedforward control algorithm was developed for a two unit, co-current evaporator plant. The control criterion comprised the deviations of the final solids content of liquor and the consumption of fresh steam, from their optimal steady-state values. In order to apply the algorithm, the model of the solids in liquor was reduced to two nonlinear differential equations. (author)

  17. GARFEM input deck description

    Energy Technology Data Exchange (ETDEWEB)

    Zdunek, A.; Soederberg, M. (Aeronautical Research Inst. of Sweden, Bromma (Sweden))

    1989-01-01

    The input card deck for the finite element program GARFEM version 3.2 is described in this manual. The program includes, but is not limited to, capabilities to handle the following problems: * Linear bar and beam element structures, * Geometrically non-linear problems (bar and beam), both static and transient dynamic analysis, * Transient response dynamics from a catalog of time-varying external forcing function types or input function tables, * Eigenvalue solution (modes and frequencies), * Multi point constraints (MPC) for the modelling of mechanisms and e.g. rigid links. The MPC definition is used only in the geometrically linearized sense, * Beams with disjunct shear axis and neutral axis, * Beams with rigid offset. An interface exists that connects GARFEM with the program GAROS. GAROS is a program for aeroelastic analysis of rotating structures. Since this interface was developed, GARFEM now serves as a preprocessor program in place of NASTRAN, which was formerly used. Documentation of the methods applied in GARFEM exists but is so far limited to the capacities in existence before the GAROS interface was developed.

  18. On-line efficiency optimization of a synchronous reluctance motor

    Energy Technology Data Exchange (ETDEWEB)

    Lubin, Thierry; Razik, Hubert; Rezzoug, Abderrezak [Groupe de Recherche en Electrotechnique et Electronique de Nancy, GREEN, CNRS-UMR 7037, Universite Henri Poincare, BP 239, 54506 Vandoeuvre-les-Nancy Cedex (France)

    2007-04-15

    This paper deals with on-line optimum-efficiency control of a synchronous reluctance motor drive. The input power minimization control is implemented with a search controller using the Fibonacci search algorithm. It searches for the optimal reference value of the d-axis stator current, for which the input power is minimum. The input power is calculated from the measured dc-bus current and dc-bus voltage of the inverter. Rotor-oriented vector control of the synchronous reluctance machine with the efficiency optimization controller is implemented on a DSP board (TMS302C31). Experimental results are presented to validate the proposed control method. It is shown that stability problems can appear during the search process. (author)
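
    The search controller reduces to a one-dimensional line search for the d-axis current reference that minimises the measured input power. The sketch below uses a golden-section search, a close relative of the Fibonacci search named in the paper, over a mock convex power curve standing in for the dc-bus measurement; the numbers are not machine data.

      import math

      def measured_input_power(i_d):
          # Mock stand-in for the dc-bus voltage*current measurement: a convex bowl
          # with its minimum near i_d = 1.8 A (assumed, not machine data).
          return 250.0 + 40.0 * (i_d - 1.8) ** 2

      def golden_section_min(f, a, b, tol=1e-3):
          """Minimize a unimodal function f on [a, b] (close relative of Fibonacci search)."""
          invphi = (math.sqrt(5.0) - 1.0) / 2.0
          x1, x2 = b - invphi * (b - a), a + invphi * (b - a)
          f1, f2 = f(x1), f(x2)
          while b - a > tol:
              if f1 < f2:                      # minimum lies in [a, x2]
                  b, x2, f2 = x2, x1, f1
                  x1 = b - invphi * (b - a)
                  f1 = f(x1)
              else:                            # minimum lies in [x1, b]
                  a, x1, f1 = x1, x2, f2
                  x2 = a + invphi * (b - a)
                  f2 = f(x2)
          return 0.5 * (a + b)

      i_d_opt = golden_section_min(measured_input_power, 0.5, 4.0)
      print(f"optimal d-axis current reference: {i_d_opt:.3f} A")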

  19. Parametric Optimization of Hospital Design

    DEFF Research Database (Denmark)

    Holst, Malene Kirstine; Kirkegaard, Poul Henning; Christoffersen, L.D.

    2013-01-01

    This paper presents a parametric performance-based design model for optimizing hospital design. The design model operates with geometric input parameters defining the functional requirements of the hospital and input parameters in terms of performance objectives defining the design requirements...... and preferences of the hospital with respect to performances. The design model takes its point of departure in the hospital functionalities as a set of defined parameters and rules describing the design requirements and preferences....

  20. Adaptively optimizing stochastic resonance in visual system

    Science.gov (United States)

    Yang, Tao

    1998-08-01

    A recent psychophysics experiment has shown that noise strength can affect perceived image quality. This work gives an adaptive process for achieving the optimal perceived image quality in a simple image perception array, which is a simple model of an image sensor. A reference image from memory is used for constructing a cost function and defining the optimal noise strength where the cost function reaches its minimum point. The reference image is a binary image, which is used to define the background and the object. Finally, an adaptive algorithm is proposed for searching for the optimal noise strength. Computer experimental results show that if the reference image is a thresholded version of the sub-threshold input image, then the output of the sensor array is optimal, in which the background and the object have the largest contrast. If the reference image is different from a thresholded version of the sub-threshold input image, then the output usually gives a sub-optimal contrast between the object and the background.
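
    The adaptive search described here can be mimicked by sweeping the noise strength, thresholding the noisy sub-threshold image, and scoring the contrast between object and background against the binary reference. The sketch below is such a toy demonstration; the image, threshold and noise range are assumptions, not the paper's experimental settings.

      import numpy as np

      rng = np.random.default_rng(1)
      h = w = 64
      reference = np.zeros((h, w), bool)
      reference[16:48, 16:48] = True                      # binary reference: object vs. background

      # Sub-threshold input: the object is slightly brighter, but both sit below the threshold.
      image = np.where(reference, 0.42, 0.30)
      threshold = 0.5

      def perceived_contrast(sigma, trials=20):
          # Average the thresholded output over noise realisations, then compare
          # object and background firing rates (their difference is the contrast).
          rates = np.zeros((h, w))
          for _ in range(trials):
              rates += (image + rng.normal(0, sigma, (h, w))) > threshold
          rates /= trials
          return rates[reference].mean() - rates[~reference].mean()

      sigmas = np.linspace(0.01, 0.5, 25)
      best = max(sigmas, key=perceived_contrast)
      print(f"optimal noise strength ~ {best:.2f}")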

  1. Optimization of the fuel assembly for the Canadian SuperCritical Water-cooled Reactor (SCWR)

    Energy Technology Data Exchange (ETDEWEB)

    French, C., E-mail: Corey.French@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada); Bonin, H.; Chan, P.K. [Royal Military College of Ontario, Kingston, Ontario (Canada)

    2013-07-01

    An approach to develop a parametric optimization tool to support the Canadian Supercritical Water-cooled Reactor (SCWR) fuel design is presented in this work. The 2D benchmark lattices for 78-pin and 64-pin fuel assemblies are used as the initial models from which fuel performance evaluation and subsequent optimization stem. A tandem optimization procedure employing the steepest descent method is integrated. The physics codes WIMS-AECL, MCNP6 and SERPENT are used to calculate and verify select performance factors. The results are used as inputs to an optimization algorithm that yields the optimal fresh fuel isotopic composition and lattice geometry. Preliminary results on verification of infinite lattice reactivity are demonstrated in this paper. (author)
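
    A bare-bones steepest-descent loop of the kind referred to above is sketched below, with a numerical gradient over two lattice parameters and a toy penalty objective standing in for the WIMS-AECL/MCNP6/SERPENT evaluations; the parameter names, targets and coefficients are invented for illustration.

      import numpy as np

      def objective(x):
          # Toy stand-in for the lattice-physics figure of merit: penalise deviation of
          # k-infinity and a peaking factor from targets (real values would come from
          # WIMS-AECL / MCNP6 / SERPENT runs).
          enrich, pitch = x
          k_inf = 1.05 + 0.08 * (enrich - 5.0) - 0.02 * (pitch - 25.0)
          peaking = 1.10 + 0.03 * (pitch - 25.0) ** 2 + 0.01 * (enrich - 5.0) ** 2
          return (k_inf - 1.15) ** 2 + 0.5 * (peaking - 1.10) ** 2

      def grad(f, x, eps=1e-5):
          # Central finite-difference gradient.
          g = np.zeros_like(x)
          for i in range(len(x)):
              e = np.zeros_like(x); e[i] = eps
              g[i] = (f(x + e) - f(x - e)) / (2 * eps)
          return g

      x = np.array([5.0, 25.0])        # initial enrichment (wt%) and pitch (mm), assumed
      step = 2.0
      for _ in range(300):
          g = grad(objective, x)
          if np.linalg.norm(g) < 1e-6:
              break
          x = x - step * g             # steepest-descent update
      print("optimised (enrichment, pitch):", np.round(x, 3), " objective:", round(objective(x), 6))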

  2. Controlling uncertain neutral dynamic systems with delay in control input

    International Nuclear Information System (INIS)

    Park, Ju H.; Kwon, O.

    2005-01-01

    This article gives a novel criterion for the asymptotic stabilization of the zero solutions of a class of neutral systems with delays in control input. By constructing Lyapunov functionals, we have obtained the criterion which is expressed in terms of matrix inequalities. The solutions of the inequalities can be easily solved by efficient convex optimization algorithms. A numerical example is included to illustrate the design procedure of the proposed method

  3. A Development of a System Enables Character Input and PC Operation via Voice for a Physically Disabled Person with a Speech Impediment

    Science.gov (United States)

    Tanioka, Toshimasa; Egashira, Hiroyuki; Takata, Mayumi; Okazaki, Yasuhisa; Watanabe, Kenzi; Kondo, Hiroki

    We have designed and implemented a voice-based PC operation support system for a physically disabled person with a speech impediment. Voice operation is an effective method for a physically disabled person with involuntary movement of the limbs and the head. We have applied a commercial speech recognition engine to develop our system for practical purposes. Adoption of a commercial engine reduces development cost and helps make our system useful to other people with speech impediments. We have customized the commercial speech recognition engine so that it can recognize the utterances of a person with a speech impediment. We have restricted the words that the recognition engine recognizes and separated target words from similarly pronounced words to avoid misrecognition. The huge number of words registered in commercial speech recognition engines causes frequent misrecognition of utterances by people with speech impediments, because their utterances are unclear and unstable. We have solved this problem by narrowing the choice of inputs down to a small number and by registering ambiguous pronunciations in addition to the original ones. To realize all character inputs and all PC operations with a small number of words, we have designed multiple input modes with categorized dictionaries and have introduced two-step input in each mode except numeral input. The system we have developed is at a practical level. The first author of this paper is physically disabled with a speech impediment. Using this system, he has been able not only to input characters into the PC but also to operate the Windows system smoothly. He uses this system in his daily life, and this paper was written by him with it. At present, the speech recognition is customized to him. It is, however, possible to customize it for other users by changing the registered words and pronunciations according to each user's utterances.

  4. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization

    Directory of Open Access Journals (Sweden)

    McDonald Karen

    2011-08-01

    Full Text Available Abstract Background Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. Results The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Conclusion Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.

  5. Portfolio-Scale Optimization of Customer Energy Efficiency Incentive and Marketing: Cooperative Research and Development Final Report, CRADA Number CRD-13-535

    Energy Technology Data Exchange (ETDEWEB)

    Brackney, Larry J. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-02-17

    North East utility National Grid (NGrid) is developing a portfolio-scale application of OpenStudio designed to optimize incentive and marketing expenditures for their energy efficiency (EE) programs. NGrid wishes to leverage a combination of geographic information systems (GIS), public records, customer data, and content from the Building Component Library (BCL) to form a JavaScript Object Notation (JSON) input file that is consumed by an OpenStudio-based expert system for automated model generation. A baseline model for each customer building will be automatically tuned using electricity and gas consumption data, and a set of energy conservation measures (ECMs) associated with each NGrid incentive program will be applied to the model. The simulated energy performance and return on investment (ROI) will be compared with customer hurdle rates and available incentives to A) optimize the incentive required to overcome the customer hurdle rate and B) determine if marketing activity associated with the specific ECM is warranted for that particular customer. Repeated across their portfolio, this process will enable NGrid to substantially optimize their marketing and incentive expenditures, targeting those customers that will likely adopt and benefit from specific EE programs.

  6. Finding the Quickest Straight-Line Trajectory for a Three-Wheeled Omnidirectional Robot under Input Voltage Constraints

    Directory of Open Access Journals (Sweden)

    Ki Bum Kim

    2015-01-01

    Full Text Available We provide an analytical solution to the problem of generating the quickest straight-line trajectory for a three-wheeled omnidirectional mobile robot, under the practical constraint of limited voltage. Applying the maximum principle to the geometric properties of the input constraints, we find that an optimal input vector of motor voltages has at least one extreme value when the orientation of the robot is fixed and two extreme values when rotation is allowed. We can find an explicit representation of the optimal vector for a motion under fixed orientation. We derive several properties of quickest straight-line trajectories and verify them through simulation. We show that the quickest trajectory when rotation is allowed is always faster than the quickest with fixed orientation.

  7. Multi-Objective Optimization for Analysis of Changing Trade-Offs in the Nepalese Water–Energy–Food Nexus with Hydropower Development

    Directory of Open Access Journals (Sweden)

    Sanita Dhaubanjar

    2017-02-01

    Full Text Available While the water–energy–food nexus approach is becoming increasingly important for more efficient resource utilization and economic development, limited quantitative tools are available to incorporate the approach in decision-making. We propose a spatially explicit framework that couples two well-established water and power system models to develop a decision support tool combining multiple nexus objectives in a linear objective function. To demonstrate our framework, we compare eight Nepalese power development scenarios based on five nexus objectives: minimization of power deficit, maintenance of water availability for irrigation to support food self-sufficiency, reduction in flood risk, maintenance of environmental flows, and maximization of power export. The deterministic multi-objective optimization model is spatially resolved to enable realistic representation of the nexus linkages and accounts for power transmission constraints using an optimal power flow approach. Basin inflows, hydropower plant specifications, reservoir characteristics, reservoir rules, irrigation water demand, environmental flow requirements, power demand, and transmission line properties are provided as model inputs. The trade-offs and synergies among these objectives were visualized for each scenario under multiple environmental flow and power demand requirements. Spatially disaggregated model outputs allowed for the comparison of scenarios not only based on fulfillment of nexus objectives but also scenario compatibility with existing infrastructure, supporting the identification of projects that enhance overall system efficiency. Though the model is applied to the Nepalese nexus from a power development perspective here, it can be extended and adapted for other problems.
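
    A deliberately tiny version of such a linear objective function is sketched below: a single-season water allocation in which weighted hydropower and irrigation benefits are maximised subject to total water availability and a minimum environmental flow. The yields, weights and limits are hypothetical and far simpler than the spatially resolved, transmission-constrained model described above.

      from scipy.optimize import linprog

      # Hypothetical single-season water allocation (units: million m^3 and GWh).
      water_total = 1000.0
      eflow_min = 150.0                 # environmental flow requirement
      irrigation_demand = 400.0
      gwh_per_mcm = 0.25                # hydropower yield per million m^3 (assumed)

      # Decision variables: [x_hydro, x_irrigation, x_eflow].
      # linprog minimises, so the weighted benefits are negated.
      w_power, w_food = 0.6, 0.4
      c = [-w_power * gwh_per_mcm, -w_food, 0.0]

      A_ub = [[1.0, 1.0, 1.0]]          # cannot allocate more water than is available
      b_ub = [water_total]
      bounds = [(0, None), (0, irrigation_demand), (eflow_min, None)]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      x_h, x_i, x_e = res.x
      print(f"hydropower {x_h:.0f}, irrigation {x_i:.0f}, e-flow {x_e:.0f} (million m^3)")
      print(f"energy {x_h * gwh_per_mcm:.0f} GWh, irrigation demand met {100 * x_i / irrigation_demand:.0f}%")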

  8. Optimizing production under uncertainty

    DEFF Research Database (Denmark)

    Rasmussen, Svend

    This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept...... of state-contingent production functions and a definition of inputs including both sort of input, activity and allocation technology. It also analyses production decisions where production is combined with trading in state-contingent claims such as insurance contracts. The final part discusses

  9. Optimization Strategies for Hardware-Based Cofactorization

    Science.gov (United States)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  10. Modeling and sliding mode predictive control of the ultra-supercritical boiler-turbine system with uncertainties and input constraints.

    Science.gov (United States)

    Tian, Zhen; Yuan, Jingqi; Zhang, Xiang; Kong, Lei; Wang, Jingcheng

    2018-05-01

    The coordinated control system (CCS) serves an important role in load regulation, efficiency optimization and pollutant reduction for coal-fired power plants. The CCS faces tough challenges, such as wide-range load variation and various uncertainties and constraints. This paper aims to improve the load tracking ability and robustness of boiler-turbine units under wide-range operation. To capture the key dynamics of the ultra-supercritical boiler-turbine system, a nonlinear control-oriented model is developed based on mechanism analysis and model reduction techniques, which is validated against the historical operation data of a real 1000 MW unit. To simultaneously address the issues of uncertainties and input constraints, a discrete-time sliding mode predictive controller (SMPC) is designed with a dual-mode control law. Moreover, the input-to-state stability and robustness of the closed-loop system are proved. Simulation results are presented to illustrate the effectiveness of the proposed control scheme, which achieves good tracking performance, disturbance rejection ability and compatibility with input constraints. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Optimal teleportation with a noisy source

    Energy Technology Data Exchange (ETDEWEB)

    Taketani, Bruno G. [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Rio de Janeiro (Brazil); Physikalisches Institut der Albert-Ludwigs-Universitaet, Freiburg im Breisgau (Germany); Melo, Fernando de [Instituut voor Theoretische Fysica, Katholieke Universiteit Leuven, Leuven, Belgie (Belgium); Physikalisches Institut der Albert-Ludwigs-Universitaet, Freiburg im Breisgau (Germany); Matos Filho, Ruynet L. de [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Rio de Janeiro (Brazil)

    2012-07-01

    In this work we discuss the role of decoherence in quantum information protocols. Particularly, we study quantum teleportation in the realistic situation where not only the transmission channel is imperfect, but also the preparation of the state to be teleported. The optimal protocol to be applied in this situation is found and we show that taking into account the input state noise leads to sizable gains in teleportation fidelity. It is then evident that sources of noise in the input state preparation must be taken into consideration in order to maximize the teleportation fidelity. The optimization of the protocol can be defined for specific experimental realizations and accessible operations, giving a trade-off between protocol quality and experiment complexity.

  12. Optimization and Development of Swellable Controlled Porosity ...

    African Journals Online (AJOL)

    Purpose: To develop swellable controlled porosity osmotic pump tablet of theophylline and to define the formulation and process variables responsible for drug release by applying statistical optimization technique. Methods: Formulations were prepared based on Taguchi Orthogonal Array design and Fraction Factorial ...

  13. Design and multi-physics optimization of rotary MRF brakes

    Science.gov (United States)

    Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan

    2018-03-01

    Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive when the number of particles and the complexity of the problem increase. As a result, the execution speed becomes too slow to reach the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and conventional PSO is that the original single population is split into several subpopulations according to a division of labor. The distribution of tasks and the transfer of information to the next party have been inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. The wire type, MR fluid type, magnetic core material, and ideal current inputs have been determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method has shown better performance than conventional PSO and has provided small, lightweight, high-impedance rotary MRF brake designs.
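
    The division-of-labor idea can be illustrated with a PSO in which two subpopulations evolve independently and periodically exchange their best members. The sketch below does exactly that on a generic Rosenbrock test function standing in for the coupled brake model; swarm sizes, coefficients and the exchange period are arbitrary choices, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(2)

      def objective(x):
          # Generic test function (Rosenbrock) standing in for the multi-physics brake model.
          return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

      def make_swarm(n, dim=2, span=2.0):
          pos = rng.uniform(-span, span, (n, dim))
          return {"pos": pos, "vel": np.zeros((n, dim)),
                  "pbest": pos.copy(), "pbest_val": np.array([objective(p) for p in pos])}

      def step(s, guide, w=0.7, c1=1.4, c2=1.4):
          n, dim = s["pos"].shape
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          s["vel"] = w * s["vel"] + c1 * r1 * (s["pbest"] - s["pos"]) + c2 * r2 * (guide - s["pos"])
          s["pos"] += s["vel"]
          vals = np.array([objective(p) for p in s["pos"]])
          better = vals < s["pbest_val"]
          s["pbest"][better] = s["pos"][better]
          s["pbest_val"][better] = vals[better]

      swarms = [make_swarm(15), make_swarm(15)]            # two subpopulations ("hunting parties")
      for it in range(1, 201):
          for s in swarms:
              local_best = s["pbest"][s["pbest_val"].argmin()]
              step(s, local_best)                          # each subpopulation follows its own best
          if it % 25 == 0:                                 # periodic information transfer
              i, j = swarms[0]["pbest_val"].argmin(), swarms[1]["pbest_val"].argmin()
              swarms[0]["pos"][0] = swarms[1]["pbest"][j].copy()
              swarms[1]["pos"][0] = swarms[0]["pbest"][i].copy()

      best = min((s["pbest"][s["pbest_val"].argmin()] for s in swarms), key=objective)
      print("best solution:", np.round(best, 3), " objective:", round(objective(best), 5))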

  14. Optimization of a Fuzzy-Logic-Control-Based MPPT Algorithm Using the Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Po-Chen Cheng

    2015-06-01

    Full Text Available In this paper, an asymmetrical fuzzy-logic-control (FLC)-based maximum power point tracking (MPPT) algorithm for photovoltaic (PV) systems is presented. Two membership function (MF) design methodologies that can improve the effectiveness of the proposed asymmetrical FLC-based MPPT methods are then proposed. The first method can quickly determine the input MF setting values via the power–voltage (P–V) curve of solar cells under standard test conditions (STC). The second method uses the particle swarm optimization (PSO) technique to optimize the input MF setting values. Because the PSO approach must target and optimize a cost function, a cost function design methodology that meets the performance requirements of practical photovoltaic generation systems (PGSs) is also proposed. According to the simulated and experimental results, the proposed asymmetrical FLC-based MPPT method has the highest fitness value; therefore, it can successfully address the tracking speed/tracking accuracy dilemma compared with the traditional perturb and observe (P&O) and symmetrical FLC-based MPPT algorithms. Compared to the conventional FLC-based MPPT method, the obtained optimal asymmetrical FLC-based MPPT can improve the transient time and the MPPT tracking accuracy by 25.8% and 0.98% under STC, respectively.

  15. Development of a VVER-1000 core loading pattern optimization program based on perturbation theory

    International Nuclear Information System (INIS)

    Hosseini, Mohammad; Vosoughi, Naser

    2012-01-01

    Highlights: ► We use perturbation theory to find an optimum fuel loading pattern in a VVER-1000. ► We provide software for in-core fuel management optimization. ► We consider two objectives for our method (perturbation theory). ► We show that the perturbation theory method is very fast and accurate for optimization. - Abstract: In-core nuclear fuel management is one of the most important concerns in the design of nuclear reactors. Two main goals in core fuel loading pattern design optimization are maximizing the core effective multiplication factor in order to extract the maximum energy, and keeping the local power peaking factor lower than a predetermined value to maintain fuel integrity. Because of the numerous possible patterns of fuel assemblies in the reactor core, finding the best configuration is both important and challenging. Different techniques for optimization of the fuel loading pattern in the reactor core have been introduced by now. In this study, software was programmed in the C language to find an ordering of the fuel loading pattern of a VVER-1000 reactor core using perturbation theory. Our optimization method is based on minimizing the radial power peaking factor. The optimization process is launched by considering an initial loading pattern and the specifications of the fuel assemblies, which are given as the input of the software. The results for a typical VVER-1000 reactor reveal that the method can reach a pattern with an allowable radial power peaking factor while also increasing the cycle length by 1.1 days.

  16. Verification of an optimized condition for low residual stress employed water-shower cooling during welding in austenitic stainless steel plates

    International Nuclear Information System (INIS)

    Yanagida, N.; Enomoto, K.; Anzai, H.

    2004-01-01

    To reduce tensile residual stress in a welded region, we have developed a new cooling method that uses a water-shower behind the welding torch. When this method is applied to the welding of austenitic stainless steel, the welding and cooling conditions mainly determine how much the residual stress can be reduced. To optimize these conditions, we first used a robust design method to determine the effects of the preheating temperature, the heat input quantity, and the water-shower area on the residual stress, and found that, to decrease the tensile residual stress, the preheating temperature should be high, the heat input low, and the water-shower area large. To confirm the effectiveness of these optimized conditions, the residual stresses under optimized or non-optimized conditions were measured experimentally. It was found that the residual stresses were tensile under the non-optimized conditions, but compressive under the optimized ones. These measurements agree well with the 3D-FEM analyses. It can therefore be concluded that the optimized conditions are valid and appropriate for reducing residual stress in an austenitic stainless-steel weld. (orig.)

  17. Social and Linguistic Input in Low-Income African American Mother-Child Dyads from 1 Month through 2 Years: Relations to Vocabulary Development

    Science.gov (United States)

    Shimpi, Priya M.; Fedewa, Alicia; Hans, Sydney

    2012-01-01

    The relation of social and linguistic input measures to early vocabulary development was examined in 30 low-income African American mother-infant pairs. Observations were conducted when the child was 0 years, 1 month (0;1), 0;4, 0;8, 1;0, 1;6, and 2;0. Maternal input was coded for word types and tokens, contingent responsiveness, and…

  18. Applied Gaussian Process in Optimizing Unburned Carbon Content in Fly Ash for Boiler Combustion

    Directory of Open Access Journals (Sweden)

    Chunlin Wang

    2017-01-01

    Full Text Available Recently, the Gaussian Process (GP) has attracted considerable attention from industry. This article focuses on the application of coal-fired boiler combustion and uses a GP to design a strategy for reducing the Unburned Carbon Content in Fly Ash (UCC-FA), which is the most important indicator of boiler combustion efficiency. Rather than dealing with the complicated physical mechanisms, building a data-driven model such as a GP is an effective approach for this problem. Firstly, a GP is used to model the relationship between the UCC-FA and the boiler combustion operation parameters. The hyperparameters of the GP model are optimized via a Genetic Algorithm (GA). Then, serving as the objective of another GA framework, the UCC-FA predicted by the GP model is used to search for the optimal operation plan for the boiler combustion. Based on 670 sets of real data from a high-capacity tangentially fired boiler, two GP models with 21 and 13 inputs, respectively, are developed. In the experimental results, the model with 21 inputs provides better prediction performance than the other. Using the results from the 21-input model, the UCC-FA decreases from 2.7% to 1.7% via optimizing some of the operational parameters, which is a reasonable achievement for boiler combustion.
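
    A stripped-down version of the GP-plus-search pipeline is sketched below: a Gaussian process regressor is fitted to synthetic operating data, and a random search over admissible settings (a simple stand-in for the GA layer) picks the settings with the lowest predicted UCC-FA. The data, parameter names and kernel are assumptions, not the 670-sample plant data set.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(3)

      # Synthetic history data: 3 operating parameters (e.g. excess air, mill bias,
      # burner tilt) -> unburned carbon in fly ash (%). Purely illustrative.
      X = rng.uniform(0, 1, size=(300, 3))
      ucc = 2.5 - 1.2 * X[:, 0] + 0.8 * (X[:, 1] - 0.4) ** 2 + 0.3 * X[:, 2] + 0.05 * rng.normal(size=300)

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3) + WhiteKernel(1e-2),
                                    normalize_y=True).fit(X, ucc)

      # Random search over admissible settings as a simple stand-in for the GA layer.
      candidates = rng.uniform(0, 1, size=(5000, 3))
      pred = gp.predict(candidates)
      best = candidates[np.argmin(pred)]
      print("suggested settings:", np.round(best, 2), " predicted UCC-FA (%):", round(pred.min(), 2))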

  19. New hybrid multivariate analysis approach to optimize multiple response surfaces considering correlations in both inputs and outputs

    OpenAIRE

    Hejazi, Taha Hossein; Amirkabir University of Technology - Iran; Seyyed-Esfahani, Mirmehdi; Amirkabir University of Technology - Iran; Ramezani, Majid; Amirkabir University of Technology - Iran

    2014-01-01

    Quality control in industrial and service systems requires the correct setting of input factors so that the outputs achieve desirable characteristics at minimum cost. There is often more than one input and output in such systems. Response surface methodology, in its multiple-variable forms, is one of the most widely applied methods for estimating and improving the quality characteristics of products with respect to control factors. When there is some degree of correlation among the variables, the exi...

  20. Service Operations Optimization: Recent Development in Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Bin Shen

    2015-01-01

    Full Text Available Services are the key to success in operations management. Designing effective strategies via optimization techniques is a fundamental and important condition for performance improvement in service operations (SOs) management. In this paper, we mainly focus on investigating SOs optimization in the area of supply chain management, which creates the greatest business value. Specifically, we study the recent development of SOs optimization associated with supply chains by categorizing it into four different industries (i.e., the e-commerce industry, consumer service industry, public sector, and fashion industry) and four SOs features (i.e., advertising, channel coordination, pricing, and inventory). Moreover, we conduct a technical review of the above industries/topics and typical optimization models. The classical optimization approaches for SOs management in supply chains are presented. The managerial implications of SOs in supply chains are discussed.

  1. On a multi-channel transportation loss system with controlled input and controlled service

    Directory of Open Access Journals (Sweden)

    Jewgeni Dshalalow

    1987-01-01

    Full Text Available A multi-channel loss queueing system is investigated. The input stream is a controlled point process. The service in each of the m parallel channels depends on the state of the system at certain moments of time when the input and service may be controlled. To obtain explicitly the limiting distribution of the main process (Zt), the number of busy channels, in equilibrium, an auxiliary three-dimensional process with two additional components (one of them a semi-Markov process) is treated as a semi-regenerative process. An optimization problem is discussed. Simple expressions for an objective function are derived.

  2. Input-output supervisor

    International Nuclear Information System (INIS)

    Dupuy, R.

    1970-01-01

    The input-output supervisor is the program which monitors the flow of information between core storage and the peripheral equipment of a computer. This work is composed of three parts: 1 - Study of a generalized input-output supervisor. With some modifications, it resembles most of the input-output supervisors now running on computers. 2 - Application of this theory to a magnetic drum. 3 - Hardware requirements for time-sharing. (author) [fr

  3. Input data required for specific performance assessment codes

    International Nuclear Information System (INIS)

    Seitz, R.R.; Garcia, R.S.; Starmer, R.J.; Dicke, C.A.; Leonard, P.R.; Maheras, S.J.; Rood, A.S.; Smith, R.W.

    1992-02-01

    The Department of Energy's National Low-Level Waste Management Program at the Idaho National Engineering Laboratory generated this report on input data requirements for computer codes to assist States and compacts in their performance assessments. This report gives generators, developers, operators, and users some guidelines on what input data is required to satisfy 22 common performance assessment codes. Each of the codes is summarized and a matrix table is provided to allow comparison of the various input required by the codes. This report does not determine or recommend which codes are preferable

  4. Development of Earthquake Ground Motion Input for Preclosure Seismic Design and Postclosure Performance Assessment of a Geologic Repository at Yucca Mountain, NV

    International Nuclear Information System (INIS)

    I. Wong

    2004-01-01

    This report describes a site-response model and its implementation for developing earthquake ground motion input for preclosure seismic design and postclosure assessment of the proposed geologic repository at Yucca Mountain, Nevada. The model implements a random-vibration theory (RVT), one-dimensional (1D) equivalent-linear approach to calculate site response effects on ground motions. The model provides results in terms of spectral acceleration including peak ground acceleration, peak ground velocity, and dynamically-induced strains as a function of depth. In addition to documenting and validating this model for use in the Yucca Mountain Project, this report also describes the development of model inputs, implementation of the model, its results, and the development of earthquake time history inputs based on the model results. The purpose of the site-response ground motion model is to incorporate the effects on earthquake ground motions of (1) the approximately 300 m of rock above the emplacement levels beneath Yucca Mountain and (2) soil and rock beneath the site of the Surface Facilities Area. A previously performed probabilistic seismic hazard analysis (PSHA) (CRWMS M and O 1998a [DIRS 103731]) estimated ground motions at a reference rock outcrop for the Yucca Mountain site (Point A), but those results do not include these site response effects. Thus, the additional step of applying the site-response ground motion model is required to develop ground motion inputs that are used for preclosure and postclosure purposes

  5. Development of Earthquake Ground Motion Input for Preclosure Seismic Design and Postclosure Performance Assessment of a Geologic Repository at Yucca Mountain, NV

    Energy Technology Data Exchange (ETDEWEB)

    I. Wong

    2004-11-05

    This report describes a site-response model and its implementation for developing earthquake ground motion input for preclosure seismic design and postclosure assessment of the proposed geologic repository at Yucca Mountain, Nevada. The model implements a random-vibration theory (RVT), one-dimensional (1D) equivalent-linear approach to calculate site response effects on ground motions. The model provides results in terms of spectral acceleration including peak ground acceleration, peak ground velocity, and dynamically-induced strains as a function of depth. In addition to documenting and validating this model for use in the Yucca Mountain Project, this report also describes the development of model inputs, implementation of the model, its results, and the development of earthquake time history inputs based on the model results. The purpose of the site-response ground motion model is to incorporate the effects on earthquake ground motions of (1) the approximately 300 m of rock above the emplacement levels beneath Yucca Mountain and (2) soil and rock beneath the site of the Surface Facilities Area. A previously performed probabilistic seismic hazard analysis (PSHA) (CRWMS M&O 1998a [DIRS 103731]) estimated ground motions at a reference rock outcrop for the Yucca Mountain site (Point A), but those results do not include these site response effects. Thus, the additional step of applying the site-response ground motion model is required to develop ground motion inputs that are used for preclosure and postclosure purposes.

  6. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  7. SU-F-J-105: Towards a Novel Treatment Planning Pipeline Delivering Pareto- Optimal Plans While Enabling Inter- and Intrafraction Plan Adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Kontaxis, C; Bol, G; Lagendijk, J; Raaymakers, B [University Medical Center Utrecht, Utrecht (Netherlands); Breedveld, S; Sharfo, A; Heijmen, B [Erasmus University Medical Center Rotterdam, Rotterdam (Netherlands)

    2016-06-15

    Purpose: To develop a new IMRT treatment planning methodology suitable for the new generation of MR-linear accelerator machines. The pipeline is able to deliver Pareto-optimal plans and can be utilized for conventional treatments as well as for inter- and intrafraction plan adaptation based on real-time MR-data. Methods: A Pareto-optimal plan is generated using the automated multicriterial optimization approach Erasmus-iCycle. The resulting dose distribution is used as input to the second part of the pipeline, an iterative process which generates deliverable segments that target the latest anatomical state and gradually converges to the prescribed dose. This process continues until a certain percentage of the dose has been delivered. Under a conventional treatment, a Segment Weight Optimization (SWO) is then performed to ensure convergence to the prescribed dose. In the case of inter- and intrafraction adaptation, post-processing steps like SWO cannot be employed due to the changing anatomy. This is instead addressed by transferring the missing/excess dose to the input of the subsequent fraction. In this work, the resulting plans were delivered on a Delta4 phantom as a final Quality Assurance test. Results: A conventional static SWO IMRT plan was generated for two prostate cases. The sequencer faithfully reproduced the input dose for all volumes of interest. For the two cases the mean relative dose difference of the PTV between the ideal input and sequenced dose was 0.1% and −0.02% respectively. Both plans were delivered on a Delta4 phantom and passed the clinical Quality Assurance procedures by achieving 100% pass rate at a 3%/3mm gamma analysis. Conclusion: We have developed a new sequencing methodology capable of online plan adaptation. In this work, we extended the pipeline to support Pareto-optimal input and clinically validated that it can accurately achieve these ideal distributions, while its flexible design enables inter- and intrafraction plan

  8. SU-F-J-105: Towards a Novel Treatment Planning Pipeline Delivering Pareto-Optimal Plans While Enabling Inter- and Intrafraction Plan Adaptation

    International Nuclear Information System (INIS)

    Kontaxis, C; Bol, G; Lagendijk, J; Raaymakers, B; Breedveld, S; Sharfo, A; Heijmen, B

    2016-01-01

    Purpose: To develop a new IMRT treatment planning methodology suitable for the new generation of MR-linear accelerator machines. The pipeline is able to deliver Pareto-optimal plans and can be utilized for conventional treatments as well as for inter- and intrafraction plan adaptation based on real-time MR-data. Methods: A Pareto-optimal plan is generated using the automated multicriterial optimization approach Erasmus-iCycle. The resulting dose distribution is used as input to the second part of the pipeline, an iterative process which generates deliverable segments that target the latest anatomical state and gradually converges to the prescribed dose. This process continues until a certain percentage of the dose has been delivered. Under a conventional treatment, a Segment Weight Optimization (SWO) is then performed to ensure convergence to the prescribed dose. In the case of inter- and intrafraction adaptation, post-processing steps like SWO cannot be employed due to the changing anatomy. This is instead addressed by transferring the missing/excess dose to the input of the subsequent fraction. In this work, the resulting plans were delivered on a Delta4 phantom as a final Quality Assurance test. Results: A conventional static SWO IMRT plan was generated for two prostate cases. The sequencer faithfully reproduced the input dose for all volumes of interest. For the two cases the mean relative dose difference of the PTV between the ideal input and sequenced dose was 0.1% and −0.02% respectively. Both plans were delivered on a Delta4 phantom and passed the clinical Quality Assurance procedures by achieving 100% pass rate at a 3%/3mm gamma analysis. Conclusion: We have developed a new sequencing methodology capable of online plan adaptation. In this work, we extended the pipeline to support Pareto-optimal input and clinically validated that it can accurately achieve these ideal distributions, while its flexible design enables inter- and intrafraction plan adaptation.

  9. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
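
    A minimal sketch of the iterative water-filling idea mentioned above, for the simplest case of a single known channel realization (the channel matrix and power budget below are made up; the paper's large-system approximations are not reproduced):

```python
import numpy as np

def water_filling(gains, total_power):
    """Classical water-filling power allocation.

    gains: eigenvalues of H^H H (channel gains per eigenmode)
    total_power: total transmit power budget
    Returns per-mode powers p_k = max(mu - 1/g_k, 0) with sum(p_k) ~= total_power.
    """
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / gains.min()   # bracket the water level mu
    for _ in range(100):                            # bisection on mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - 1.0 / gains, 0.0)

# Hypothetical 4x4 channel realization
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
g = np.linalg.eigvalsh(H.T @ H)
p = water_filling(g, total_power=10.0)
capacity = np.sum(np.log2(1.0 + g * p))
print("powers:", np.round(p, 3), "capacity [bit/s/Hz]:", round(capacity, 3))
```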

  10. Efficient dynamic optimization of logic programs

    Science.gov (United States)

    Laird, Phil

    1992-01-01

    A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.

  11. Pre-processing of input files for the AZTRAN code

    International Nuclear Information System (INIS)

    Vargas E, S.; Ibarra, G.

    2017-09-01

    The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Because generating an input file for the code is complex, a script based on the D language was developed to make its preparation easier. The script uses a new input file format whose specific cards are divided into two blocks, mandatory cards and optional cards. It also pre-processes the input file to identify possible errors within it and includes an image generator for the specific problem based on the Python interpreter. (Author)
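
    A minimal sketch of the kind of pre-processing check such a script performs, written here in Python rather than D; the card names and rules are hypothetical illustrations, not the actual AZTRAN input format:

```python
# Hypothetical card names; the real AZTRAN format is not reproduced here.
MANDATORY_CARDS = {"GEOMETRY", "MATERIALS", "QUADRATURE"}
OPTIONAL_CARDS = {"EDIT", "PLOT"}

def check_input(text):
    """Return a list of error messages for a card-based input file."""
    errors = []
    seen = set()
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.split("#")[0].strip()      # drop comments and blank lines
        if not line:
            continue
        card = line.split()[0].upper()
        if card not in MANDATORY_CARDS | OPTIONAL_CARDS:
            errors.append(f"line {lineno}: unknown card '{card}'")
        elif card in seen:
            errors.append(f"line {lineno}: duplicate card '{card}'")
        seen.add(card)
    for card in sorted(MANDATORY_CARDS - seen):
        errors.append(f"missing mandatory card '{card}'")
    return errors

print(check_input("GEOMETRY slab 10\nEDIT flux\nEDIT flux\n"))
```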

  12. Anterior Cingulate Cortex Input to the Claustrum Is Required for Top-Down Action Control

    Directory of Open Access Journals (Sweden)

    Michael G. White

    2018-01-01

    Full Text Available Summary: Cognitive abilities, such as volitional attention, operate under top-down, executive frontal cortical control of hierarchically lower structures. The circuit mechanisms underlying this process are unresolved. The claustrum possesses interconnectivity with many cortical areas and, thus, is hypothesized to orchestrate the cortical mantle for top-down control. Whether the claustrum receives top-down input and how this input may be processed by the claustrum have yet to be formally tested, however. We reveal that a rich anterior cingulate cortex (ACC input to the claustrum encodes a preparatory top-down information signal on a five-choice response assay that is necessary for optimal task performance. We further show that ACC input monosynaptically targets claustrum inhibitory interneurons and spiny glutamatergic projection neurons, the latter of which amplify ACC input in a manner that is powerfully constrained by claustrum inhibitory microcircuitry. These results demonstrate ACC input to the claustrum is critical for top-down control guiding action. : White et al. show that anterior cingulate cortex (ACC input to the claustrum encodes a top-down preparatory signal on a 5-choice response assay that is critical for task performance. Claustrum microcircuitry amplifies top-down ACC input in a frequency-dependent manner for eventual propagation to the cortex for cognitive control of action. Keywords: 5CSRTT, optogenetics, fiber photometry, microcircuit, attention, bottom-up, sensory cortices, motor cortices

  13. A homotopy algorithm for digital optimal projection control GASD-HADOC

    Science.gov (United States)

    Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.

    1993-01-01

    The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-loop solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures. This deficiency motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.

  14. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    Science.gov (United States)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
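
    The first-order moment propagation underlying this approach can be sketched as follows; the output function and its gradient below are simple stand-ins for a CFD code and its sensitivity derivatives:

```python
import numpy as np

def first_order_moments(f, grad_f, mu, sigma):
    """First-order approximation of mean and variance of f(x) for
    independent, normally distributed inputs x ~ N(mu, diag(sigma**2))."""
    mean = f(mu)                                  # E[f] ~= f(mu)
    g = np.asarray(grad_f(mu))
    var = np.sum((g * np.asarray(sigma)) ** 2)    # Var[f] ~= sum (df/dx_i)^2 sigma_i^2
    return mean, var

# Stand-in "output" and its analytic gradient (a CFD code would supply these
# via sensitivity derivatives).
f = lambda x: x[0] ** 2 + 3.0 * x[1]
grad_f = lambda x: np.array([2.0 * x[0], 3.0])

mu = np.array([1.0, 2.0])
sigma = np.array([0.1, 0.2])
m, v = first_order_moments(f, grad_f, mu, sigma)

# A probabilistic constraint "f <= limit with probability >= target" can then
# be cast as  m + k * sqrt(v) <= limit, with k set from the target probability.
k = 2.0   # roughly a 97.7% one-sided normal bound
print("mean:", m, "std:", round(float(np.sqrt(v)), 4),
      "constraint value:", round(float(m + k * np.sqrt(v)), 4))
```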

  15. Input or intimacy

    Directory of Open Access Journals (Sweden)

    Judit Navracsics

    2014-01-01

    Full Text Available According to the critical period hypothesis, the earlier the acquisition of a second language starts, the better. Owing to the plasticity of the brain, up until a certain age a second language can be acquired successfully according to this view. Early second language learners are commonly said to have an advantage over later ones especially in phonetic/phonological acquisition. Native-like pronunciation is said to be most likely to be achieved by young learners. However, there is evidence of accentfree speech in second languages learnt after puberty as well. Occasionally, on the other hand, a nonnative accent may appear even in early second (or third language acquisition. Cross-linguistic influences are natural in multilingual development, and we would expect the dominant language to have an impact on the weaker one(s. The dominant language is usually the one that provides the largest amount of input for the child. But is it always the amount that counts? Perhaps sometimes other factors, such as emotions, ome into play? In this paper, data obtained from an EnglishPersian-Hungarian trilingual pair of siblings (under age 4 and 3 respectively is analyzed, with a special focus on cross-linguistic influences at the phonetic/phonological levels. It will be shown that beyond the amount of input there are more important factors that trigger interference in multilingual development.

  16. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.

    Science.gov (United States)

    Kiumarsi, Bahare; Lewis, Frank L

    2015-01-01

    This paper presents a partially model-free adaptive optimal control solution to the deterministic nonlinear discrete-time (DT) tracking control problem in the presence of input constraints. The tracking error dynamics and reference trajectory dynamics are first combined to form an augmented system. Then, a new discounted performance function based on the augmented system is presented for the optimal nonlinear tracking problem. In contrast to the standard solution, which finds the feedforward and feedback terms of the control input separately, the minimization of the proposed discounted performance function gives both feedback and feedforward parts of the control input simultaneously. This enables us to encode the input constraints into the optimization problem using a nonquadratic performance function. The DT tracking Bellman equation and tracking Hamilton-Jacobi-Bellman (HJB) are derived. An actor-critic-based reinforcement learning algorithm is used to learn the solution to the tracking HJB equation online without requiring knowledge of the system drift dynamics. That is, two neural networks (NNs), namely, actor NN and critic NN, are tuned online and simultaneously to generate the optimal bounded control policy. A simulation example is given to show the effectiveness of the proposed method.
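
    For orientation, a model-based baseline for the discounted LQ tracking formulation on the augmented state can be sketched as below; this is not the model-free actor-critic of the paper, and all system matrices are made up:

```python
import numpy as np

# Augmented state X = [tracking error; reference], discounted quadratic cost.
# Value iteration on the discounted Riccati recursion yields a single gain
# that combines feedback and feedforward action, mirroring the formulation
# described above. The paper itself learns this solution online with actor
# and critic neural networks instead of using the model.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.0],
              [0.0, 0.0, 1.0]])      # augmented dynamics (error + reference)
B = np.array([[0.0], [0.5], [0.0]])  # input acts on the error dynamics only
Q = np.diag([10.0, 1.0, 0.0])        # penalize tracking error, not the reference
R = np.array([[1.0]])
gamma = 0.9                          # discount factor of the performance index

P = np.zeros_like(A)
for _ in range(500):                 # value iteration on the discounted Riccati equation
    S = R + gamma * B.T @ P @ B
    P = Q + gamma * A.T @ P @ A - gamma**2 * A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)

K = gamma * np.linalg.solve(R + gamma * B.T @ P @ B, B.T @ P @ A)
print("state-feedback gain on the augmented state:\n", np.round(K, 4))
# u_k = -K @ X_k gives feedback and feedforward terms in one gain.
```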

  17. Critical overview of the development of the optimization requirement and its implementation

    International Nuclear Information System (INIS)

    Gonzalez, A.J.

    1988-01-01

    This paper is intended to provide a critical overview of the development of the optimization requirement of the system of dose limitation recommended by the International Commission on Radiological Protection (ICRP), as well as of its implementation. The concept of optimization began to evolve in the mid-1950s and was formally introduced as a requirement for radiation protection in 1978. Recommendations on its practical implementation have been available for five years. After such a long evolution, it seems reasonable to make a critical assessment of the development of the optimization concept, and this summary paper is intended to provide such an assessment. It does not include a description of the requirement or of any method for implementing it, since it is assumed that optimization is familiar by now. The paper concentrates rather on misunderstandings of and achievements owed to optimization and explores some of their underlying causes, the intention being to draw lessons from past experience and to apply them to future developments in this area. The paper presents an outlook on some remaining policy issues that are still pending solution, as well as some suggestions on prioritizing the implementation of optimization and on standardizing the optimization of protection.

  18. A multiple ship routing and speed optimization problem under time, cost and environmental objectives

    DEFF Research Database (Denmark)

    Wen, M.; Pacino, Dario; Kontovas, C.A.

    2017-01-01

    The purpose of this paper is to investigate a multiple ship routing and speed optimization problem under time, cost and environmental objectives. A branch and price algorithm as well as a constraint programming model are developed that consider (a) fuel consumption as a function of payload, (b) fuel price as an explicit input, (c) freight rate as an input, and (d) in-transit cargo inventory costs. The alternative objective functions are minimum total trip duration, minimum total cost and minimum emissions. Computational experience with the algorithm is reported on a variety of scenarios.

  19. The Czech longitudinal study of optimal development

    Czech Academy of Sciences Publication Activity Database

    Kebza, V.; Šolcová, Iva; Kodl, M.; Kernová, V.

    2012-01-01

    Roč. 47, Suppl. 1 (2012), s. 266-266 ISSN 0020-7594. [International Congress of Psychology /30./. 22.07.2012-27.07.2012, Cape Town] R&D Projects: GA ČR GAP407/10/2410 Institutional support: RVO:68081740 Keywords : optimal development * Prague longitudinal study Subject RIV: AN - Psychology

  20. Development of inhibitory synaptic inputs on layer 2/3 pyramidal neurons in the rat medial prefrontal cortex

    KAUST Repository

    Virtanen, Mari A.; Lacoh, Claudia Marvine; Fiumelli, Hubert; Kosel, Markus; Tyagarajan, Shiva; de Roo, Mathias; Vutskits, Laszlo

    2018-01-01

    Inhibitory control of pyramidal neurons plays a major role in governing the excitability in the brain. While spatial mapping of inhibitory inputs onto pyramidal neurons would provide important structural data on neuronal signaling, studying their distribution at the single cell level is difficult due to the lack of easily identifiable anatomical proxies. Here, we describe an approach where in utero electroporation of a plasmid encoding for fluorescently tagged gephyrin into the precursors of pyramidal cells along with iontophoretic injection of Lucifer Yellow can reliably and specifically detect GABAergic synapses on the dendritic arbour of single pyramidal neurons. Using this technique and focusing on the basal dendritic arbour of layer 2/3 pyramidal cells of the medial prefrontal cortex, we demonstrate an intense development of GABAergic inputs onto these cells between postnatal days 10 and 20. While the spatial distribution of gephyrin clusters was not affected by the distance from the cell body at postnatal day 10, we found that distal dendritic segments appeared to have a higher gephyrin density at later developmental stages. We also show a transient increase around postnatal day 20 in the percentage of spines that are carrying a gephyrin cluster, indicative of innervation by a GABAergic terminal. Since the precise spatial arrangement of synaptic inputs is an important determinant of neuronal responses, we believe that the method described in this work may allow a better understanding of how inhibition settles together with excitation, and serve as a basis for further modelling studies focusing on the geometry of dendritic inhibition during development.

  1. Development of inhibitory synaptic inputs on layer 2/3 pyramidal neurons in the rat medial prefrontal cortex

    KAUST Repository

    Virtanen, Mari A.

    2018-01-10

    Inhibitory control of pyramidal neurons plays a major role in governing the excitability in the brain. While spatial mapping of inhibitory inputs onto pyramidal neurons would provide important structural data on neuronal signaling, studying their distribution at the single cell level is difficult due to the lack of easily identifiable anatomical proxies. Here, we describe an approach where in utero electroporation of a plasmid encoding for fluorescently tagged gephyrin into the precursors of pyramidal cells along with iontophoretic injection of Lucifer Yellow can reliably and specifically detect GABAergic synapses on the dendritic arbour of single pyramidal neurons. Using this technique and focusing on the basal dendritic arbour of layer 2/3 pyramidal cells of the medial prefrontal cortex, we demonstrate an intense development of GABAergic inputs onto these cells between postnatal days 10 and 20. While the spatial distribution of gephyrin clusters was not affected by the distance from the cell body at postnatal day 10, we found that distal dendritic segments appeared to have a higher gephyrin density at later developmental stages. We also show a transient increase around postnatal day 20 in the percentage of spines that are carrying a gephyrin cluster, indicative of innervation by a GABAergic terminal. Since the precise spatial arrangement of synaptic inputs is an important determinant of neuronal responses, we believe that the method described in this work may allow a better understanding of how inhibition settles together with excitation, and serve as a basis for further modelling studies focusing on the geometry of dendritic inhibition during development.

  2. Numerical optimization of alignment reproducibility for customizable surgical guides.

    Science.gov (United States)

    Kroes, Thomas; Valstar, Edward; Eisemann, Elmar

    2015-10-01

    Computer-assisted orthopedic surgery aims at minimizing invasiveness, postoperative pain, and morbidity with computer-assisted preoperative planning and intra-operative guidance techniques, of which camera-based navigation and patient-specific templates (PST) are the most common. PSTs are one-time templates that guide the surgeon initially in cutting slits or drilling holes. This method can be extended to reusable and customizable surgical guides (CSG), which can be adapted to the patients' bone. Determining the right set of CSG input parameters by hand is a challenging task, given the vast amount of input parameter combinations and the complex physical interaction between the PST/CSG and the bone. This paper introduces a novel algorithm to solve the problem of choosing the right set of input parameters. Our approach predicts how well a CSG instance is able to reproduce the planned alignment based on a physical simulation and uses a genetic optimization algorithm to determine optimal configurations. We validate our technique with a prototype of a pin-based CSG and nine rapid prototyped distal femora. The proposed optimization technique has been compared to manual optimization by experts, as well as participants with domain experience. Using the optimization technique, the alignment errors remained within practical boundaries of 1.2 mm translation and [Formula: see text] rotation error. In all cases, the proposed method outperformed manual optimization. Manually optimizing CSG parameters turns out to be a counterintuitive task. Even after training, subjects with and without anatomical background fail in choosing appropriate CSG configurations. Our optimization algorithm ensures that the CSG is configured correctly, and we could demonstrate that the intended alignment of the CSG is accurately reproduced on all tested bone geometries.

  3. Modeling and Optimization of a Cooling Tower-Assisted Heat Pump System

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wei

    2017-05-01

    Full Text Available To minimize the total energy consumption of a cooling tower-assisted heat pump (CTAHP system in cooling mode, a model-based control strategy with hybrid optimization algorithm for the system is presented in this paper. An existing experimental device, which mainly contains a closed wet cooling tower with counter flow construction, a condenser water loop and a water-to-water heat pump unit, is selected as the study object. Theoretical and empirical models of the related components and their interactions are developed. The four variables, viz. desired cooling load, ambient wet-bulb temperature, temperature and flow rate of chilled water at the inlet of evaporator, are set to independent variables. The system power consumption can be minimized by optimizing input powers of cooling tower fan, spray water pump, condenser water pump and compressor. The optimal input power of spray water pump is determined experimentally. Implemented on MATLAB, a hybrid optimization algorithm, which combines the Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS algorithm with the greedy diffusion search (GDS algorithm, is incorporated to solve the minimization problem of energy consumption and predict the system’s optimal set-points under quasi-steady-state conditions. The integrated simulation tool is validated against experimental data. The results obtained demonstrate the proposed operation strategy is reliable, and can save energy by 20.8% as compared to an uncontrolled system under certain testing conditions.
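
    A minimal sketch of the set-point optimization idea, using only the L-BFGS part of the hybrid algorithm via SciPy and hypothetical placeholder component models rather than the validated models of the paper:

```python
import numpy as np
from scipy.optimize import minimize

Q_required = 50.0   # kW, desired cooling load (arbitrary value)

def cooling_capacity(x):
    fan, pump, comp = x
    # Made-up diminishing-returns capacity model for fan, condenser pump
    # and compressor input powers; not the paper's component models.
    return 12.0 * np.log1p(fan) + 8.0 * np.log1p(pump) + 20.0 * np.log1p(comp)

def objective(x):
    # Total input power plus a smooth penalty on any cooling shortfall.
    shortfall = np.maximum(Q_required - cooling_capacity(x), 0.0)
    return np.sum(x) + 1e3 * shortfall**2

x0 = np.array([2.0, 2.0, 5.0])                     # initial powers [kW]
bounds = [(0.1, 5.0), (0.1, 5.0), (0.5, 15.0)]     # equipment limits
res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
print("set-points [kW]:", np.round(res.x, 2),
      "total power [kW]:", round(float(res.x.sum()), 2))
```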

  4. Surrogate-based optimization of hydraulic fracturing in pre-existing fracture networks

    Science.gov (United States)

    Chen, Mingjie; Sun, Yunwei; Fu, Pengcheng; Carrigan, Charles R.; Lu, Zhiming; Tong, Charles H.; Buscheck, Thomas A.

    2013-08-01

    Hydraulic fracturing has been used widely to stimulate production of oil, natural gas, and geothermal energy in formations with low natural permeability. Numerical optimization of fracture stimulation often requires a large number of evaluations of objective functions and constraints from forward hydraulic fracturing models, which are computationally expensive and even prohibitive in some situations. Moreover, there are a variety of uncertainties associated with the pre-existing fracture distributions and rock mechanical properties, which affect the optimized decisions for hydraulic fracturing. In this study, a surrogate-based approach is developed for efficient optimization of hydraulic fracturing well design in the presence of natural-system uncertainties. The fractal dimension is derived from the simulated fracturing network as the objective for maximizing energy recovery sweep efficiency. The surrogate model, which is constructed using training data from high-fidelity fracturing models for mapping the relationship between uncertain input parameters and the fractal dimension, provides fast approximation of the objective functions and constraints. A suite of surrogate models constructed using different fitting methods is evaluated and validated for fast predictions. Global sensitivity analysis is conducted to gain insights into the impact of the input variables on the output of interest, and further used for parameter screening. The high efficiency of the surrogate-based approach is demonstrated for three optimization scenarios with different and uncertain ambient conditions. Our results suggest the critical importance of considering uncertain pre-existing fracture networks in optimization studies of hydraulic fracturing.
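
    A minimal sketch of the surrogate-based workflow, assuming a Gaussian-process surrogate and a made-up two-variable test function in place of the high-fidelity fracturing simulator:

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# The expensive simulator is replaced by a made-up function of two design
# variables (e.g. injection rate, fluid viscosity); its output stands in for
# the fractal dimension of the stimulated fracture network.
def expensive_simulator(x):
    rate, visc = x
    return -((rate - 0.6) ** 2 + (visc - 0.3) ** 2) + 0.05 * np.sin(8 * rate)

rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 1.0, size=(30, 2))            # training designs
y_train = np.array([expensive_simulator(x) for x in X_train])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X_train, y_train)

# Optimize the cheap surrogate prediction instead of the costly simulator.
res = differential_evolution(lambda x: -gp.predict(x.reshape(1, -1))[0],
                             bounds=[(0.0, 1.0), (0.0, 1.0)], seed=1)
print("surrogate optimum at:", np.round(res.x, 3),
      "predicted objective:", round(-res.fun, 4))
```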

  5. Development and Optimization of controlled drug release ...

    African Journals Online (AJOL)

    The aim of this study is to develop and optimize an osmotically controlled drug delivery system of diclofenac sodium. Osmotically controlled oral drug delivery systems utilize osmotic pressure for controlled delivery of active drugs. Drug delivery from these systems, to a large extent, is independent of the physiological factors ...

  6. Enhancement of information transmission with stochastic resonance in hippocampal CA1 neuron models: effects of noise input location.

    Science.gov (United States)

    Kawaguchi, Minato; Mino, Hiroyuki; Durand, Dominique M

    2007-01-01

    Stochastic resonance (SR) has been shown to enhance the signal to noise ratio or detection of signals in neurons. It is not yet clear how this effect of SR on the signal to noise ratio affects signal processing in neural networks. In this paper, we investigate the effects of the location of background noise input on information transmission in a hippocampal CA1 neuron model. In the computer simulation, random sub-threshold spike trains (signal) generated by a filtered homogeneous Poisson process were presented repeatedly to the middle point of the main apical branch, while the homogeneous Poisson shot noise (background noise) was applied to a location of the dendrite in the hippocampal CA1 model consisting of the soma with a sodium, a calcium, and five potassium channels. The location of the background noise input was varied along the dendrites to investigate the effects of background noise input location on information transmission. The computer simulation results show that the information rate reached a maximum value for an optimal amplitude of the background noise amplitude. It is also shown that this optimal amplitude of the background noise is independent of the distance between the soma and the noise input location. The results also show that the location of the background noise input does not significantly affect the maximum values of the information rates generated by stochastic resonance.
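
    The stochastic-resonance effect described above can be illustrated with a generic threshold detector rather than the biophysical CA1 model; the signal, threshold and noise amplitudes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 20, 0.001)
signal = 0.8 * np.sin(2 * np.pi * 1.0 * t)   # sub-threshold periodic "signal"
threshold = 1.0                              # detector never fires without noise

def detection_score(noise_amp, n_trials=20):
    """Trial-averaged correlation between the signal and threshold crossings."""
    scores = []
    for _ in range(n_trials):
        noisy = signal + noise_amp * rng.standard_normal(t.size)
        spikes = (noisy > threshold).astype(float)
        scores.append(np.corrcoef(spikes, signal)[0, 1] if spikes.std() > 0 else 0.0)
    return float(np.mean(scores))

# Detection peaks at an intermediate noise amplitude: the resonance effect.
for amp in [0.05, 0.2, 0.5, 1.0, 2.0]:
    print(f"noise amplitude {amp:4.2f} -> detection score {detection_score(amp):.3f}")
```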

  7. Realizing an Optimization Approach Inspired from Piaget’s Theory on Cognitive Development

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2015-09-01

    Full Text Available The objective of this paper is to introduce an artificial intelligence based optimization approach, which is inspired by Piaget's theory on cognitive development. The approach has been designed according to essential processes that an individual may experience while learning something new or improving his/her knowledge. These processes are associated with Piaget's ideas on an individual's cognitive development. The approach expressed in this paper is a simple algorithm employing swarm intelligence oriented tasks in order to overcome single-objective optimization problems. For evaluating the effectiveness of this early version of the algorithm, test operations have been done via some benchmark functions. The obtained results show that the approach/algorithm can be an alternative to the literature in terms of single-objective optimization. The authors have suggested the name Cognitive Development Optimization Algorithm (CoDOA) for the related intelligent optimization approach.

  8. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    Science.gov (United States)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays, quality plays a vital role in all products. Hence, development in manufacturing processes focuses on fabricating composites with high dimensional accuracy while incurring low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Here, two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters. The input variables considered are drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing individual tool parameters. Analysis of Variance (ANOVA) is used to find the significance of individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the most effect on material removal rate and surface roughness, followed by feed rate.
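
    A minimal sketch of the grey relational analysis step, with made-up response values for four runs (surface roughness treated as smaller-the-better, material removal rate as larger-the-better):

```python
import numpy as np

Ra = np.array([3.2, 2.8, 2.1, 2.5])       # surface roughness [um], made-up values
MRR = np.array([120., 150., 135., 160.])  # material removal rate [mm^3/min]

def normalize(x, larger_is_better):
    """Min-max normalization toward the ideal value 1."""
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_coefficient(norm, zeta=0.5):
    """Grey relational coefficient with distinguishing coefficient zeta."""
    delta = 1.0 - norm                     # deviation from the ideal sequence
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

coeffs = np.column_stack([grey_coefficient(normalize(Ra, False)),
                          grey_coefficient(normalize(MRR, True))])
grade = coeffs.mean(axis=1)                # equal weights for both responses
print("grey relational grades:", np.round(grade, 3))
print("best run (1-indexed):", int(np.argmax(grade)) + 1)
```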

  9. Data Mining and Optimization Tools for Developing Engine Parameters Tools

    Science.gov (United States)

    Dhawan, Atam P.

    1998-01-01

    This project was awarded for understanding the problem and developing a plan for Data Mining tools for use in designing and implementing an Engine Condition Monitoring System. Tricia Erhardt and I studied the problem domain for developing an Engine Condition Monitoring system using the sparse and non-standardized datasets to be available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset, which was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop Genetic Algorithm based search programs, which were written in C++ and used to demonstrate the capability of the GA in searching for an optimal solution in noisy datasets. From the study and discussion with NASA LeRC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search.

  10. Originate: PC input processor for origen-S

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1994-01-01

    ORIGINATE is a personal computer program developed at Oak Ridge National Laboratory to serve as a user-friendly interface for the ORIGEN-S isotopic generation and depletion code. It is designed to assist an ORIGEN-S user in preparing an input file for execution of light-water-reactor fuel depletion and decay cases. Output from ORIGINATE is a card-image input file that may be uploaded to a mainframe computer to execute ORIGEN-S in SCALE-4. ORIGINATE features a pull-down menu system that accesses sophisticated data entry screens. The program allows the user to quickly set up an ORIGEN-S input file and perform error checking. This capability increases productivity and decreases the chance of user error. (authors). 6 refs., 3 tabs

  11. Development of 20 kW input power coupler for 1.3 GHz ERL main linac. Component test at 30 kW IOT test stand

    International Nuclear Information System (INIS)

    Sakai, Hiroshi; Umemori, Kensei; Sakanaka, Shogo; Takahashi, Takeshi; Furuya, Takaaki; Shinoe, Kenji; Ishii, Atsushi; Nakamura, Norio; Sawamura, Masaru

    2009-01-01

    We started to develop an input coupler for a 1.3 GHz ERL superconducting cavity. The required input power is about 20 kW for a cavity acceleration field of 20 MV/m and a beam current of 100 mA in energy recovery operation. The input coupler is designed based on the STF-BL input coupler, and some modifications are applied to the design for CW 20 kW power operation. We fabricated input coupler components such as ceramic windows and bellows and carried out the high-power test of the components by using a 30 kW IOT power source and a test stand constructed for the high-power test. In this report, we mainly describe the results of the high-power test of the ceramic window and bellows. (author)

  12. A strategy for optimizing item-pool management

    NARCIS (Netherlands)

    Ariel, A.; van der Linden, Willem J.; Veldkamp, Bernard P.

    2006-01-01

    Item-pool management requires a balancing act between the input of new items into the pool and the output of tests assembled from it. A strategy for optimizing item-pool management is presented that is based on the idea of a periodic update of an optimal blueprint for the item pool to tune item

  13. JAGUAR developer's manual.

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Ethan

    2011-06-01

    JAGUAR (JAva GUi for Applied Research) is a Java software tool providing an advanced text editor and graphical user interface (GUI) to manipulate DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) input specifications. This document focuses on the technical background necessary for a developer to understand JAGUAR.

  14. Design and development of bio-inspired framework for reservoir operation optimization

    Science.gov (United States)

    Asvini, M. Sakthi; Amudha, T.

    2017-12-01

    Frameworks for optimal reservoir operation play an important role in the management of water resources and delivery of economic benefits. Effective utilization and conservation of water from reservoirs helps to manage water deficit periods. The main challenge in reservoir optimization is to design operating rules that can be used to inform real-time decisions on reservoir release. We develop a bio-inspired framework for the optimization of reservoir release to satisfy the diverse needs of various stakeholders. In this work, single-objective optimization and multiobjective optimization problems are formulated using an algorithm known as "strawberry optimization" and tested with actual reservoir data. Results indicate that well planned reservoir operations lead to efficient deployment of the reservoir water with the help of optimal release patterns.

  15. Input frequency and lexical variability in phonological development: a survival analysis of word-initial cluster production.

    Science.gov (United States)

    Ota, Mitsuhiko; Green, Sam J

    2013-06-01

    Although it has been often hypothesized that children learn to produce new sound patterns first in frequently heard words, the available evidence in support of this claim is inconclusive. To re-examine this question, we conducted a survival analysis of word-initial consonant clusters produced by three children in the Providence Corpus (0 ; 11-4 ; 0). The analysis took account of several lexical factors in addition to lexical input frequency, including the age of first production, production frequency, neighborhood density and number of phonemes. The results showed that lexical input frequency was a significant predictor of the age at which the accuracy level of cluster production in each word first reached 80%. The magnitude of the frequency effect differed across cluster types. Our findings indicate that some of the between-word variance found in the development of sound production can indeed be attributed to the frequency of words in the child's ambient language.

  16. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    Directory of Open Access Journals (Sweden)

    A. Chebbi

    2013-10-01

    Full Text Available Based on rainfall intensity-duration-frequency (IDF) curves, fitted in several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km²). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World

  17. BRAIN Journal - Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku Kose

    2016-01-01

    ABSTRACT In the fields which require finding the most appropriate value, optimization became a vital approach to employ effective solutions. With the use of optimization techniques, many different fields in the modern life have found solutions to their real-world based problems. In this context, classical optimization techniques have had an important popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Sc...

  18. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

    Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined closer as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection

  19. Short-Term Forecasting of Electric Loads Using Nonlinear Autoregressive Artificial Neural Networks with Exogenous Vector Inputs

    Directory of Open Access Journals (Sweden)

    Jaime Buitrago

    2017-01-01

    Full Text Available Short-term load forecasting is crucial for the operations planning of an electrical grid. Forecasting the next 24 h of electrical load in a grid allows operators to plan and optimize their resources. The purpose of this study is to develop a more accurate short-term load forecasting method utilizing non-linear autoregressive artificial neural networks (ANN with exogenous multi-variable input (NARX. The proposed implementation of the network is new: the neural network is trained in open-loop using actual load and weather data, and then, the network is placed in closed-loop to generate a forecast using the predicted load as the feedback input. Unlike the existing short-term load forecasting methods using ANNs, the proposed method uses its own output as the input in order to improve the accuracy, thus effectively implementing a feedback loop for the load, making it less dependent on external data. Using the proposed framework, mean absolute percent errors in the forecast in the order of 1% have been achieved, which is a 30% improvement on the average error using feedforward ANNs, ARMAX and state space methods, which can result in large savings by avoiding commissioning of unnecessary power plants. The New England electrical load data are used to train and validate the forecast prediction.
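
    A NARX-style sketch of the open-loop-training / closed-loop-forecasting idea, using synthetic load and temperature data and a scikit-learn MLP as a stand-in for the NARX network of the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic hourly load with a daily cycle and a temperature dependence.
rng = np.random.default_rng(0)
hours = np.arange(24 * 120)                                   # 120 days
temp = 20 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load = (100 + 25 * np.sin(2 * np.pi * (hours - 6) / 24) + 0.8 * temp
        + rng.normal(0, 2, hours.size))

LAGS = 24
X, y = [], []
for k in range(LAGS, hours.size):
    X.append(np.r_[load[k - LAGS:k], temp[k]])                # past load + exogenous input
    y.append(load[k])
X, y = np.array(X), np.array(y)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X[:-24], y[:-24])    # open-loop training

history = list(load[-LAGS - 24:-24])                          # last known day of load
forecast = []
for k in range(24):                                           # closed loop: feed back predictions
    x = np.r_[history[-LAGS:], temp[-24 + k]]
    yhat = model.predict(x.reshape(1, -1))[0]
    forecast.append(yhat)
    history.append(yhat)

mape = np.mean(np.abs((np.array(forecast) - load[-24:]) / load[-24:])) * 100
print(f"24 h closed-loop forecast MAPE: {mape:.1f}%")
```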

  20. The Effectiveness of Visual Input Enhancement on the Noticing and L2 Development of the Spanish Past Tense

    Science.gov (United States)

    Loewen, Shawn; Inceoglu, Solène

    2016-01-01

    Textual manipulation is a common pedagogic tool used to emphasize specific features of a second language (L2) text, thereby facilitating noticing and, ideally, second language development. Visual input enhancement has been used to investigate the effects of highlighting specific grammatical structures in a text. The current study uses a…

  1. Neutron-deuteron scattering calculations with W-matrix representation of the two-body input

    International Nuclear Information System (INIS)

    Bartnik, E.A.; Haberzettl, H.; Januschke, T.; Kerwath, U.; Sandhas, W.

    1987-05-01

    Employing the W-matrix representation of the partial-wave T matrix introduced by Bartnik, Haberzettl, and Sandhas, we show for the example of the Malfliet-Tjon potentials I and III that the single-term separable part of the W-matrix representation, when used as input in three-nucleon neutron-deuteron scattering calculations, is fully capable of reproducing the exact results obtained by Kloet and Tjon. This approximate two-body input not only satisfies the two-body off-shell unitarity relation but, moreover, it also contains a parameter which may be used in optimizing the three-body data. We present numerical evidence that there exists a variational (minimum) principle for the determination of the three-body binding energy which allows one to choose this parameter also in the absence of an exact reference calculation. Our results for neutron-deuteron scattering show that it is precisely this choice of the parameter which provides optimal scattering data. We conclude that the W-matrix approach, despite its simplicity, is a remarkably efficient tool for high-quality three-nucleon calculations. (orig.)

  2. ColloInputGenerator

    DEFF Research Database (Denmark)

    2013-01-01

    This is a very simple program to help you put together input files for use in Gries' (2007) R-based collostruction analysis program. It basically puts together a text file with a frequency list of lexemes in the construction and inserts a column where you can add the corpus frequencies. It requires...... it as input for basic collexeme collostructional analysis (Stefanowitsch & Gries 2003) in Gries' (2007) program. ColloInputGenerator is, in its current state, based on programming commands introduced in Gries (2009). Projected updates: Generation of complete work-ready frequency lists....

  3. Soft Computing Optimizer For Intelligent Control Systems Design: The Structure And Applications

    Directory of Open Access Journals (Sweden)

    Sergey A. Panfilov

    2003-10-01

    Full Text Available Soft Computing Optimizer (SCO), a new software tool for the design of robust intelligent control systems, is described. It is based on the hybrid methodology of soft computing and stochastic simulation. It uses as an input the measured or simulated data about the modeled system. SCO is used to design an optimal fuzzy inference system, which approximates the random behavior of the control object with a certain accuracy. The task of fuzzy inference system construction is reduced to subtasks such as forming the linguistic variables for each input and output variable, creating the rule base, optimizing the rule base, and refining the parameters of the membership functions. Each subtask is solved by a corresponding genetic algorithm (with an appropriate fitness function). The result of SCO application is the design of the Knowledge Base of a Fuzzy Controller, which contains the value information about the developed fuzzy inference system. Such value information can be downloaded into the actual fuzzy controller to perform online fuzzy control. Simulation results of robust fuzzy control of nonlinear dynamic systems and experimental results of application to automotive semi-active suspension control are demonstrated.

  4. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest.

    Science.gov (United States)

    Ma, Suliang; Chen, Mingxuan; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-04-16

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting the fault features and identifying the fault type have become a key issue for ensuring the security and reliability of power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description of Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified based on actual HVCB vibration signals by considering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method with the original feature space reached 93.33% and reached up to 95.56% with the optimized input feature vector of the classifier. This indicates that the feature optimization procedure is successful, and the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods.
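
    A minimal sketch of the random-forest stage with a simple importance-based feature-space reduction; synthetic features stand in for the wavelet packet time-frequency energy rates of real HVCB vibration signals:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for energy-rate feature vectors labeled with six fault classes.
X, y = make_classification(n_samples=600, n_features=16, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy, full feature space:", round(rf.score(X_te, y_te), 3))

# Keep only the most important features (a simple stand-in for the paper's
# feature-space optimization) and retrain on the reduced space.
keep = np.argsort(rf.feature_importances_)[::-1][:6]
rf_opt = RandomForestClassifier(n_estimators=200,
                                random_state=0).fit(X_tr[:, keep], y_tr)
print("accuracy, reduced feature space:", round(rf_opt.score(X_te[:, keep], y_te), 3))
```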

  5. Application of artificial neural network to predict the optimal start time for heating system in building

    International Nuclear Information System (INIS)

    Yang, In-Ho; Yeo, Myoung-Souk; Kim, Kwang-Woo

    2003-01-01

    The artificial neural network (ANN) approach is a generic technique for mapping non-linear relationships between inputs and outputs without knowing the details of these relationships. This paper presents an application of the ANN in a building control system. The objective of this study is to develop an optimized ANN model to determine the optimal start time for a heating system in a building. For this, programs for predicting the room air temperature and for training the ANN model based on backpropagation learning were developed, and learning data for various building conditions were collected through simulations of the room air temperature prediction program using systems of experimental design. Then, the optimized ANN model was obtained through training, and its performance in determining the optimal start time was evaluated.
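
    A minimal sketch of the idea, assuming a made-up building response rule to generate training data and a scikit-learn MLP in place of the backpropagation ANN of the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Learn a mapping from (outdoor temperature, room temperature at decision
# time) to the preheat time needed to reach the setpoint by occupancy.
# The "true" rule below is invented purely to generate training data.
rng = np.random.default_rng(0)
t_out = rng.uniform(-10, 15, 500)           # outdoor temperature [degC]
t_room = rng.uniform(10, 20, 500)           # room temperature at decision time [degC]
setpoint = 21.0
preheat_min = (6.0 * (setpoint - t_room) + 1.5 * (5.0 - t_out).clip(0)
               + rng.normal(0, 3, 500))     # required preheat time [min]

X = np.column_stack([t_out, t_room])
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, preheat_min)

# Optimal start time = occupancy time minus the predicted preheat duration.
pred = float(ann.predict([[-5.0, 14.0]])[0])
print("predicted preheat for -5 degC outside, 14 degC inside:",
      round(pred, 1), "minutes")
```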

  6. Development of optimized segmentation map in dual energy computed tomography

    Science.gov (United States)

    Yamakawa, Keisuke; Ueki, Hironori

    2012-03-01

    Dual energy computed tomography (DECT) has been widely used in clinical practice and has been particularly effective for tissue diagnosis. In DECT, the difference between two attenuation coefficients acquired at two X-ray energies enables tissue segmentation. One problem in conventional DECT is that the segmentation deteriorates in some cases, such as bone removal. This is due to two reasons. Firstly, the segmentation map is optimized without considering the X-ray condition (tube voltage and current). If we consider the tube voltage, it is possible to create an optimized map, but unfortunately we cannot consider the tube current. Secondly, the X-ray condition is not optimized. The condition can be set empirically, but this means that the optimized condition is not used correctly. To solve these problems, we have developed methods for optimizing the map (Method-1) and the condition (Method-2). In Method-1, the map is optimized to minimize segmentation errors. The distribution of the attenuation coefficient is modeled by considering the tube current. In Method-2, the optimized condition is decided to minimize segmentation errors depending on tube voltage-current combinations while keeping the total exposure constant. We evaluated the effectiveness of Method-1 by performing a phantom experiment under the fixed condition, and of Method-2 by performing a phantom experiment under different combinations calculated at constant total exposure. When Method-1 was followed by Method-2, the segmentation error was reduced from 37.8% to 13.5%. These results demonstrate that our developed methods can achieve highly accurate segmentation while keeping the total exposure constant.

  7. A self-adaptive thermal switch array for rapid temperature stabilization under various thermal power inputs

    International Nuclear Information System (INIS)

    Geng, Xiaobao; Patel, Pragnesh; Narain, Amitabh; Meng, Dennis Desheng

    2011-01-01

    A self-adaptive thermal switch array (TSA) based on actuation by low-melting-point alloy droplets is reported to stabilize the temperature of a heat-generating microelectromechanical system (MEMS) device at a predetermined range (i.e. the optimal working temperature of the device) with neither a control circuit nor electrical power consumption. When the temperature is below this range, the TSA stays off and works as a thermal insulator. Therefore, the MEMS device can quickly heat itself up to its optimal working temperature during startup. Once this temperature is reached, TSA is automatically turned on to increase the thermal conductance, working as an effective thermal spreader. As a result, the MEMS device tends to stay at its optimal working temperature without complex thermal management components and the associated parasitic power loss. A prototype TSA was fabricated and characterized to prove the concept. The stabilization temperatures under various power inputs have been studied both experimentally and theoretically. Under the increment of power input from 3.8 to 5.8 W, the temperature of the device increased only by 2.5 °C due to the stabilization effect of TSA

  8. EPICS Input/Output Controller (IOC) application developer's guide. APS Release 3.12

    International Nuclear Information System (INIS)

    Kraimer, M.R.

    1994-11-01

    This document describes the core software that resides in an Input/Output Controller (IOC), one of the major components of EPICS. The basic components are: (OPI) Operator Interface, a UNIX-based workstation which can run various EPICS tools; (IOC) Input/Output Controller, a VME/VXI-based chassis containing a Motorola 68xxx processor, various I/O modules, and VME modules that provide access to other I/O buses such as GPIB; and (LAN) Local Area Network, the communication network which allows the IOCs and OPIs to communicate. EPICS provides a software component, Channel Access, which provides network-transparent communication between a Channel Access client and an arbitrary number of Channel Access servers.

  9. A Method for Preview Vibration Control of Systems Having Forcing Inputs and Rapidly-Switched Dampers

    Science.gov (United States)

    ElBeheiry, E. M.

    1998-07-01

    In a variety of applications, especially in large scale dynamic systems, the mechanization of different vibration control elements in different locations would be decided by limitations placed on the modal vibration of the system and the inherent dynamic coupling between its modes. Also, balancing the quality of vibration control against the economy of producing the whole system is another trade-off, leading to a mix of passive, active and semi-active vibration control elements in one system. Here the term active is limited to externally powered vibration control inputs, and the term semi-active is limited to rapidly switched dampers. In this article, an optimal preview control method is developed for application to dynamic systems having active and semi-active vibration control elements mechanized at different locations in one system. The system is then a piecewise (bilinear) controller in which two independent sets of control inputs appear additively and multiplicatively. Calculus of variations along with the Hamiltonian approach are employed for the derivation of this method. In essence, it requires the active elements to be ideal force generators and the switched dampers to have the property of on-line variation of the damping characteristics to pre-determined limits. As the dampers switch during operation, the whole system's structure changes, and the values of the active forcing inputs are then adapted to match these rapid changes. Strictly speaking, each rapidly switched damper has pre-known upper and lower damping levels and it can take on any in-between value. This in-between value is to be determined by the method as long as the damper tracks a pre-known fully active control demand. In every damping state of each semi-active damper the method provides the optimal matching values of the active forcing inputs. The method is shown to have the feature of solving simple standard matrix equations to obtain closed form solutions. A comprehensive 9-DOF tractor semi-trailer model is used

  10. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    Science.gov (United States)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, and lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best parameters for their turning operations. This new system consists of two stages: modelling and optimization. In modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results quickly.
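
    A bare-bones particle swarm optimization sketch over (cutting speed, feed rate, depth of cut); the response model here is a made-up analytic stand-in for the trained Extreme Learning Machine:

```python
import numpy as np

def roughness(p):
    """Hypothetical surface-roughness model Ra(v, f, d); not the trained ELM."""
    v, f, d = p
    return 0.9 + 4.0 * f ** 1.2 + 0.4 * d - 0.002 * v + 1e-5 * v ** 2

bounds = np.array([[80.0, 250.0], [0.05, 0.40], [0.5, 2.5]])  # v, f, d limits
rng = np.random.default_rng(0)
n_particles, n_iters = 30, 200

pos = rng.uniform(bounds[:, 0], bounds[:, 1], (n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([roughness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([roughness(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best [v, f, d]:", np.round(gbest, 3),
      "predicted Ra:", round(roughness(gbest), 3))
```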

  11. Double input converters for different voltage sources with isolated charger

    Directory of Open Access Journals (Sweden)

    Chalash Sattayarak

    2014-09-01

    Full Text Available This paper presents double-input converters for different voltage input sources with isolated charger coils. The research aims to increase the performance of the battery charger circuit. The circuit accepts input sources at different voltage levels. A microcontroller controls the operating modes of the switches, managing battery charging and automatically switching to discharge mode when the input voltage sources are lost from the system. The experimental results show better charging performance for any switching time period while the voltage input sources operate together. The approach can therefore be used and further developed for present and future battery chargers.

  12. Anthropogenic phosphorus (P) inputs to a river basin and their impacts on P fluxes along its upstream-downstream continuum

    Science.gov (United States)

    Zhang, Wangshou; Swaney, Dennis; Hong, Bongghi; Howarth, Robert

    2017-04-01

    Phosphorus (P) originating from anthropogenic sources as a pollutant of surface waters has been an environmental issue for decades because of the well-known role of P in eutrophication. Human activities, such as food production and rapid urbanization, have been linked to increased P inputs, which are often accompanied by corresponding increases in riverine P export. However, the uneven distribution of anthropogenic P inputs along watersheds from the headwaters to downstream reaches can result in significantly different contributions to the riverine P fluxes of a receiving water body. So far, there is still very little scientific understanding of anthropogenic P inputs and their impacts on riverine flux in river reaches along the upstream-to-downstream continuum. Here, we investigated P budgets in a series of nested watersheds draining into Hongze Lake of China and developed a simple empirical function to describe the relationship between anthropogenic inputs and riverine TP fluxes. The results indicated that an average of 1.1% of anthropogenic P inputs was exported into rivers, with most of the remainder retained in the watershed landscape over the period studied. Fertilizer application was the main contributor of P loading to the lake (55% of total loads), followed by legacy P stock (30%), food and feed P inputs (12%) and non-food P inputs (4%). From 60% to 89% of the riverine TP loads generated at various locations within this basin were ultimately transported into the downstream receiving lake, with an average of 1.86 tons P km⁻¹ retained annually in the main stem of the inflowing river. Our results highlight that in-stream processes can significantly buffer the riverine P loading to the downstream receiving lake. An integrated P management strategy considering the influence of anthropogenic inputs and hydrological interactions is required to assess and optimize P management for protecting fresh waters.
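
    The reported export fraction lends itself to a back-of-the-envelope check. The sketch below reuses the reported averages (a roughly 1.1% export fraction and a delivery ratio inside the reported 60-89% range); the anthropogenic input figure itself is hypothetical and is used only to illustrate the arithmetic.

    ```python
    # Illustrative only: hypothetical net anthropogenic P input for a sub-watershed.
    napi = 800.0                 # assumed anthropogenic P input (kg P km^-2 yr^-1)
    export_fraction = 0.011      # ~1.1% of inputs reach the river (reported average)
    delivery_to_lake = 0.75      # assumed value within the reported 60-89% range

    riverine_flux = napi * export_fraction           # kg P km^-2 yr^-1 entering the river
    lake_loading = riverine_flux * delivery_to_lake  # kg P km^-2 yr^-1 reaching the lake
    print(f"riverine TP flux: {riverine_flux:.1f} kg P km^-2 yr^-1")
    print(f"load delivered to the lake: {lake_loading:.1f} kg P km^-2 yr^-1")
    ```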

  13. Input/Output of ab-initio nuclear structure calculations for improved performance and portability

    International Nuclear Information System (INIS)

    Laghave, Nikhil

    2010-01-01

    Many modern scientific applications rely on highly computation-intensive calculations. However, most applications do not pay as much attention to the role that input/output operations can play in improved performance and portability. Parallelizing the input/output of large files can significantly improve the performance of parallel applications in which sequential I/O is a bottleneck. A proper choice of I/O library also offers scope for making input/output operations portable across different architectures. Thus, the use of parallel I/O libraries for organizing I/O of large data files offers great scope for improving the performance and portability of applications. In particular, sequential I/O has been identified as a bottleneck for the highly scalable MFDn (Many Fermion Dynamics for nuclear structure) code performing ab-initio nuclear structure calculations. We develop interfaces and parallel I/O procedures to use a well-known parallel I/O library in MFDn. As a result, we gain efficient I/O of large datasets along with their portability and ease of use in the down-stream processing. Even in situations where the amount of data to be written is not huge, proper use of input/output operations can boost the performance of scientific applications. Application checkpointing offers enormous performance improvement and flexibility by doing a negligible amount of I/O to disk. Checkpointing saves and resumes application state in such a manner that, in most cases, the application is unaware that there has been an interruption to its execution. This preserves a large amount of previously completed work and allows application execution to continue, so a small amount of I/O provides substantial time savings by offering restart/resume capability. The need for checkpointing in the optimization code NEWUOA has been identified, and checkpoint/restart capability has been implemented in NEWUOA by using simple file I/O.
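
    The checkpoint/restart idea described for NEWUOA can be sketched generically. The snippet below is a minimal illustration using Python's standard library only; the file name and the optimizer step are hypothetical, and this is not the actual implementation in NEWUOA or MFDn.

    ```python
    import os
    import pickle

    CHECKPOINT = "optimizer_state.pkl"   # hypothetical checkpoint file name

    def save_checkpoint(state, path=CHECKPOINT):
        """Write the current iterate to disk; a tiny amount of I/O per checkpoint."""
        with open(path, "wb") as f:
            pickle.dump(state, f)

    def load_checkpoint(path=CHECKPOINT):
        """Resume from the last saved state if a checkpoint exists."""
        if os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f)
        return {"iteration": 0, "x": [1.0, -1.0]}   # fresh start

    state = load_checkpoint()
    for i in range(state["iteration"], 1000):
        state["x"] = [xi - 0.01 * xi for xi in state["x"]]   # stand-in for one optimizer step
        state["iteration"] = i + 1
        if (i + 1) % 50 == 0:            # checkpoint periodically, not every step
            save_checkpoint(state)
    ```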

  14. On robust control of uncertain chaotic systems: a sliding-mode synthesis via chaotic optimization

    International Nuclear Information System (INIS)

    Lu Zhao; Shieh Leangsan; Chen GuanRong

    2003-01-01

    This paper presents a novel Lyapunov-based control approach which utilizes a Lyapunov function of the nominal plant for robust tracking control of general multi-input uncertain nonlinear systems. The difficulty of constructing a control Lyapunov function is alleviated by predefining an optimal sliding mode. Conventional schemes for constructing sliding modes of nonlinear systems stipulate that the system of interest is canonical-transformable or feedback-linearizable. An innovative approach that exploits a chaotic optimizing algorithm is developed, thereby obtaining the optimal sliding manifold for the control purpose. Simulations on the uncertain chaotic Chen system illustrate the effectiveness of the proposed approach
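
    Chaotic optimization, in its generic form, replaces pseudo-random sampling with candidates generated by a chaotic map. The sketch below is a plain logistic-map search on an arbitrary quadratic test function; it only illustrates the idea and is not the specific algorithm or sliding-manifold objective used in the paper.

    ```python
    import numpy as np

    def chaotic_search(objective, lo, hi, n_iter=2000):
        """Generic chaotic optimization: a logistic map drives the candidate sequence."""
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        z = np.linspace(0.123, 0.876, len(lo))     # chaotic states in (0, 1), one per dimension
        best_x, best_val = None, np.inf
        for _ in range(n_iter):
            z = 4.0 * z * (1.0 - z)                # logistic map, fully chaotic at r = 4
            x = lo + z * (hi - lo)                 # map chaotic states into the search box
            val = objective(x)
            if val < best_val:
                best_x, best_val = x.copy(), val
        return best_x, best_val

    # Example: minimize a simple quadratic as a stand-in for a sliding-manifold design cost.
    f = lambda x: float(np.sum((x - np.array([1.0, -2.0])) ** 2))
    x_opt, f_opt = chaotic_search(f, lo=[-5.0, -5.0], hi=[5.0, 5.0])
    print("best point:", x_opt, "objective:", f_opt)
    ```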

  15. Systematic Design Method and Experimental Validation of a 2-DOF Compliant Parallel Mechanism with Excellent Input and Output Decoupling Performances

    Directory of Open Access Journals (Sweden)

    Yao Jiang

    2017-06-01

    Full Text Available The output and input coupling characteristics of a compliant parallel mechanism (CPM) complicate its motion control and challenge its high performance and operational safety. This paper presents a systematic design method for a 2-degrees-of-freedom (DOF) CPM with excellent decoupling performance. A symmetric kinematic structure guarantees a CPM with a completely decoupled output; input coupling is reduced by resorting to a flexure-based decoupler. This work discusses the stiffness design requirement of the decoupler and proposes a compound flexure hinge as its basic structure. Analytical methods are derived to assess the mechanical performance of the CPM in terms of input and output stiffness, motion stroke, input coupling degree, and natural frequency. The CPM’s geometric parameters were optimized to minimize the input coupling while ensuring the key performance indicators at the same time. The optimized CPM’s performance was then evaluated by using a finite element analysis. Finally, a prototype was constructed and experimental validations were carried out to test the performance of the CPM and verify the effectiveness of the design method. The design procedure proposed in this paper is systematic and can be extended to the design of CPMs with other types of motion.

  16. RDS - A systematic approach towards system thermal hydraulics input code development for a comprehensive deterministic safety analysis

    International Nuclear Information System (INIS)

    Salim, Mohd Faiz; Roslan, Ridha; Ibrahim, Mohd Rizal Mamat

    2014-01-01

    Deterministic Safety Analysis (DSA) is one of the mandatory requirements of the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with the relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements is applied to the design and operation of facilities or activities. Computer codes are normally used to assist in performing all analyses required under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a pre-requisite reference document for developing the input nodalization was proposed. This paper describes the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for the Integral Test Facility LOBI-MOD2 and the associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in a single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes the resolution of contradictions and, finally, assists in developing the thermal hydraulic input regardless of which code is selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information and the limitation of resources. Some possible improvements are suggested to overcome these challenges

  17. RDS - A systematic approach towards system thermal hydraulics input code development for a comprehensive deterministic safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Salim, Mohd Faiz, E-mail: mohdfaizs@tnb.com.my [Nuclear Energy Department, Tenaga Nasional Berhad, Level 32, Dua Sentral, 50470 Kuala Lumpur (Malaysia); Roslan, Ridha [Nuclear Installation Division, Atomic Energy Licensing Board, Batu 24, Jalan Dengkil, 43800 Dengkil, Selangor (Malaysia); Ibrahim, Mohd Rizal Mamat [Technical Support Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang, Selangor (Malaysia)

    2014-02-12

    Deterministic Safety Analysis (DSA) is one of the mandatory requirements of the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with the relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements is applied to the design and operation of facilities or activities. Computer codes are normally used to assist in performing all analyses required under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a pre-requisite reference document for developing the input nodalization was proposed. This paper describes the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for the Integral Test Facility LOBI-MOD2 and the associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in a single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes the resolution of contradictions and, finally, assists in developing the thermal hydraulic input regardless of which code is selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information and the limitation of resources. Some possible improvements are suggested to overcome these challenges.

  18. Development of a Multi-Event Trajectory Optimization Tool for Noise-Optimized Approach Route Design

    NARCIS (Netherlands)

    Braakenburg, M.L.; Hartjes, S.; Visser, H.G.; Hebly, S.J.

    2011-01-01

    This paper presents preliminary results from an ongoing research effort towards the development of a multi-event trajectory optimization methodology that allows to synthesize RNAV approach routes that minimize a cumulative measure of noise, taking into account the total noise effect aggregated for

  19. Robust structural optimization using Gauss-type quadrature formula

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Seo, Ki Seog; Chen, Shikui; Chen, Wei

    2009-01-01

    In robust design, the mean and variance of design performance are frequently used to measure the design performance and its robustness under uncertainties. In this paper, we present the Gauss-type quadrature formula as a rigorous method for mean and variance estimation involving arbitrary input distributions and further extend its use to robust design optimization. One-dimensional Gauss-type quadrature formulae are constructed from the input probability distributions and utilized in the construction of multidimensional quadrature formulae such as the Tensor Product Quadrature (TPQ) formula and the Univariate Dimension Reduction (UDR) method. To improve the efficiency of using them for robust design optimization, a semi-analytic design sensitivity analysis with respect to the statistical moments is proposed. The proposed approach is applied to simple benchmark problems and to robust topology optimization of structures considering various types of uncertainty.
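
    The one-dimensional building block can be illustrated for a Gaussian input: Gauss-Hermite quadrature estimates the mean and variance of a performance function from a handful of nodes. The performance function and the input distribution below are assumptions chosen only for illustration.

    ```python
    import numpy as np
    from numpy.polynomial.hermite import hermgauss

    def moments_gauss_hermite(g, mu, sigma, n_nodes=5):
        """Mean and variance of g(X) for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature."""
        t, w = hermgauss(n_nodes)                  # nodes/weights for weight exp(-t^2)
        x = mu + np.sqrt(2.0) * sigma * t          # change of variables to N(mu, sigma^2)
        gx = g(x)
        mean = np.sum(w * gx) / np.sqrt(np.pi)
        second = np.sum(w * gx**2) / np.sqrt(np.pi)
        return mean, second - mean**2

    # Example performance function (assumed): a simple nonlinear response of a design variable.
    g = lambda x: x**3 + 2.0 * x
    mean, var = moments_gauss_hermite(g, mu=1.0, sigma=0.2)
    print(f"mean = {mean:.4f}, variance = {var:.4f}")
    ```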

  20. Robust structural optimization using Gauss-type quadrature formula

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon; Seo, Ki Seog [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Chen, Shikui; Chen, Wei [Northwestern University, Illinois (United States)

    2009-07-01

    In robust design, the mean and variance of design performance are frequently used to measure the design performance and its robustness under uncertainties. In this paper, we present the Gauss-type quadrature formula as a rigorous method for mean and variance estimation involving arbitrary input distributions and further extend its use to robust design optimization. One-dimensional Gauss-type quadrature formulae are constructed from the input probability distributions and utilized in the construction of multidimensional quadrature formulae such as the Tensor Product Quadrature (TPQ) formula and the Univariate Dimension Reduction (UDR) method. To improve the efficiency of using them for robust design optimization, a semi-analytic design sensitivity analysis with respect to the statistical moments is proposed. The proposed approach is applied to simple benchmark problems and to robust topology optimization of structures considering various types of uncertainty.

  1. Robust Structural Optimization Using Gauss-type Quadrature Formula

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon; Seo, Ki Seog; Chen, Shikui; Chen, Wei [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2009-08-15

    In robust design, the mean and variance of design performance are frequently used to measure the design performance and its robustness under uncertainties. In this paper, we present the Gauss-type quadrature formula as a rigorous method for mean and variance estimation involving arbitrary input distributions and further extend its use to robust design optimization. One-dimensional Gauss-type quadrature formulae are constructed from the input probability distributions and utilized in the construction of multidimensional quadrature formulae such as the tensor product quadrature (TPQ) formula and the univariate dimension reduction (UDR) method. To improve the efficiency of using them for robust design optimization, a semi-analytic design sensitivity analysis with respect to the statistical moments is proposed. The proposed approach is applied to simple benchmark problems and to robust topology optimization of structures considering various types of uncertainty.

  2. Robust Structural Optimization Using Gauss-type Quadrature Formula

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Seo, Ki Seog; Chen, Shikui; Chen, Wei

    2009-01-01

    In robust design, the mean and variance of design performance are frequently used to measure the design performance and its robustness under uncertainties. In this paper, we present the Gauss-type quadrature formula as a rigorous method for mean and variance estimation involving arbitrary input distributions and further extend its use to robust design optimization. One-dimensional Gauss-type quadrature formulae are constructed from the input probability distributions and utilized in the construction of multidimensional quadrature formulae such as the tensor product quadrature (TPQ) formula and the univariate dimension reduction (UDR) method. To improve the efficiency of using them for robust design optimization, a semi-analytic design sensitivity analysis with respect to the statistical moments is proposed. The proposed approach is applied to simple benchmark problems and to robust topology optimization of structures considering various types of uncertainty

  3. A guidance on MELCOR input preparation : An input deck for Ul-Chin 3 and 4 Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Song Won

    1997-02-01

    The objective of this study is to enhance the capability for assessing severe accident sequences and containment behavior using the MELCOR computer code, and to provide a guideline for its efficient use. This report shows the method of input deck preparation as well as the assessment strategy for the MELCOR code. MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. The code is being developed at Sandia National Laboratories for the U.S. NRC as a second-generation plant risk assessment tool and the successor to the source term code package. The accident sequence of the reference input deck prepared in this study for the Ulchin unit 3 and 4 nuclear power plants is the total loss of feedwater (TLOFW) without any success of safety systems, which is similar to station blackout (TLMB). It is very useful to simulate a well-known sequence with a best-estimate code or experiment, because the simulation results before core melt can be compared with the FSAR, whereas no data are available after core melt. The precalculation for the TLOFW using the reference input deck is performed successfully as expected. The other sequences will be carried out with minor changes to the reference input. This input deck will be improved continually by adding the safety systems not yet included and through sensitivity and uncertainty analyses. (author). 19 refs., 10 tabs., 55 figs.

  4. A guidance on MELCOR input preparation : An input deck for Ul-Chin 3 and 4 Nuclear Power Plant

    International Nuclear Information System (INIS)

    Cho, Song Won.

    1997-02-01

    The objective of this study is to enhance the capability for assessing severe accident sequences and containment behavior using the MELCOR computer code, and to provide a guideline for its efficient use. This report shows the method of input deck preparation as well as the assessment strategy for the MELCOR code. MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. The code is being developed at Sandia National Laboratories for the U.S. NRC as a second-generation plant risk assessment tool and the successor to the source term code package. The accident sequence of the reference input deck prepared in this study for the Ulchin unit 3 and 4 nuclear power plants is the total loss of feedwater (TLOFW) without any success of safety systems, which is similar to station blackout (TLMB). It is very useful to simulate a well-known sequence with a best-estimate code or experiment, because the simulation results before core melt can be compared with the FSAR, whereas no data are available after core melt. The precalculation for the TLOFW using the reference input deck is performed successfully as expected. The other sequences will be carried out with minor changes to the reference input. This input deck will be improved continually by adding the safety systems not yet included and through sensitivity and uncertainty analyses. (author). 19 refs., 10 tabs., 55 figs

  5. Multi-Input Convolutional Neural Network for Flower Grading

    Directory of Open Access Journals (Sweden)

    Yu Sun

    2017-01-01

    Full Text Available Flower grading is a significant task because it greatly facilitates the management of flowers in greenhouses and markets. With the development of computer vision, flower grading has become an interdisciplinary focus in both botany and computer vision. A new dataset named BjfuGloxinia contains three quality grades; each grade consists of 107 samples and 321 images. A multi-input convolutional neural network is designed for large-scale flower grading. The multi-input CNN achieves a satisfactory accuracy of 89.6% on BjfuGloxinia after data augmentation. Compared with a single-input CNN, the accuracy of the multi-input CNN is increased by 5% on average, demonstrating that a multi-input convolutional neural network is a promising model for flower grading. Although data augmentation contributes to the model, the accuracy is still limited by a lack of sample diversity. The majority of misclassifications arise from the medium grade. Image-processing-based bud detection is useful for reducing misclassification, increasing the accuracy of flower grading to approximately 93.9%.
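
    A multi-input CNN of the general kind described above can be sketched with the Keras functional API. The three input branches, layer sizes and image shapes below are assumptions for illustration, not the architecture reported for BjfuGloxinia.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def view_branch(name):
        """One convolutional branch per flower view (e.g. top/side images)."""
        inp = layers.Input(shape=(128, 128, 3), name=name)
        x = layers.Conv2D(16, 3, activation="relu")(inp)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(32, 3, activation="relu")(x)
        x = layers.MaxPooling2D()(x)
        x = layers.Flatten()(x)
        return inp, x

    in_a, feat_a = view_branch("view_a")
    in_b, feat_b = view_branch("view_b")
    in_c, feat_c = view_branch("view_c")

    merged = layers.concatenate([feat_a, feat_b, feat_c])          # fuse features from all views
    x = layers.Dense(64, activation="relu")(merged)
    out = layers.Dense(3, activation="softmax", name="grade")(x)   # three quality grades

    model = Model(inputs=[in_a, in_b, in_c], outputs=out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.summary()
    ```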

  6. Generalized DSS shell for developing simulation and optimization hydro-economic models of complex water resources systems

    Science.gov (United States)

    Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin

    2013-04-01

    Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze the engineering, hydrological and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, allows the user to store, display and export all information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets the operational targets (ranked according to priorities) each month while following the system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach). Such

  7. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    Science.gov (United States)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the

  8. Development of a simulation program to study error propagation in the reprocessing input accountancy measurements

    International Nuclear Information System (INIS)

    Sanfilippo, L.

    1987-01-01

    A physical model and a computer program have been developed to simulate all the measurement operations involved in the Isotopic Dilution Analysis technique currently applied in the Volume-Concentration method for Reprocessing Input Accountancy, together with their errors or uncertainties. The simulator can easily solve a number of problems related to the measurement activities of the plant operator and the inspector. The program, written in Fortran 77, is based on a particular Monte Carlo technique named ''Random Sampling''; a full description of the code is reported
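
    The random-sampling idea can be illustrated with a generic Monte Carlo error propagation for an isotope-dilution measurement. The measurement equation, nominal values and uncertainties below are hypothetical and only illustrate the approach, not the simulator described in the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000   # number of Monte Carlo trials

    # Hypothetical measured quantities: nominal value and standard uncertainty.
    spike_mass = rng.normal(10.0, 10.0 * 0.001, N)        # g of spike added
    ratio_blend = rng.normal(1.25, 1.25 * 0.003, N)       # measured isotope ratio in the blend
    ratio_spike = rng.normal(50.0, 50.0 * 0.002, N)       # isotope ratio of the spike
    ratio_sample = rng.normal(0.0073, 0.0073 * 0.002, N)  # isotope ratio of the sample

    # Simplified isotope-dilution expression (illustrative form only).
    sample_amount = spike_mass * (ratio_spike - ratio_blend) / (ratio_blend - ratio_sample)

    print(f"mean = {sample_amount.mean():.4f} g")
    print(f"relative std. uncertainty = {sample_amount.std() / sample_amount.mean():.2%}")
    ```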

  9. Optimal Hankel Norm Model Reduction by Truncation of Trajectories

    NARCIS (Netherlands)

    Roorda, B.; Weiland, S.

    2000-01-01

    We show how optimal Hankel-norm approximations of dynamical systems allow for a straightforward interpretation in terms of system trajectories. It is shown that for discrete time single-input systems optimal reductions are obtained by cutting 'balanced trajectories', i.e., by disconnecting the past

  10. Jointness through vessel capacity input in a multispecies fishery

    DEFF Research Database (Denmark)

    Hansen, Lars Gårn; Jensen, Carsten Lynge

    2014-01-01

    Multispecies fisheries are typically modeled as either independent single species fisheries or using standard multispecies functional forms characterized by jointness in inputs. We argue that production of each species is essentially independent but that jointness may be caused by competition for the fixed but allocable input of vessel capacity. We develop a fixed but allocatable input model of purse seine fisheries capturing this particular type of jointness. We estimate the model for the Norwegian purse seine fishery and find that it is characterized by nonjointness, while estimations for this fishery using the standard models imply...

  11. Soft sensor development and optimization of the commercial petrochemical plant integrating support vector regression and genetic algorithm

    Directory of Open Access Journals (Sweden)

    S.K. Lahiri

    2009-09-01

    Full Text Available Soft sensors have been widely used in industrial process control to improve product quality and assure safety in production. The core of a soft sensor is the construction of a soft sensing model. This paper introduces support vector regression (SVR), a powerful machine learning method based on statistical learning theory (SLT), into soft sensor modeling and proposes a new soft sensing modeling method based on SVR. It presents an artificial-intelligence-based hybrid soft sensor modeling and optimization strategy, namely support vector regression - genetic algorithm (SVR-GA), for modeling and optimization of the mono ethylene glycol (MEG) quality variable in a commercial glycol plant. In the SVR-GA approach, a support vector regression model is constructed to correlate the process data comprising values of operating and performance variables. Next, the model inputs describing the process operating variables are optimized using a genetic algorithm with a view to maximizing the process performance. SVR-GA is a new strategy for soft sensor modeling and optimization. The major advantage of the strategy is that modeling and optimization can be conducted exclusively from historic process data, so detailed knowledge of the process phenomenology (reaction mechanism, kinetics, etc.) is not required. Using the SVR-GA strategy, a number of sets of optimized operating conditions were found. The optimized solutions, when verified in an actual plant, resulted in a significant improvement in quality.
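
    The general SVR-GA pattern, fitting a regression model on historical process data and then searching its inputs with a genetic algorithm, can be sketched as follows. The synthetic data, variable bounds and GA settings are assumptions for illustration, not the glycol-plant model.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Synthetic historical process data: 2 operating variables -> 1 quality variable.
    X = rng.uniform([300.0, 1.0], [400.0, 5.0], size=(200, 2))   # e.g. temperature, reflux ratio
    y = -(X[:, 0] - 360.0) ** 2 / 500.0 - (X[:, 1] - 3.0) ** 2 + rng.normal(0, 0.05, 200)

    model = SVR(C=10.0, gamma="scale").fit(X, y)                 # soft-sensor model

    # Minimal GA: maximize the predicted quality over the operating window.
    lo, hi = np.array([300.0, 1.0]), np.array([400.0, 5.0])
    pop = rng.uniform(lo, hi, size=(40, 2))
    for _ in range(60):
        fitness = model.predict(pop)
        parents = pop[np.argsort(fitness)[-20:]]                 # select the best half
        mates = parents[rng.integers(0, 20, 20)]
        alpha = rng.random((20, 1))
        children = alpha * parents[rng.integers(0, 20, 20)] + (1 - alpha) * mates   # crossover
        children += rng.normal(0, 0.02 * (hi - lo), children.shape)                 # mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)

    best = pop[np.argmax(model.predict(pop))]
    print("optimized operating conditions:", best)
    ```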

  12. The Impact of Bimodal Bilingual Parental Input on the Communication and Language Development of a Young Deaf Child

    Science.gov (United States)

    Levesque, Elizabeth; Brown, P. Margaret; Wigglesworth, Gillian

    2014-01-01

    This study explores the impact of bimodal bilingual parental input on the communication and language development of a young deaf child. The participants in this case study were a severe-to-profoundly deaf boy and his hearing parents, who were enrolled in a bilingual (English and Australian Sign Language) home-based early intervention programme. The…

  13. Heat input control in coke ovens battery using artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, R.; Kannan, C.; Sistla, S.; Kumar, D. [Tata Steel, Jamshedpur (India)

    2005-07-01

    Controlled heating is essential for producing coke with the desired properties. It involves controlling the heat input into the battery dynamically, depending on various process parameters such as the current battery temperature, the battery temperature set point, moisture in coal, ambient temperature, coal fineness, cake breakage, etc. An artificial intelligence (AI) based heat input control has been developed in which some of the above-mentioned process parameters are considered and used to calculate the pause time applied between reversals during the heating process. The AI-based model currently considers three input variables: the temperature deviation history, the current deviation of the battery temperature from the target temperature, and the actual heat input into the battery. Work is in progress to control the standard deviation of the coke end temperature using this model. The new system, which has been developed in-house, has replaced the Hoogovens-supplied model. 7 figs.

  14. Optimizing graph algorithms on pregel-like systems

    KAUST Repository

    Salihoglu, Semih

    2014-03-01

    We study the problem of implementing graph algorithms efficiently on Pregel-like systems, which can be surprisingly challenging. Standard graph algorithms in this setting can incur unnecessary inefficiencies such as slow convergence or high communication or computation cost, typically due to structural properties of the input graphs such as large diameters or skew in component sizes. We describe several optimization techniques to address these inefficiencies. Our most general technique is based on the idea of performing some serial computation on a tiny fraction of the input graph, complementing Pregel's vertex-centric parallelism. We base our study on thorough implementations of several fundamental graph algorithms, some of which have, to the best of our knowledge, not been implemented on Pregel-like systems before. The algorithms and optimizations we describe are fully implemented in our open-source Pregel implementation. We present detailed experiments showing that our optimization techniques improve runtime significantly on a variety of very large graph datasets.
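
    The vertex-centric programming model that Pregel-like systems expose can be sketched in plain Python. The toy superstep loop below computes connected components by minimum-label propagation; it illustrates the programming model only, not the optimization techniques of the paper.

    ```python
    # Toy vertex-centric superstep loop: connected components via minimum-label propagation.
    graph = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}   # adjacency list (undirected)
    labels = {v: v for v in graph}                        # each vertex starts with its own id
    active = set(graph)                                   # vertices with pending work

    while active:                                         # one iteration = one superstep
        messages = {}
        for v in active:                                  # each active vertex sends its label
            for nbr in graph[v]:
                messages.setdefault(nbr, []).append(labels[v])
        active = set()
        for v, incoming in messages.items():              # vertices update on received messages
            smallest = min(incoming)
            if smallest < labels[v]:
                labels[v] = smallest
                active.add(v)                             # changed state, stays active

    print(labels)   # {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
    ```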

  15. Observer-Based Perturbation Extremum Seeking Control with Input Constraints for Direct-Contact Membrane Distillation Process

    KAUST Repository

    Eleiwi, Fadi

    2017-05-08

    An Observer-based Perturbation Extremum Seeking Control (PESC) is proposed for a Direct-Contact Membrane Distillation (DCMD) process. The process is described with a dynamic model that is based on a 2D Advection-Diffusion Equation (ADE) model which has pump flow rates as process inputs. The objective of the controller is to optimize the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and the permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimation for the temperature distribution all over the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to be within feasible and physical ranges. Performance of the proposed structure is analyzed, and simulations based on real DCMD process parameters for each control input are provided.
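
    Perturbation extremum seeking in its simplest form (a static map, a single unconstrained input, no observer) is sketched below. The objective map, gains and perturbation frequency are assumptions, and the sketch omits the observer and anti-windup elements described in the record.

    ```python
    import numpy as np

    # Static performance map (assumed): trade-off between permeate flux benefit and pumping cost.
    J = lambda u: -(u - 2.5) ** 2 + 6.0      # unknown to the controller; optimum at u = 2.5

    dt, a, omega, k = 0.01, 0.1, 5.0, 0.8    # step, perturbation amplitude/frequency, adaptation gain
    u_hat, lp = 1.0, 0.0                     # parameter estimate, low-pass filter state

    for i in range(20000):
        t = i * dt
        u = u_hat + a * np.sin(omega * t)    # perturbed input applied to the plant
        y = J(u)
        lp += dt * 1.0 * (y - lp)            # first-order low-pass; (y - lp) is the AC part
        grad_est = (y - lp) * np.sin(omega * t)   # demodulation yields a gradient estimate
        u_hat += dt * k * grad_est           # climb the estimated gradient

    print(f"converged input estimate: {u_hat:.3f} (true optimum at 2.5)")
    ```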

  16. Some Insights of Spectral Optimization in Ocean Color Inversion

    Science.gov (United States)

    Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert

    2011-01-01

    Over the past decades, various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of these approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulations and field measurements, we show the shape of the error surface in order to assess the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, the impacts of such differences on the error surface as well as on the retrievals are also presented.

  17. Specimen Test of Large-Heat-Input Fusion Welding Method for Use of SM570TMCP

    Directory of Open Access Journals (Sweden)

    Dongkyu Lee

    2015-01-01

    Full Text Available In this research, large-heat-input welding conditions optimized for the use of rear plate and the high-performance steel SM570TMCP, a new kind of steel suited to the requirements of prospective customers, are proposed. The goal of this research is to contribute to securing welding fabrication optimized for the use of high-strength steel and rear steel plates in the field of the construction industry in the future.

  18. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  19. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception

  20. Enhancement of Stochastic Resonance Using Optimization Theory

    National Research Council Canada - National Science Library

    Wu, Xingxing; Jiang, Zhong-Ping; Repperger, Daniel W; Guo, Yi

    2006-01-01

    .... The further improvement of the maximal normalized power norm of the bistable double-well dynamic system with white Gaussian noise input can be converted to an optimization problem with constraints...

  1. Iterative learning control an optimization paradigm

    CERN Document Server

    Owens, David H

    2016-01-01

    This book develops a coherent theoretical approach to algorithm design for iterative learning control based on the use of optimization concepts. Concentrating initially on linear, discrete-time systems, the author gives the reader access to theories based on either signal or parameter optimization. Although the two approaches are shown to be related in a formal mathematical sense, the text presents them separately because their relevant algorithm design issues are distinct and give rise to different performance capabilities. Together with algorithm design, the text demonstrates that there are new algorithms capable of incorporating input and output constraints, of reconfiguring systematically to meet the requirements of different reference signals, and of supporting local convergence of nonlinear iterative control. Simulation and application studies are used to illustrate algorithm properties and performance in systems like gantry robots and other elect...
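
    The signal-optimization flavour of iterative learning control can be illustrated with a gradient-type update on a lifted discrete-time linear plant. The plant, learning gain and reference below are assumptions for illustration, not an algorithm taken from the book.

    ```python
    import numpy as np

    # Lifted description y = G u of one trial of N samples for an assumed first-order plant.
    N = 50
    a, b = 0.9, 0.5
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = b * a ** (i - j)        # impulse-response (Markov) parameters

    ref = np.ones(N)                          # reference trajectory to track
    u = np.zeros(N)                           # input for trial 0
    beta = 1.0 / np.linalg.norm(G, 2) ** 2    # learning gain chosen for monotone convergence

    for trial in range(500):
        y = G @ u                             # run one trial
        e = ref - y                           # trial error
        u = u + beta * (G.T @ e)              # gradient-type update: u_{k+1} = u_k + beta G^T e_k

    print(f"tracking error norm after 500 trials: {np.linalg.norm(ref - G @ u):.3e}")
    ```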

  2. Development and optimization of a self-microemulsifying drug delivery system for atorvastatin calcium by using D-optimal mixture design.

    Science.gov (United States)

    Yeom, Dong Woo; Song, Ye Seul; Kim, Sung Rae; Lee, Sang Gon; Kang, Min Hyung; Lee, Sangkil; Choi, Young Wook

    2015-01-01

    In this study, we developed and optimized a self-microemulsifying drug delivery system (SMEDDS) formulation for improving the dissolution and oral absorption of atorvastatin calcium (ATV), a poorly water-soluble drug. Solubility and emulsification tests were performed to select a suitable combination of oil, surfactant, and cosurfactant. A D-optimal mixture design was used to optimize the concentration of components used in the SMEDDS formulation for achieving excellent physicochemical characteristics, such as small droplet size and high dissolution. The optimized ATV-loaded SMEDDS formulation containing 7.16% Capmul MCM (oil), 48.25% Tween 20 (surfactant), and 44.59% Tetraglycol (cosurfactant) significantly enhanced the dissolution rate of ATV in different types of medium, including simulated intestinal fluid, simulated gastric fluid, and distilled water, compared with ATV suspension. Good agreement was observed between predicted and experimental values for mean droplet size and percentage of the drug released in 15 minutes. Further, pharmacokinetic studies in rats showed that the optimized SMEDDS formulation considerably enhanced the oral absorption of ATV, with 3.4-fold and 4.3-fold increases in the area under the concentration-time curve and time taken to reach peak plasma concentration, respectively, when compared with the ATV suspension. Thus, we successfully developed an optimized ATV-loaded SMEDDS formulation by using the D-optimal mixture design, that could potentially be used for improving the oral absorption of poorly water-soluble drugs.

  3. A new approach to nuclear reactor design optimization using genetic algorithms and regression analysis

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.

    2015-01-01

    Highlights: • This paper presents a new method useful for the optimization of complex dynamic systems. • The method uses the strengths of genetic algorithms (GA) and regression splines. • The method is applied to the design of a gas cooled fast breeder reactor. • Tools like Java, R, and codes like MCNP, Matlab are used in this research. - Abstract: A module-based optimization method using genetic algorithms (GA) and multivariate regression analysis has been developed to optimize a set of parameters in the design of a nuclear reactor. GA simulates natural evolution to perform optimization and is widely used by the scientific community. The GA evolves a population of random solutions toward the optimal solution of a specific problem. In this work, we have developed a genetic algorithm to determine the values of a set of nuclear reactor parameters for the design of a gas cooled fast breeder reactor core, including a basic thermal-hydraulics analysis and energy transfer. Multivariate regression is implemented using regression splines (RS). Reactor designs are usually complex and a simulation needs a significantly large amount of time to execute, so the direct implementation of GA or any other global optimization technique is not feasible; we therefore present a new method of using RS in conjunction with GA. Because of the RS, we do not need to run the neutronics simulation for all the inputs generated by the GA module; rather, we run the simulations for a predefined set of inputs, build a multivariate regression fit between the input and output parameters, and then use this fit to predict the output parameters for the inputs generated by the GA. The reactor parameters are the radius of a fuel pin cell, the isotopic enrichment of the fissile material in the fuel, the mass flow rate of the coolant, and the temperature of the coolant at the core inlet. And, the optimization objectives for the reactor core are, high breeding of U-233 and Pu-239 in

  4. Research on Power Factor Correction Boost Inductor Design Optimization – Efficiency vs. Power Density

    DEFF Research Database (Denmark)

    Li, Qingnan; Andersen, Michael A. E.; Thomsen, Ole Cornelius

    2011-01-01

    Nowadays, efficiency and power density are the most important issues in the development of Power Factor Correction (PFC) converters. However, it is a challenge to reach both high efficiency and high power density in a system at the same time. In this paper, taking a Bridgeless PFC (BPFC) as an example, a useful compromise between the efficiency and power density of the Boost inductors at 3.2kW is achieved using an optimized design procedure. The experimental verifications based on the optimized inductor are carried out from 300W to 3.2kW at 220Vac input.

  5. Total dose induced increase in input offset voltage in JFET input operational amplifiers

    International Nuclear Information System (INIS)

    Pease, R.L.; Krieg, J.; Gehlhausen, M.; Black, J.

    1999-01-01

    Four different types of commercial JFET input operational amplifiers were irradiated with ionizing radiation under a variety of test conditions. All experienced significant increases in input offset voltage (Vos). Microprobe measurement of the electrical characteristics of the de-coupled input JFETs demonstrates that the increase in Vos is a result of the mismatch of the degraded JFETs. (authors)

  6. Nuclear reaction inputs based on effective interactions

    Energy Technology Data Exchange (ETDEWEB)

    Hilaire, S.; Peru, S.; Dubray, N.; Dupuis, M.; Bauge, E. [CEA, DAM, DIF, Arpajon (France); Goriely, S. [Universite Libre de Bruxelles, Institut d' Astronomie et d' Astrophysique, CP-226, Brussels (Belgium)

    2016-11-15

    Extensive nuclear structure studies have been performed for decades using effective interactions as the sole input. They have shown a remarkable ability to describe rather accurately many types of nuclear properties. In the early 2000s, a major effort was engaged to produce nuclear reaction input data from the Gogny interaction, in order to challenge its quality also with respect to nuclear reaction observables. The status of this project, well advanced today thanks to the use of modern computers as well as modern nuclear reaction codes, is reviewed and future developments are discussed. (orig.)

  7. Sound effects: Multimodal input helps infants find displaced objects.

    Science.gov (United States)

    Shinskey, Jeanne L

    2017-09-01

    sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.

  8. Energy Consumptions of Text Input Methods on Smartphones

    OpenAIRE

    Obison, Henry; Ajuorah, Chiagozie

    2013-01-01

    Mobile computing devices, in particular smartphones, are powered by lithium-ion batteries, which are limited in capacity. With the increasing popularity of mobile systems, various text input methods have been developed to improve user experience and performance. Briefly, a text input method is a user interface that can be used to compose electronic mail, configure a mobile Virtual Private Network, and carry out bank transactions and online purchases. Efficient energy management in these systems...

  9. Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks

    International Nuclear Information System (INIS)

    Sun, Z.; Sen, A.K.; Longman, R.W.

    2006-01-01

    An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-squares method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct the control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used
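
    The estimate-then-control structure (a Kalman filter feeding an optimal state feedback) can be sketched for a generic discrete-time linear system. The model matrices, noise levels and weights below are arbitrary illustrative choices, not an RWM model.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    rng = np.random.default_rng(1)

    # Assumed model: x_{k+1} = A x_k + B u_k + w_k,  y_k = C x_k + v_k.
    A = np.array([[1.02, 0.10], [0.00, 0.98]])     # one mildly unstable mode
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])
    Q, R = np.eye(2), np.array([[0.1]])            # LQR weights
    W, V = 1e-4 * np.eye(2), np.array([[1e-3]])    # process / measurement noise covariances

    # Optimal state feedback (LQR) gain and steady-state Kalman measurement-update gain.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)      # u = -K x_hat
    S = solve_discrete_are(A.T, C.T, W, V)                 # a priori error covariance
    L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

    x = np.array([[1.0], [0.0]])      # true state
    x_hat = np.zeros((2, 1))          # state estimate
    for _ in range(200):
        y = C @ x + rng.normal(0.0, np.sqrt(V[0, 0]), (1, 1))   # noisy measurement
        x_hat = x_hat + L @ (y - C @ x_hat)                     # Kalman measurement update
        u = -K @ x_hat                                          # optimal state feedback
        w = rng.multivariate_normal(np.zeros(2), W).reshape(2, 1)
        x = A @ x + B @ u + w                                   # plant step
        x_hat = A @ x_hat + B @ u                               # filter time update

    print("final state norm:", float(np.linalg.norm(x)))
    ```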

  10. Optimal gasoline tax in developing, oil-producing countries: The case of Mexico

    International Nuclear Information System (INIS)

    Antón-Sarabia, Arturo; Hernández-Trillo, Fausto

    2014-01-01

    This paper uses the methodology of Parry and Small (2005) to estimate the optimal gasoline tax for a less-developed oil-producing country. The relevance of the estimation relies on the differences between less-developed countries (LDCs) and industrial countries. We argue that lawless roads, general subsidies on gasoline, poor mass transportation systems, older vehicle fleets and unregulated city growth make the tax rates in LDCs differ substantially from the rates in the developed world. We find that the optimal gasoline tax is $1.90 per gallon at 2011 prices and show that the estimate differences are in line with the factors hypothesized. In contrast to the existing literature on industrial countries, we show that the relative gasoline tax incidence may be progressive in Mexico and, more generally, in LDCs. - Highlights: • We estimate the optimal gasoline tax for a typical less-developed, oil-producing country like Mexico. • The relevance of the estimation relies on the differences between less-developed and industrial countries. • The optimal gasoline tax is $1.90 per gallon at 2011 prices. • Distance-related pollution damages, accident costs and gas subsidies account for the major differences. • Gasoline tax incidence may be progressive in less developed countries

  11. Anthropogenic Phosphorus Inputs to a River Basin and Their Impacts on Phosphorus Fluxes Along Its Upstream-Downstream Continuum

    Science.gov (United States)

    Zhang, Wangshou; Swaney, Dennis P.; Hong, Bongghi; Howarth, Robert W.

    2017-12-01

    The increasing trend in riverine phosphorus (P) loads resulting from anthropogenic inputs has gained wide attention because of the well-known role of P in eutrophication. So far, however, there is still limited scientific understanding of anthropogenic P inputs and their impacts on riverine flux in river reaches along the upstream-downstream continuum. Here we investigated P budgets in a series of nested watersheds draining into Hongze Lake of China and developed an empirical function to describe the relationship between anthropogenic inputs and riverine P fluxes. Our results indicated that there are obvious gradients in P budgets in response to changes in human activities. Fertilizer application and food and feed P imports were always the dominant sources of P inputs in all sections, followed by non-food P. Further interpretation using the model revealed the processes of P loading to the lake. About 2%-9% of anthropogenic P inputs are transported from the various sections into the corresponding tributaries of the river systems, depending upon local precipitation rates. Of this amount, around 41%-95% is delivered to the main stem of the Huai River after in-stream attenuation in its tributaries. Ultimately, 55%-86% of the P loads delivered to different locations along the main stem are transported into the downstream receiving lake, due to additional losses in the main stem. An integrated P management strategy that considers the gradients of P loss along the upstream-to-downstream continuum is required to assess and optimize P management to protect the region's freshwater resource.

  12. Optimization of MR fluid Yield stress using Taguchi Method and Response Surface Methodology Techniques

    Science.gov (United States)

    Mangal, S. K.; Sharma, Vivek

    2018-02-01

    Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, the MR fluid constituents are optimized with the on-state yield stress as the response parameter. For this, 18 samples of MR fluid were prepared using an L-18 Orthogonal Array. These samples were experimentally tested on an electromagnet setup developed and fabricated in-house. It has been found that the yield stress of an MR fluid mainly depends on the volume fraction of the iron particles and the type of carrier fluid used. The optimal combination of the input parameters for the fluid is found to be mineral oil with a volume percentage of 67%, iron powder of 300 mesh size with a volume percentage of 32%, oleic acid with a volume percentage of 0.5% and tetra-methyl-ammonium-hydroxide with a volume percentage of 0.7%. This optimal combination of input parameters gives a numerically predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response matched the numerically obtained value quite well (less than 1% error).

  13. Jointness through fishing days input in a multi-species fishery

    DEFF Research Database (Denmark)

    Hansen, Lars Gårn; Jensen, Carsten Lynge

    .g. translog, normalized quadratic). In this paper we argue that jointness in the latter, essentially separable fishery is caused by allocation of fishing days input among harvested species. We developed a structural model of a multi-species fishery where the allocation of fishing days input causes production...

  14. VALORAGUA: A model for the optimal operating strategy of mixed hydrothermal generating systems

    International Nuclear Information System (INIS)

    1992-01-01

    To provide assistance to its developing Member States in carrying out integrated power system expansion analysis, the International Atomic Energy Agency (IAEA) has developed the computer model called WASP (Wien Automatic System Planning Package). The WASP model has proven to be very useful for this purpose and is accepted worldwide as a sound tool for electricity planning. Notwithstanding its many advantages, certain shortcomings of the methodology have been noticed, in particular with regard to representation of hydroelectric power plants. In order to overcome these shortcomings, the IAEA decided to acquire the computer model called VALORAGUA, developed by the Electricidade de Portugal (EDP), for optimizing the operating strategy of a mixed hydro-thermal power system. This program, when used together with WASP, would allow economic optimization of hydro-thermal power systems with a large hydro component. The objective of the present document is to assist in the use of the VALORAGUA model and its auxiliary codes, as well as to clarify the interconnection between VALORAGUA and the WASP-III model. This report is organized into five main chapters. The first chapter serves as an introduction to all remaining chapters. Chapter 2 defines the input data needed for every component of the electric power system. Chapter 3 presents the output variables of the model within the standard output tables that can be produced by VALORAGUA. Chapter 4 describes in detail all the input data needed by each program. It also includes the list of computer input data corresponding to the example described in Chapter 5, which is used to illustrate the execution of the VALORAGUA modules. Description of how to prepare the hydro data for the WASP-III model from the results obtained with the VALORAGUA model is given in Appendix A. Some auxiliary programs of the VALORAGUA model system, developed by EDP to help the user with the input data preparation, are described in Appendix B. Refs, figs and

  15. Optimization of Classical Hydraulic Engine Mounts Based on RMS Method

    Directory of Open Access Journals (Sweden)

    J. Christopherson

    2005-01-01

    Full Text Available Based on RMS averaging of the frequency response functions of the absolute acceleration and relative displacement transmissibility, optimal parameters describing the hydraulic engine mount are determined to explain the internal mount geometry. More specifically, it is shown that a line of minima exists to define a relationship between the absolute acceleration and relative displacement transmissibility of a sprung mass using a hydraulic mount as a means of suspension. This line of minima is used to determine several optimal systems developed on the basis of different clearance requirements, hence different relative displacement requirements, and compare them by means of their respective acceleration and displacement transmissibility functions. In addition, the transient response of the mount to a step input is also investigated to show the effects of the optimization upon the time domain response of the hydraulic mount.

  16. Flow Rates in Liquid Chromatography, Gas Chromatography and Supercritical Fluid Chromatography: A Tool for Optimization

    Directory of Open Access Journals (Sweden)

    Joris Meurs

    2016-08-01

    Full Text Available This paper aimed to develop a standalone application for optimizing flow rates in liquid chromatography (LC), gas chromatography (GC) and supercritical fluid chromatography (SFC). To do so, Van Deemter’s equation, Knox’ equation and Golay’s equation were implemented in a MATLAB script and subsequently a graphical user interface (GUI) was created. The application will show the optimal flow rate or linear velocity and the corresponding plate height for the set input parameters. Furthermore, a plot will be shown in which the plate height is plotted against the linear flow velocity. Hence, this application will give optimized flow rates for any set conditions with minimal effort.
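
    As an illustration of the kind of calculation such an application performs, the Van Deemter plate-height curve H(u) = A + B/u + C·u has its minimum at u_opt = sqrt(B/C). The sketch below is not the authors' MATLAB/GUI code; it is a minimal Python illustration with hypothetical coefficients.

```python
import numpy as np

def van_deemter(u, A, B, C):
    """Plate height H(u) = A + B/u + C*u (Van Deemter equation)."""
    return A + B / u + C * u

# Illustrative coefficients for a packed LC column (not from the paper);
# units chosen so that H is in mm and u in mm/s.
A, B, C = 1.5e-3, 5.0e-4, 2.0e-2

u_opt = np.sqrt(B / C)              # from dH/du = -B/u**2 + C = 0
H_min = A + 2.0 * np.sqrt(B * C)

print(f"optimal linear velocity: {u_opt:.3f} mm/s")
print(f"minimum plate height   : {H_min:.4f} mm")
print(f"check: H(u_opt) = {van_deemter(u_opt, A, B, C):.4f} mm")
```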

  17. RDS; A systematic approach towards system thermal hydraulics input code development for a comprehensive deterministic safety analysis

    International Nuclear Information System (INIS)

    Mohd Faiz Salim; Ridha Roslan; Mohd Rizal Mamat

    2013-01-01

    Full-text: Deterministic Safety Analysis (DSA) is one of the mandatory requirements of the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with the relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements is applied to the design and operation of facilities or activities. Computer codes are normally used to assist in performing all analyses required under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a prerequisite reference document for developing the input nodalization was proposed. This paper describes the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for the LOBIMOD2 integral test facility and the associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in a single document. This is beneficial as it preserves information, promotes quality assurance, allows traceability, facilitates continuous improvement, helps resolve contradictions and, finally, assists in developing the thermal hydraulic input regardless of which code is selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information and the limitation of resources. Some possible improvements are suggested to overcome these challenges. (author)

  18. Energy efficiency improvement by gear shifting optimization

    Directory of Open Access Journals (Sweden)

    Blagojevic Ivan A.

    2013-01-01

    Full Text Available Many studies have shown that elements of driver behavior related to gear selection have a considerable influence on fuel consumption. Optimal gear shifting is a complex task, especially for inexperienced drivers. This paper presents an implemented approach to gear shifting optimization aimed at minimizing fuel consumption through more efficient engine working regimes. Optimized gear shifting enables the best possible relation between vehicle motion regimes and engine working regimes. A new theoretical-experimental approach has been developed using On-Board Diagnostic technology, which so far has not been used for this purpose. A matrix of driving modes according to which the tests were performed was obtained, and a dedicated data acquisition system and analysis process were developed. Functional relations between the experimental test modes and the corresponding engine working parameters were obtained, and all necessary operations were conducted to enable their use as inputs to the designed algorithm. The created Model has been tested in real operating conditions on a passenger car with a fuel-injected Otto engine and an On-Board Diagnostic connection, without any modifications to the vehicle. The tests have shown that the presented Model has a significantly positive effect on fuel consumption, which is an important ecological aspect. Further development and testing of the Model will allow implementation in a wide range of motor vehicles with various types of internal combustion engines.

  19. Plant Friendly Input Design for Parameter Estimation in an Inertial System with Respect to D-Efficiency Constraints

    Directory of Open Access Journals (Sweden)

    Wiktor Jakowluk

    2014-11-01

    Full Text Available System identification, in practice, is carried out by perturbing processes or plants under operation. That is why, in many industrial applications, a plant-friendly input signal would be preferred for system identification. The goal of the study is to design the optimal input signal which is then employed in the identification experiment, and to examine the relationships between the index of friendliness of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. In this case, the objective function was formulated through maximisation of the Fisher information matrix determinant (D-optimality), expressed in conventional Bolza form. Since under such experimental conditions we can only speak of D-suboptimality, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to attain the most adequate information content from the plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
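
    A common way to quantify the trade-off described above is the D-efficiency of an information matrix relative to the D-optimal one, (det M / det M_opt)^(1/p) for p parameters. The sketch below assumes this standard definition and uses hypothetical 2×2 Fisher information matrices; it is not the paper's implementation.

```python
import numpy as np

def d_efficiency(M, M_opt):
    """D-efficiency of an information matrix M relative to the D-optimal M_opt."""
    p = M.shape[0]
    return (np.linalg.det(M) / np.linalg.det(M_opt)) ** (1.0 / p)

# Hypothetical Fisher information matrices for a two-parameter inertial model.
M_opt = np.array([[4.0, 0.5],
                  [0.5, 2.0]])      # matrix of the unconstrained D-optimal input
M     = np.array([[3.2, 0.4],
                  [0.4, 1.7]])      # matrix of a "friendlier", less exciting input

print(f"D-efficiency of the friendly input: {d_efficiency(M, M_opt):.3f}")
```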

  20. Conceptualizing, Understanding, and Predicting Responsible Decisions and Quality Input

    Science.gov (United States)

    Wall, N.; PytlikZillig, L. M.

    2012-12-01

    In areas such as climate change, where uncertainty is high, it is arguably less difficult to tell when efforts have resulted in changes in knowledge, than when those efforts have resulted in responsible decisions. What is a responsible decision? More broadly, when it comes to citizen input, what is "high quality" input? And most importantly, how are responsible decisions and quality input enhanced? The aim of this paper is to contribute to the understanding of the different dimensions of "responsible" or "quality" public input and citizen decisions by comparing and contrasting the different predictors of those different dimensions. We first present different possibilities for defining, operationalizing and assessing responsible or high quality decisions. For example, responsible decisions or quality input might be defined as using specific content (e.g., using climate change information in decisions appropriately), as using specific processes (e.g., investing time and effort in learning about and discussing the issues prior to making decisions), or on the basis of some judgment of the decision or input itself (e.g., judgments of the rationale provided for the decisions, or number of issues considered when giving input). Second, we present results from our work engaging people with science policy topics, and the different ways that we have tried to define these two constructs. In the area of climate change specifically, we describe the development of a short survey that assesses exposure to climate information, knowledge of and attitudes toward climate change, and use of climate information in one's decisions. Specifically, the short survey was developed based on a review of common surveys of climate change related knowledge, attitudes, and behaviors, and extensive piloting and cognitive interviews. Next, we analyze more than 200 responses to that survey (data collection is currently ongoing and will be complete after the AGU deadline), and report the predictors of

  1. Teleportation of squeezing: Optimization using non-Gaussian resources

    International Nuclear Information System (INIS)

    Dell'Anno, Fabio; De Siena, Silvio; Illuminati, Fabrizio; Adesso, Gerardo

    2010-01-01

    We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, ibid. 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.

  2. Structural consequences of carbon taxes: An input-output analysis

    International Nuclear Information System (INIS)

    Che Yuhu.

    1992-01-01

    A model system is provided for examining the structural consequences of carbon taxes on economic, energy, and environmental issues. The key component is the Iterative Multi-Optimization (IMO) Process model which describes, using an Input-Output (I-O) framework, the feedback between price changes and substitution. The IMO process is designed to capture this feedback by allowing the input coefficients in an I-O table to change while retaining the I-O price model. The theoretical problems of convergence to a limit in the iterative process and uniqueness (which requires all IMO processes starting from different initial prices to converge to a unique point for a given level of carbon taxes) are addressed. The empirical analysis also examines the effects of carbon taxes on the US economy as described by a 78 sector I-O model. Findings are compared with those of other models that assess the effects of carbon taxes, and the similarities and differences are interpreted in terms of differences in the scope, sectoral detail, time frame, and policy assumptions among the models
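
    The price-side feedback that the IMO process iterates on can be illustrated with the standard Leontief price model, in which sectoral prices satisfy p = A^T p + v for technical coefficients A and primary-input costs v; adding a per-unit carbon tax to v then propagates through the inter-industry structure. The sketch below uses hypothetical 3-sector data rather than the paper's 78-sector table and omits the substitution step.

```python
import numpy as np

# Minimal sketch of a Leontief price model with a per-unit carbon tax added to
# primary-input costs; illustrative 3-sector data, not the paper's 78-sector table.
A = np.array([[0.20, 0.10, 0.05],          # technical (input) coefficients
              [0.15, 0.25, 0.10],
              [0.05, 0.10, 0.30]])
v = np.array([0.55, 0.45, 0.50])           # value added per unit of output
carbon_tax = np.array([0.05, 0.02, 0.01])  # tax per unit output, by sector

def leontief_prices(A, primary_cost):
    """Solve p = A^T p + primary_cost for sectoral prices."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A.T, primary_cost)

p_base  = leontief_prices(A, v)
p_taxed = leontief_prices(A, v + carbon_tax)
print("price change (%):", 100 * (p_taxed / p_base - 1))
```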

  3. Contributions for larval development optimization of Homarus gammarus

    Directory of Open Access Journals (Sweden)

    Pedro Tiago Fonseca Sá

    2014-06-01

    Rising seawater temperature resulted in a decrease of the intermoult period in all larval development stages and at all tested temperatures, ranging from 4.77 days (Z1) to 16.5 days (Z3) at 16°C, whereas at 23°C it ranged from 3.02 days (Z1) to 9.75 days (Z3). The results obtained are an extremely useful guide for future optimization of protocols for the larval development of H. gammarus.

  4. Space mapping optimization algorithms for engineering design

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    A simple, efficient optimization algorithm based on space mapping (SM) is presented. It utilizes input SM to reduce the misalignment between the coarse and fine models of the optimized object over a region of interest, and output space mapping (OSM) to ensure matching of response and first...... to a benchmark problem. In comparison with SMIS, the models presented are simple and have a small number of parameters that need to be extracted. The new algorithm is applied to the optimization of coupled-line band-pass filter....

  5. Micro-Level Management of Agricultural Inputs: Emerging Approaches

    Directory of Open Access Journals (Sweden)

    Jonathan Weekley

    2012-12-01

    Full Text Available Through the development of superior plant varieties that benefit from high agrochemical inputs and irrigation, the agricultural Green Revolution has doubled crop yields, yet it has introduced unintended impacts on the environment. An expected 50% growth in world population during the 21st century demands novel integration of advanced technologies and low-input production systems based on soil and plant biology, targeting precision delivery of inputs synchronized with the growth stages of crop plants. Further, successful systems will integrate subsurface water, air and nutrient delivery, real-time soil parameter data and computer-based decision-making to mitigate plant stress and actively manipulate microbial rhizosphere communities that stimulate productivity. Such an approach will ensure food security and mitigate the impacts of climate change.

  6. Thermodynamic analysis and optimization of an irreversible Ericsson cryogenic refrigerator cycle

    International Nuclear Information System (INIS)

    Ahmadi, Mohammad Hossein; Ahmadi, Mohammad Ali

    2015-01-01

    Highlights: • Thermodynamic modeling of Ericsson refrigeration is performed. • This is achieved using the NSGA algorithm and thermodynamic analysis. • Different decision makers are utilized to determine optimum values of the outcomes. - Abstract: Optimum ecological and thermal performance of an Ericsson cryogenic refrigerator system is assessed in different optimization settings. To this end, ecological and thermal approaches are proposed for the Ericsson cryogenic refrigerator, and three objective functions (input power, coefficient of performance and an ecological objective function) are obtained for the suggested system. In the current research, an evolutionary algorithm (EA) and thermodynamic analysis are employed to determine optimum values of the input power, coefficient of performance and ecological objective function of the Ericsson cryogenic refrigerator system. Four setups are assessed for optimization of the refrigerator. In the first three scenarios, a conventional single-objective optimization is applied to each objective function separately, regardless of the other objectives. In the last setup, the input power, coefficient of performance and ecological function objectives are optimized concurrently employing a non-dominated sorting genetic algorithm (NSGA-II). In multi-objective optimization, a set of optimum results forming the Pareto optimum frontier is obtained rather than the single optimum result given by conventional single-objective optimization; a decision-making process is therefore used to choose a final optimum result. Well-known decision makers are applied to select the optimized outcome from the Pareto optimum results in objective space. The outcomes of the aforementioned optimization setups are discussed and compared employing an index of deviation presented in this
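
    The multi-objective setup described above returns a Pareto frontier rather than a single solution; a candidate is kept only if no other candidate is at least as good in every objective and strictly better in one. The sketch below shows this non-dominated filtering on hypothetical candidate designs; it is not the NSGA-II implementation used in the paper.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points, assuming all objectives are minimized."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= p everywhere and < p somewhere.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidate designs: (input power, -COP, -ecological function),
# with signs flipped so that every objective is a minimization.
designs = [(3.1, -2.4, -1.8), (2.8, -2.1, -1.9), (3.5, -2.6, -2.0), (3.0, -2.0, -1.5)]
print("Pareto-optimal designs:", pareto_front(designs))
```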

  7. Simple Design Tool for Development of Well Insulated Window Frames and Optimization of the Frame Geometry

    DEFF Research Database (Denmark)

    Zajas, Jan Jakub; Heiselberg, Per

    2012-01-01

    in order to approach an optimal solution. The program was also used to conduct an optimization process of the frame geometry. A large number of various window frame designs were created and evaluated, based on their insulation properties. The paper presents the investigation process and some of the best......This paper describes a design tool created with the purpose of designing highly insulated window frames. The design tool is based on a parametric model of the frame geometry, where various parameters describing the frame can be easily changed by the user. Based on this input, geometry of the frame...... is generated by the program and is used by the finite element simulator to calculate the thermal performance of the frame (the U value). After the initial design is evaluated, the user can quickly modify chosen parameters and generate a new design. This process can then be repeated in multiple iterations...

  8. Evaluation of Building Energy Saving Through the Development of Venetian Blinds' Optimal Control Algorithm According to the Orientation and Window-to-Wall Ratio

    Science.gov (United States)

    Kwon, Hyuk Ju; Yeon, Sang Hun; Lee, Keum Ho; Lee, Kwang Ho

    2018-02-01

    As studies focusing on building energy saving continue to be conducted, studies utilizing renewable energy sources instead of fossil fuels are needed. In particular, studies regarding solar energy are being carried out in the field of building science; in order to utilize solar energy effectively, the solar radiation entering the indoor space should be admitted or blocked appropriately. Blinds are a typical solar radiation control device capable of controlling the indoor thermal and light environments. However, slat-type blinds are usually controlled manually, which has a negative effect on building energy saving, and studies regarding their automatic control have therefore been carried out over the last couple of decades. This study aims to provide preliminary data for optimal control research by controlling the slat angle of slat-type blinds while comprehensively considering various input variables; the window-to-wall ratio and orientation were selected as input variables. It was found that the optimal control algorithm differed for each window-to-wall ratio and window orientation. By applying the developed algorithms in simulations and comparing the building energy saving performance for each condition, energy savings of up to 20.7 % were shown in the cooling period and up to 12.3 % in the heating period. The building energy saving effect was also greater as the window-to-wall ratio increased for a given orientation, and the effect of the window-to-wall ratio was larger in the cooling period than in the heating period.

  9. Optimization design of a 20-in. elliptical MCP-PMT

    International Nuclear Information System (INIS)

    Chen, Ping; Tian, Jinshou; Wei, Yonglin; Liu, Hulin; Sai, Xiaofeng; He, Jianping; Chen, Lin; Wang, Xing; Lu, Yu

    2017-01-01

    This paper describes the simulation work for optimizing the newly developed 20-in. elliptical MCP-PMT by enlarging the outside diameters of the two focusing electrodes and the open area of the glass bulb. The effects of the biasing voltages applied to the two focusing electrodes and to the MCP input facet are studied. With the new design of the 20-in. MCP-PMT, the transit time spread of the prototype can be less than 3 ns and the collection efficiency is as high as that of the present prototype.

  10. Optimizing noise control strategy in a forging workshop.

    Science.gov (United States)

    Razavi, Hamideh; Ramazanifar, Ehsan; Bagherzadeh, Jalal

    2014-01-01

    In this paper, a computer program based on a genetic algorithm is developed to find an economic solution for noise control in a forging workshop. Initially, input data, including characteristics of sound sources, human exposure, abatement techniques, and production plans are inserted into the model. Using sound pressure levels at working locations, the operators who are at higher risk are identified and picked out for the next step. The program is devised in MATLAB such that the parameters can be easily defined and changed for comparison. The final results are structured into 4 sections that specify an appropriate abatement method for each operator and machine, minimum allowance time for high-risk operators, required damping material for enclosures, and minimum total cost of these treatments. The validity of input data in addition to proper settings in the optimization model ensures the final solution is practical and economically reasonable.
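
    Identifying the operators at higher risk, as described above, amounts to energetically summing the sound pressure level contributions of all sources at each working location and comparing the total with an exposure limit. The sketch below assumes a simple 85 dB(A) action level and hypothetical machine contributions; it is not the authors' MATLAB program.

```python
import math

def combined_spl(levels_db):
    """Energetically sum sound pressure levels from several sources (dB)."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

# Hypothetical contributions of three forging machines at one operator position.
contributions = [82.0, 85.0, 78.0]        # dB(A) at the operator's location
total = combined_spl(contributions)
print(f"combined level: {total:.1f} dB(A)")
print("high-risk operator" if total > 85.0 else "below the 85 dB(A) action level")
```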

  11. Input and execution

    International Nuclear Information System (INIS)

    Carr, S.; Lane, G.; Rowling, G.

    1986-11-01

    This document describes the input procedures, input data files and operating instructions for the SYVAC A/C 1.03 computer program. SYVAC A/C 1.03 simulates the groundwater mediated movement of radionuclides from underground facilities for the disposal of low and intermediate level wastes to the accessible environment, and provides an estimate of the subsequent radiological risk to man. (author)

  12. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  13. Trajectory of Sewerage System Development Optimization

    Science.gov (United States)

    Chupin, R. V.; Mayzel, I. V.; Chupin, V. R.

    2017-11-01

    The transition to market relations has brought a new approach for our country to managing the development of urban engineering systems. This approach has shifted to the municipal level and can, in large part, be presented in two stages: the first is the development of a scheme for the water supply and sanitation system, and the second is the implementation of this scheme on the basis of the investment programs of utilities. In these investment programs, financial support for the development and reconstruction of water disposal systems comes from the investment component in the tariff, connection fees for newly commissioned capital construction projects, targeted financing from selected state and municipal programs, and loans and credits. Funding for the development of sewerage systems is therefore limited, and the problem arises of rationally distributing it between the construction of new water disposal facilities and the reconstruction of existing ones. The paper suggests a methodology for developing options for the development of sewerage systems and selecting the best of them by the life cycle cost criterion, taking into account the limited investment in their construction; it also presents models and methods of analysis for optimizing their reconstruction and development, taking into account reliability and seismic resistance.

  14. Effect of welding heat input on microstructures and toughness in simulated CGHAZ of V–N high strength steel

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Jun, E-mail: hujunral@163.com [The State Key Laboratory of Rolling and Automation, Northeastern University, Shenyang 110819 (China); Du, Lin-Xiu [The State Key Laboratory of Rolling and Automation, Northeastern University, Shenyang 110819 (China); Wang, Jian-Jun [Institute of Materials Research, School of Material and Metallurgy, Northeastern university, Shenyang 110819 (China); Gao, Cai-Ru [The State Key Laboratory of Rolling and Automation, Northeastern University, Shenyang 110819 (China)

    2013-08-10

    For the purpose of obtaining the appropriate heat input in the simulated weld CGHAZ of the hot-rolled V–N microalloyed high strength S-lean steel, the microstructural evolution, hardness, and toughness subjected to four different heat inputs were investigated. The results indicate that the hardness decreases with increase in the heat input, while the toughness first increases and then decreases. Moderate heat input is optimum, and the microstructure is fine polygonal ferrite, granular bainite, and acicular ferrite with dispersive nano-scale V(C,N) precipitates. The hardness is well-matched with that of the base metal. Moreover, the occurrence of energy dissipating micromechanisms (ductile dimples, tear ridges) contributes to the maximum total impact energy. The detrimental effect of the free N atoms on the toughness can be partly remedied by optimizing the microstructural type, fraction, morphologies, and crystallographic characteristics. The potency of V(C,N) precipitates on intragranular ferrite nucleation without MnS assistance under different heat inputs was discussed.

  15. Optimization and Validation of the Developed Uranium Isotopic Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. H.; Kang, M. Y.; Kim, Jinhyeong; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    γ-ray spectroscopy is a representative non-destructive assay (NDA) for nuclear material, and is less time-consuming and less expensive than destructive analysis. Although the destructive technique is more precise than the NDA technique, correction algorithms can improve the performance of γ-spectroscopy. For this reason, an analysis code for uranium isotopic analysis was developed by the Applied Nuclear Physics Group at Seoul National University. Overlapping γ- and x-ray peaks in the 89-101 keV X{sub α}-region are fitted with Gaussian and Lorentzian peak functions together with tail and background functions. In this study, the full-energy peak efficiency calibration and the fitting parameters of the peak tail and background are optimized and validated with 24 hour acquisitions of CRM uranium samples. The analysis performance is improved for HEU samples, but further optimization of the fitting parameters is required for LEU sample analysis. In the future, the fitting parameters will be optimized for various types of uranium samples, and {sup 234}U isotopic analysis algorithms and correction algorithms (coincidence effect, self-attenuation effect) will be developed.

  16. The Development and Microstructure Analysis of High Strength Steel Plate NVE36 for Large Heat Input Welding

    Science.gov (United States)

    Peng, Zhang; Liangfa, Xie; Ming, Wei; Jianli, Li

    In the shipbuilding industry, the welding efficiency of the ship plate not only has a great effect on the construction cost of the ship, but also affects the construction speed and determines the delivery cycle. A steel plate suitable for large heat input welding was therefore developed. In this paper, the steel composition was designed along a micro-alloyed route, with small amounts of Nb and Ti and a large amount of Mn; the C content and the carbon equivalent were also kept at a low level. Oxide metallurgy was used during the smelting of the steel. The TMCP rolling was controlled at a low rolling temperature and ultra-fast cooling was used in order to control the transformation of the microstructure, which was controlled to be a mixture of low carbon bainite and ferrite. A large number of oxide particles were dispersed in the microstructure, which had a positive effect on the mechanical properties and welding performance of the steel. The mechanical properties of the steel plate were excellent, with a longitudinal Akv value at -60 °C of more than 200 J. The toughness of the WM and HAZ was excellent after the steel plate was welded with a large heat input of 100-250 kJ/cm. The steel plate processed as described above can meet the requirements of large heat input welding.

  17. DETERMINATION OF THE OPTIMAL CAPITAL INVESTMENTS TO ENSURE THE SUSTAINABLE DEVELOPMENT OF THE RAILWAY

    Directory of Open Access Journals (Sweden)

    O. I. Kharchenko

    2015-04-01

    Full Text Available Purpose. Every year more attention is paid to the theoretical and practical issues of sustainable development of railway transport, but the mechanisms of financial support for this development remain poorly understood. Therefore, the aim of this article is to determine the optimal investment allocation to ensure sustainable development of railway transport, using State Enterprise «Prydniprovsk Railway» as an example, and to create the preconditions for developing a mathematical model. Methodology. The task of ensuring sustainable development of railway transport is solved on the basis of an integral indicator of sustainable development effectiveness and is formulated as the maximization of this criterion. The optimization of measures of a technological and technical character is proposed in order to increase the values of the components of the integral performance indicator. The optimization activities of a technological nature that enhance the performance criteria include: optimization of the number of train and shunting locomotives, optimization of the power of handling mechanisms at the stations, and optimization of the routes of train flows. The activities of a technical nature include: modernization of railways towards their electrification, modernization of the running gear and coupler drawbars of rolling stock, and mechanization of separators at stations to reduce noise impacts on the environment. Findings. The work resulted in the optimal allocation of investments to ensure the sustainable development of railway transportation at State Enterprise «Prydniprovsk Railway». This allows a mode of railway development in which the operation of State Enterprise «Prydniprovsk Railway» is characterized by the maximum value of the integral indicator of efficiency. Originality. A new approach was proposed to determine the optimal allocation of capital investments to ensure sustainable

  18. iTOUGH2 Universal Optimization Using the PEST Protocol

    International Nuclear Information System (INIS)

    Finsterle, S.A.

    2010-01-01

    iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models

  19. Uncertainty in BMP evaluation and optimization for watershed management

    Science.gov (United States)

    Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.

    2012-12-01

    Use of computer simulation models has increased substantially to make watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, use of simulation models in optimizing selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One of the limitations of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g. measured precipitation, streamflow, sediment, nutrient and pesticide losses, land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We have used a watershed model (Soil and Water Assessment Tool or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwest USA. The SWAT model was also used to represent various BMPs in the watershed needed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads. In addition, climate change scenarios also affected uncertainties in SWAT

  20. SU-E-T-488: An Iso-Dose Curve Based Interactive IMRT Optimization System for Physician-Driven Plan Tuning

    International Nuclear Information System (INIS)

    Shi, F; Tian, Z; Jia, X; Jiang, S; Zarepisheh, M; Cervino, L

    2014-01-01

    Purpose: In treatment plan optimization for Intensity Modulated Radiation Therapy (IMRT), after a plan is initially developed by a dosimetrist, the attending physician evaluates its quality and often would like to improve it. As opposed to having the dosimetrist implement the improvements, it is desirable to have the physician directly and efficiently modify the plan for a more streamlined and effective workflow. In this project, we developed an interactive optimization system for physicians to conveniently and efficiently fine-tune iso-dose curves. Methods: An interactive interface is developed under C++/Qt. The physician first examines iso-dose lines. S/he then picks an iso-dose curve to be improved and drags it to a more desired configuration using a computer mouse or touchpad. Once the mouse is released, a voxel-based optimization engine is launched. The weighting factors corresponding to voxels between the iso-dose lines before and after the dragging are modified. The underlying algorithm then takes these factors as input to re-optimize the plan in near real-time on a GPU platform, yielding a new plan best matching the physician's desire. The re-optimized DVHs and iso-dose curves are then updated for the next iteration of modifications. This process is repeated until a physician satisfactory plan is achieved. Results: We have tested this system for a series of IMRT plans. Results indicate that our system provides the physicians an intuitive and efficient tool to edit the iso-dose curves according to their preference. The input information is used to guide plan re-optimization, which is achieved in near real-time using our GPU-based optimization engine. Typically, a satisfactory plan can be developed by a physician in a few minutes using this tool. Conclusion: With our system, physicians are able to manipulate iso-dose curves according to their preferences. Preliminary results demonstrate the feasibility and effectiveness of this tool
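
    The re-optimization step described above can be pictured as a voxel-weighted least-squares dose objective in which the weights of the voxels swept by the dragged iso-dose line are increased before the plan is re-solved. The sketch below is a toy unconstrained version with random data; the actual system uses a GPU solver and realistic dose-deposition matrices, and beamlet non-negativity is ignored here.

```python
import numpy as np

# Toy voxel-weighted re-optimization: dose d = D @ x, where D is a dose-deposition
# matrix, x the beamlet intensities, and w per-voxel weights that are increased for
# the voxels the physician "dragged" the iso-dose line across.
rng = np.random.default_rng(1)
n_voxels, n_beamlets = 60, 12
D = rng.uniform(0.0, 1.0, (n_voxels, n_beamlets))
d_presc = rng.uniform(50.0, 60.0, n_voxels)      # desired dose per voxel (Gy)
w = np.ones(n_voxels)
w[10:20] *= 10.0                                  # voxels selected by the physician

# Weighted least squares: minimize sum_i w_i * (D_i x - d_i)^2 (x unconstrained here).
W = np.diag(w)
x = np.linalg.solve(D.T @ W @ D, D.T @ W @ d_presc)
print("re-optimized beamlet intensities:", np.round(x, 2))
```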

  1. Optimization and Optimal Control

    CERN Document Server

    Chinchuluun, Altannar; Enkhbat, Rentsen; Tseveendorj, Ider

    2010-01-01

    During the last four decades there has been a remarkable development in optimization and optimal control. Due to its wide variety of applications, many scientists and researchers have paid attention to fields of optimization and optimal control. A huge number of new theoretical, algorithmic, and computational results have been observed in the last few years. This book gives the latest advances, and due to the rapid development of these fields, there are no other recent publications on the same topics. Key features: Provides a collection of selected contributions giving a state-of-the-art accou

  2. A Holistic Concept to Design Optimal Water Supply Infrastructures for Informal Settlements Using Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Lea Rausch

    2018-02-01

    Full Text Available Ensuring access to water and sanitation for all is Goal No. 6 of the 17 UN Sustainable Development Goals to transform our world. As one step towards this goal, we present an approach that leverages remote sensing data to plan optimal water supply networks for informal urban settlements. The concept focuses on slums within large urban areas, which are often characterized by a lack of an appropriate water supply. We apply methods of mathematical optimization aiming to find a network describing the optimal supply infrastructure. Hereby, we choose between different decentralized and centralized approaches, combining supply by motorized vehicles with supply by pipe systems. For the purposes of illustration, we apply the approach to two small slum clusters in Dhaka and Dar es Salaam. We show our optimization results, which represent the lowest-cost water supply systems possible. Additionally, we compare the optimal solutions of the two clusters (also for varying input parameters, such as population densities and slum size development over time) and describe how the result of the optimization depends on the entered remote sensing data.

  3. Pilot study of a novel tool for input-free automated identification of transition zone prostate tumors using T2- and diffusion-weighted signal and textural features.

    Science.gov (United States)

    Stember, Joseph N; Deng, Fang-Ming; Taneja, Samir S; Rosenkrantz, Andrew B

    2014-08-01

    To present results of a pilot study to develop software that identifies regions suspicious for prostate transition zone (TZ) tumor, free of user input. Eight patients with TZ tumors were used to develop the model by training a Naïve Bayes classifier to detect tumors based on selection of the most accurate predictors among various signal and textural features on T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. In the training cases, the software tiled the TZ with 4 × 4-voxel "supervoxels," 80% of which were used to train the classifier. Features tested as inputs were: average signal, signal standard deviation, energy, contrast, correlation, homogeneity and entropy (all defined on T2WI); and average ADC. A forward selection scheme was used on the remaining 20% of training set supervoxels to identify important inputs. Each of 100 iterations selected T2WI energy and average ADC, which were therefore deemed the optimal model inputs. The trained model was then tested on a different set of ten patients, half with TZ tumors. The two-feature model was applied blindly to this separate set of test patients, again without operator input of suspicious foci. The software correctly predicted the presence or absence of TZ tumor in all test patients. Furthermore, the locations of predicted tumors corresponded spatially with the locations of biopsies that had confirmed their presence. Preliminary findings suggest that this tool has the potential to accurately predict TZ tumor presence and location, without operator input. © 2013 Wiley Periodicals, Inc.
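
    The two-feature classifier selected above (T2WI energy and mean ADC) can be reproduced in spirit with an off-the-shelf Gaussian Naïve Bayes model. The sketch below uses entirely hypothetical supervoxel feature values and labels, not the study's data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical supervoxel features: [T2WI energy, mean ADC (10^-6 mm^2/s)].
X_train = np.array([[0.12, 650], [0.10, 700], [0.35, 1100],
                    [0.33, 1200], [0.11, 690], [0.36, 1150]])
y_train = np.array([1, 1, 0, 0, 1, 0])       # 1 = tumor, 0 = benign TZ tissue

clf = GaussianNB().fit(X_train, y_train)

# Score unseen supervoxels and report how suspicious the model finds them.
X_test = np.array([[0.13, 660], [0.34, 1180]])
probs = clf.predict_proba(X_test)[:, 1]
for features, p in zip(X_test, probs):
    print(f"supervoxel {features}: P(tumor) = {p:.2f}")
```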

  4. Memory State Feedback RMPC for Multiple Time-Delayed Uncertain Linear Systems with Input Constraints

    Directory of Open Access Journals (Sweden)

    Wei-Wei Qin

    2014-01-01

    Full Text Available This paper focuses on the problem of asymptotic stabilization for a class of discrete-time multiple time-delayed uncertain linear systems with input constraints. Based on the predictive control principle of receding horizon optimization, a delayed-state-dependent quadratic function is considered in formulating the MPC problem. By developing a memory state feedback controller, the information in the delayed plant states can be taken fully into consideration. The MPC problem is formulated to minimize the upper bound of the infinite horizon cost subject to the sufficient conditions. Then, based on a Lyapunov-Krasovskii function, a delay-dependent sufficient condition in terms of a linear matrix inequality (LMI) is derived to design a robust MPC algorithm. Finally, digital simulation results demonstrate the effectiveness of the proposed method.

  5. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-01-01

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN

  6. Extension of an Object-Oriented Optimization Tool: User's Reference Manual

    Science.gov (United States)

    Pak, Chan-Gi; Truong, Samson S.

    2015-01-01

    The National Aeronautics and Space Administration Armstrong Flight Research Center has developed a cost-effective and flexible object-oriented optimization (O (sup 3)) tool that leverages existing tools and practices and allows easy integration and adoption of new state-of-the-art software. This object-oriented framework can integrate the analysis codes for multiple disciplines, as opposed to relying on one code to perform analysis for all disciplines. Optimization can thus take place within each discipline module, or in a loop between the O (sup 3) tool and the discipline modules, or both. Six different sample mathematical problems are presented to demonstrate the performance of the O (sup 3) tool. Instructions for preparing input data for the O (sup 3) tool are detailed in this user's manual.

  7. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    Science.gov (United States)

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input-affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of the cost functions of the individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward-in-time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time of the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic and result in variable interevent times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and that the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  8. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  9. Optimization of an artificial neural network dedicated to the multivariate forecasting of daily global radiation

    International Nuclear Information System (INIS)

    Voyant, Cyril; Muselli, Marc; Paoli, Christophe; Nivet, Marie-Laure

    2011-01-01

    This paper presents an application of Artificial Neural Networks (ANNs) to predict daily solar radiation. We look at the Multi-Layer Perceptron (MLP) network, which is the most widely used ANN architecture. In previous studies, we developed an ad-hoc time series preprocessing and optimized an MLP with endogenous inputs in order to forecast the solar radiation on a horizontal surface. In this paper we study the contribution of exogenous meteorological data (multivariate method), used as time series, to our optimized MLP and compare it with different forecasting methods: a naive forecaster (persistence), an ARIMA reference predictor, an ANN with preprocessing using only endogenous inputs (univariate method) and an ANN with preprocessing using endogenous and exogenous inputs. The use of exogenous data yields an nRMSE decrease of between 0.5% and 1% for two stations during 2006 and 2007 (Corsica Island, France). The prediction results are also relevant for the concrete case of a tilted PV wall (1.175 kWp): the addition of endogenous and exogenous data allows a 1% decrease of the nRMSE in the power production over a 6-month cloudy period. While the use of exogenous data is of interest in winter, endogenous inputs to a preprocessed ANN seem sufficient in summer. -- Research highlights: → Use of exogenous data as ANN inputs to forecast horizontal daily global irradiation. → A new methodology for choosing adequate exogenous data - a systematic method comparing endogenous and exogenous data. → Comparison with different reference mathematical predictors allows conclusions to be drawn about the pertinence of the proposed methodology.
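
    A minimal multivariate MLP forecast of the kind compared above can be sketched as follows; the endogenous and exogenous series are synthetic stand-ins, and the network size and nRMSE definition are assumptions rather than the authors' exact preprocessing.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical preprocessed series: yesterday's clearness index (endogenous)
# plus two exogenous meteorological inputs (e.g. pressure anomaly, sunshine hours).
n = 500
endo = rng.uniform(0.2, 0.8, n)
exo = rng.normal(size=(n, 2))
y = 0.7 * endo + 0.1 * exo[:, 0] - 0.05 * exo[:, 1] + rng.normal(0, 0.05, n)

X = np.column_stack([endo, exo])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                   random_state=0))
model.fit(X[:400], y[:400])

pred = model.predict(X[400:])
nrmse = np.sqrt(np.mean((pred - y[400:]) ** 2)) / np.mean(y[400:])
print(f"nRMSE on the hold-out period: {100 * nrmse:.1f}%")
```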

  10. Developing a simulation framework for safe and optimal trajectories considering drivers’ driving style

    DEFF Research Database (Denmark)

    Gruber, Thierry; Larue, Grégoire S.; Rakotonirainy, Andry

    2017-01-01

    drivers with the optimal trajectory considering the motorist's driving style in real time. Travel duration and safety are the main parameters used to find the optimal trajectory. A simulation framework to determine the optimal trajectory was developed in which the ego car travels in a highway environment......Advanced driving assistance systems (ADAS) have huge potential for improving road safety and travel times. However, their take-up in the market is very slow; and these systems should consider driver's preferences to increase adoption rates. The aim of this study is to develop a model providing...

  11. Water resources and environmental input-output analysis and its key study issues: a review

    Science.gov (United States)

    YANG, Z.; Xu, X.

    2013-12-01

    Used to study material and energy flows in the socioeconomic system, Input-Output Analysis (IOA) has been an effective analysis tool since its appearance, and its research fields have expanded and deepened with the development of the underlying theory. In this paper, starting with an introduction to the development of the theory, water resources input-output analysis and environmental input-output analysis are specifically reviewed, and two key study issues are discussed. Input-Occupancy-Output Analysis and Grey Input-Output Analysis, whose proposal and development are introduced first, can be regarded as effective complements to traditional IOA theory. Because of the hypotheses of homogeneity, stability and proportionality, however, both have inevitably been restricted in practical application. On the applied side, drawing on an extensive body of literature, research on water resources input-output analysis and environmental input-output analysis is comprehensively reviewed and analyzed. Regional water resources flows between different economic sectors are systematically analyzed, and several types of environmental input-output analysis models combined with other analysis tools are summarized. The development of water resources and environmental input-output analysis models is explained from both international and domestic perspectives, and several typical study cases from recent years are listed. With the aid of this literature analysis, development tendencies and study hotspots are also summarized. In recent years, Chinese literature reporting water resources consumption analysis and virtual water studies has occupied a large share. Water resources consumption analysis has always been the emphasis of domestic water resources IOA. Virtual water study has been considered the new hotspot of

  12. Plastic plate bending problem with friction on the boundary and uncertain input data

    Czech Academy of Sciences Publication Activity Database

    Hlaváček, Ivan

    2010-01-01

    Roč. 31, č. 4 (2010), s. 414-439 ISSN 0163-0563 R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords: anti-optimization * deformation theory of plasticity * Kačanov method * uncertain input data * worst scenario Subject RIV: BA - General Mathematics Impact factor: 0.687, year: 2010 http://www.tandfonline.com/doi/abs/10.1080/01630563.2010.483311

  13. Fuel economy and torque tracking in camless engines through optimization of neural networks

    International Nuclear Information System (INIS)

    Ashhab, Moh'd Sami S.

    2008-01-01

    The feedforward controller of a camless internal combustion engine is modeled by inverting a multi-input multi-output feedforward artificial neural network (ANN) model of the engine. The engine outputs, pumping loss and cylinder air charge, are related to the inputs, intake valve lift and closing timing, by the artificial neural network model, which is trained with historical input-output data. The controller selects the intake valve lift and closing timing that will minimize the pumping loss and achieve engine torque tracking. Lower pumping loss means better fuel economy, whereas engine torque tracking guarantees the driver's torque demand is met. The inversion of the ANN is performed with the complex method of constrained optimization. How the camless engine inverse controller can be augmented with adaptive techniques to maintain accuracy even when engine parts degrade is also discussed. The simulation results demonstrate the effectiveness of the developed camless engine controller
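
    The controller described above inverts the forward engine model by searching for the valve lift and closing timing that minimize pumping loss while tracking the demanded air charge. The sketch below substitutes an analytic stand-in for the trained ANN and a generic bounded optimizer for the complex method, so all functions and numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for a trained forward model: maps (valve lift, closing timing) to
# (pumping loss, cylinder air charge). A real ANN would be used in its place.
def engine_model(u):
    lift, timing = u
    pumping_loss = 0.4 * (1.2 - lift) ** 2 + 0.02 * (timing - 120.0) ** 2 / 100.0
    air_charge = 0.5 * lift + 0.002 * (240.0 - timing)
    return pumping_loss, air_charge

def controller(air_charge_demand):
    """Invert the forward model: minimize pumping loss while meeting the demand."""
    def cost(u):
        loss, charge = engine_model(u)
        return loss + 50.0 * (charge - air_charge_demand) ** 2  # torque tracking term
    res = minimize(cost, x0=[0.6, 150.0],
                   bounds=[(0.1, 1.2), (90.0, 240.0)])
    return res.x

lift, timing = controller(air_charge_demand=0.55)
print(f"valve lift = {lift:.2f}, closing timing = {timing:.1f} deg")
```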

  14. Optimal switching using coherent control

    DEFF Research Database (Denmark)

    Kristensen, Philip Trøst; Heuck, Mikkel; Mørk, Jesper

    2013-01-01

    …that the switching time, in general, is not limited by the cavity lifetime. Therefore, the total energy required for switching is a more relevant figure of merit than the switching speed, and for a particular two-pulse switching scheme we use calculus of variations to optimize the switching in terms of input energy…

  15. SSYST-3. Input description

    International Nuclear Information System (INIS)

    Meyder, R.

    1983-12-01

    The code system SSYST-3 is designed to analyse the thermal and mechanical behaviour of a fuel rod during a LOCA. The report contains a complete input-list for all modules and several tested inputs for a LOCA analysis. (orig.)

  16. Teleportation of squeezing: Optimization using non-Gaussian resources

    Science.gov (United States)

    Dell'Anno, Fabio; de Siena, Silvio; Adesso, Gerardo; Illuminati, Fabrizio

    2010-12-01

    We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, Phys. Rev. A 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.

  17. Material input of nuclear fuel

    International Nuclear Information System (INIS)

    Rissanen, S.; Tarjanne, R.

    2001-01-01

    The Material Input (MI) of nuclear fuel, expressed in terms of the total amount of natural material needed for manufacturing a product, is examined. The suitability of the MI method for assessing the environmental impacts of fuels is also discussed. Material input is expressed as a Material Input Coefficient (MIC), equal to the total mass of natural material divided by the mass of the completed product. The material input coefficient is, however, only an intermediate result, which should not be used as such for the comparison of different fuels, because the energy content of nuclear fuel is about 100 000-fold compared to the energy content of fossil fuels. As a final result, the material input is expressed in proportion to the amount of generated electricity, which is called MIPS (Material Input Per Service unit). Material input is a simplified and commensurable indicator for the use of natural material, but because it does not take into account the harmfulness of materials or the way in which the residual material is processed, it does not by itself express the amount of environmental impact. Examining the mere amount does not differentiate between, for example, coal, natural gas, or waste rock that usually contains just sand; natural gas is, however, substantially more harmful for the ecosystem than sand. Therefore, other methods should also be used to consider the environmental load of a product. The material input coefficient of nuclear fuel is calculated using data from different types of mines. The calculations are made, among other things, using data from an open pit mine (Key Lake, Canada), an underground mine (McArthur River, Canada) and a by-product mine (Olympic Dam, Australia). Furthermore, the coefficient is calculated for nuclear fuel corresponding to the nuclear fuel supply of the Teollisuuden Voima (TVO) company in 2001. Because there is some uncertainty in the initial data, the inaccuracy of the final results can be as much as 20-50 per cent. The value…
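
    As a minimal illustration of the two quantities defined above, the sketch below computes a material input coefficient (MIC) and a MIPS value. All of the masses and the electricity output used here are invented placeholders for the example, not figures from the report.

```python
# Minimal sketch of the MIC / MIPS arithmetic described above.
# All numbers are hypothetical placeholders, not values from the report.

natural_material_kg = 2.0e6   # total natural material moved to produce the fuel (assumed)
fuel_mass_kg = 1.0e3          # mass of the completed fuel product (assumed)
electricity_kwh = 4.5e8       # electricity generated from that fuel (assumed)

mic = natural_material_kg / fuel_mass_kg       # Material Input Coefficient (kg natural material per kg fuel)
mips = natural_material_kg / electricity_kwh   # Material Input Per Service unit (kg per kWh)

print(f"MIC  = {mic:.1f} kg natural material per kg fuel")
print(f"MIPS = {mips:.2e} kg natural material per kWh")
```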

  18. Adaptive hybrid optimal quantum control for imprecisely characterized systems.

    Science.gov (United States)

    Egger, D J; Wilhelm, F K

    2014-06-20

    Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.
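
    The two-stage idea described in the abstract (an open-loop, model-based gradient search followed by a closed-loop, gradient-free refinement against measured fidelity) can be sketched as follows. This is not the authors' Ad-HOC implementation: it is a toy single-qubit example in which the "experiment" is simulated by perturbing the model with an assumed calibration error, and the pulse parameterization, target gate, step sizes, and choice of Nelder-Mead are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize  # gradient-free Nelder-Mead for the closed-loop stage

# --- toy model: a single qubit driven by piecewise-constant x-pulses --------------
N, dt = 10, 0.1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
target = sx                                   # target gate: X

def propagator(amps, scale=1.0):
    """Product of slice propagators exp(-i * scale*a_k*dt/2 * sigma_x)."""
    U = np.eye(2, dtype=complex)
    for a in amps:
        theta = scale * a * dt
        U = (np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sx) @ U
    return U

def fidelity(amps, scale=1.0):
    return abs(np.trace(target.conj().T @ propagator(amps, scale))) / 2

# --- stage 1: open-loop gradient ascent on the (imperfect) model ------------------
amps = np.full(N, 1.0)
for _ in range(200):
    grad = np.zeros(N)
    for k in range(N):                        # finite-difference gradient
        e = np.zeros(N)
        e[k] = 1e-6
        grad[k] = (fidelity(amps + e) - fidelity(amps - e)) / 2e-6
    amps += 5.0 * grad

# --- stage 2: closed-loop, gradient-free refinement against the "experiment" ------
SCALE_ERR = 1.07                              # assumed calibration error, unknown to the model

def measured(a):                              # negative fidelity as seen by the "experiment"
    return -fidelity(a, scale=SCALE_ERR)

res = minimize(measured, amps, method="Nelder-Mead", options={"maxiter": 2000, "xatol": 1e-6})

print("model fidelity after stage 1 :", fidelity(amps))
print("measured fidelity, stage 1   :", fidelity(amps, SCALE_ERR))
print("measured fidelity, stage 2   :", fidelity(res.x, SCALE_ERR))
```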

  19. Characterization of Input Current Interharmonics in Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Soltani, Hamid; Davari, Pooya; Zare, Firuz

    2017-01-01

    This paper investigates the interharmonic generation process in the input current of double-stage Adjustable Speed Drives (ASDs) based on voltage source inverters and front-end diode rectifiers. The effects of the inverter output-side low order harmonics, caused by implementing the double-edge symmetrical regularly sampled Space Vector Modulation (SVM) technique, on the input current interharmonic components are presented and discussed. Particular attention is also given to the influence of the asymmetrical regularly sampled modulation technique on the drive input current interharmonics. The developed theoretical analysis predicts the drive interharmonic frequency locations with respect to the selected sampling strategies. Simulation and experimental results on a 2.5 kW ASD system verify the effectiveness of the theoretical analysis.

  20. Optimal policy of energy innovation in developing countries: Development of solar PV in Iran

    International Nuclear Information System (INIS)

    Shafiei, Ehsan; Saboohi, Yadollah; Ghofrani, Mohammad B.

    2009-01-01

    The purpose of this study is to apply managerial economics and methods of decision analysis to study the optimal pattern of innovation activities for the development of new energy technologies in developing countries. For this purpose, a model of energy research and development (R and D) planning is developed and linked to a bottom-up energy-systems model. The set of interlinked models provides a comprehensive analytical tool for the assessment of energy technologies and innovation planning, taking into account the specific conditions of developing countries. An energy-system model is used as a tool for the assessment and prioritization of new energy technologies. Based on the results of the technology assessment model, the optimal allocation of R and D resources to new energy technologies is estimated with the help of the R and D planning model. The R and D planning model is based on maximization of the total net present value of the resulting R and D benefits, taking into account the dynamics of technological progress, knowledge and experience spillovers from advanced economies, technology adoption, and R and D constraints. Application of the set of interlinked models is illustrated through an analysis of the development of solar PV in the Iranian electricity supply system, and some important policy insights are then drawn

  1. Optimal time-domain technique for pulse width modulation in power electronics

    Directory of Open Access Journals (Sweden)

    I. Mayergoyz

    2018-05-01

    An optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimality criteria are discussed and illustrated by numerical examples.

  2. Stochastic analysis and robust optimization for a deck lid inner panel stamping

    International Nuclear Information System (INIS)

    Hou, Bo; Wang, Wurong; Li, Shuhui; Lin, Zhongqin; Xia, Z. Cedric

    2010-01-01

    FE-simulation and optimization are widely used in the stamping process to improve design quality and shorten the development cycle. However, current simulation and optimization may lead to non-robust results because the variation of material and process parameters is not considered. In this study, a novel stochastic analysis and robust optimization approach is proposed to improve stamping robustness, in which uncertainties are included to reflect manufacturing reality. A meta-model based stochastic analysis method is developed, in which FE-simulation, uniform design and response surface methodology (RSM) are used to construct the meta-model, and Monte-Carlo simulation is then performed on it to predict the influence of input parameter variation on final product quality. By applying the stochastic analysis, uniform design and RSM, the mean and the standard deviation (SD) of product quality are calculated as functions of the controllable process parameters. A robust optimization model composed of the mean and SD is constructed and solved, and its result is compared with the deterministic one to show its advantages. It is demonstrated that product quality variations are reduced significantly, and quality targets (reject rate) are achieved under the robust optimal solution. The developed approach offers rapid and reliable results for engineers dealing with potential stamping problems during the early phase of product and tooling design, saving time and resources.
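
    A minimal sketch of the meta-model-based stochastic analysis described above is given below: a quadratic response surface is fitted to a small design of experiments, Monte-Carlo sampling of noisy inputs is run on that surrogate, and a robust objective of the form mean + k*SD is minimized on a coarse grid. The "FE simulation" function, design points, scatter levels, and variable names are all invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive FE stamping simulation (assumed, for illustration only):
# a quality measure as a function of blank-holder force x1 and friction coefficient x2.
def fe_simulation(x1, x2):
    return (x1 - 2.0) ** 2 + 5.0 * (x2 - 0.1) ** 2 + 0.5 * x1 * x2

# 1) Build a quadratic response-surface meta-model from a small design of experiments.
X = np.array([[a, b] for a in np.linspace(1.0, 3.0, 5) for b in np.linspace(0.05, 0.15, 5)])
y = np.array([fe_simulation(a, b) for a, b in X])

def features(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(features(X[:, 0], X[:, 1]), y, rcond=None)

def surrogate(x1, x2):
    return features(np.atleast_1d(x1), np.atleast_1d(x2)) @ beta

# 2) Monte-Carlo on the surrogate: propagate assumed input scatter to the quality measure.
def robust_objective(x1_nom, x2_nom, k=3.0, n=5000):
    x1 = rng.normal(x1_nom, 0.05, n)      # assumed process scatter
    x2 = rng.normal(x2_nom, 0.005, n)     # assumed material scatter
    q = surrogate(x1, x2)
    return q.mean() + k * q.std()         # mean + k*SD robust criterion

# 3) Coarse grid search for the robust optimum of the controllable parameters.
grid = [(a, b) for a in np.linspace(1.0, 3.0, 21) for b in np.linspace(0.05, 0.15, 21)]
best = min(grid, key=lambda p: robust_objective(*p))
print("robust optimum (x1, x2):", best)
```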

  3. Development and Application of a Tool for Optimizing Composite Matrix Viscoplastic Material Parameters

    Science.gov (United States)

    Murthy, Pappu L. N.; Naghipour Ghezeljeh, Paria; Bednarcyk, Brett A.

    2018-01-01

    This document describes a recently developed analysis tool that enhances the resident capabilities of the Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) and its application. MAC/GMC is a composite material and laminate analysis software package developed at NASA Glenn Research Center. The primary focus of the current effort is to provide a graphical user interface (GUI) capability that helps users optimize highly nonlinear viscoplastic constitutive law parameters by fitting experimentally observed/measured stress-strain responses under various thermo-mechanical conditions for braided composites. The tool has been developed utilizing the MATrix LABoratory (MATLAB) (The Mathworks, Inc., Natick, MA) programming language. Illustrative examples shown are for a specific braided composite system wherein the matrix viscoplastic behavior is represented by a constitutive law described by seven parameters. The tool is general enough to fit any number of experimentally observed stress-strain responses of the material. The number of parameters to be optimized, as well as the importance given to each stress-strain response, are user choices. Three different optimization algorithms are included: (1) optimization based on a gradient method, (2) genetic algorithm (GA) based optimization and (3) particle swarm optimization (PSO). The user can mix and match the three algorithms; for example, one can start optimization with either approach (2) or (3) and then use the optimized solution to further fine-tune with approach (1). The secondary focus of this paper is to demonstrate the application of this tool to optimize/calibrate parameters for a nonlinear viscoplastic matrix to predict stress-strain curves (at the constituent and composite levels) at different rates, temperatures and/or loading conditions utilizing the Generalized Method of Cells. After preliminary validation of the tool through comparison with experimental results, a detailed virtual parametric study is…
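
    The mix-and-match strategy described above (a global, population-based search followed by gradient-based fine-tuning of a material-model fit) can be sketched in a few lines. This is not MAC/GMC and not the seven-parameter viscoplastic law: a two-parameter power-law hardening curve, synthetic "experimental" data, random-search stage, and L-BFGS-B refinement are all stand-ins chosen for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical "experimental" stress-strain curve (not MAC/GMC data): a two-parameter
# power-law hardening model stands in for the seven-parameter viscoplastic law.
def model(strain, params):
    k, n = params
    return k * strain ** n

strain = np.linspace(0.001, 0.05, 40)
stress_exp = model(strain, (800.0, 0.25)) + rng.normal(0.0, 5.0, strain.size)

# Weighted least-squares error; the weights mimic the user-assigned importance
# of each measured response mentioned in the abstract.
weight = np.ones_like(strain)

def error(params):
    return np.sum(weight * (model(strain, params) - stress_exp) ** 2)

# Stage 1: coarse global search (a stand-in for the GA/PSO options).
candidates = np.column_stack([rng.uniform(100, 2000, 500), rng.uniform(0.05, 0.6, 500)])
best = min(candidates, key=error)

# Stage 2: gradient-based refinement starting from the global-search result.
res = minimize(error, best, method="L-BFGS-B", bounds=[(100, 2000), (0.05, 0.6)])
print("calibrated (k, n):", res.x)
```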

  4. Chemical sensors are hybrid-input memristors

    Science.gov (United States)

    Sysoev, V. I.; Arkhipov, V. E.; Okotrub, A. V.; Pershin, Y. V.

    2018-04-01

    Memristors are two-terminal electronic devices whose resistance depends on the history of the input signal (voltage or current). Here we demonstrate that chemical gas sensors can be considered as memristors with a generalized (hybrid) input, namely, an input consisting of the voltage, analyte concentrations and applied temperature. The concept of hybrid-input memristors is demonstrated experimentally using a single-walled carbon nanotube chemical sensor. It is shown that, with respect to the hybrid input, the sensor exhibits some features common to memristors, such as hysteretic input-output characteristics. This different perspective on chemical gas sensors may open new possibilities for smart sensor applications.

  5. Optimization of an RF driven H- ion source

    International Nuclear Information System (INIS)

    Leung, K.N.; DiVergilio, W.F.; Hauck, C.A.; Kunkel, W.B.; McDonald, D.S.

    1991-04-01

    A radio-frequency driven multicusp source has recently been developed to generate volume-produced H- ion beams with extracted current density higher than 200 mA/cm². We have improved the output power of the rf generator and the insulation coating of the antenna coil. We have also optimized the antenna positions and geometry and the filter magnetic field for high power pulsed operation. A total H- current of 30 mA can be obtained with a 5.4-mm-diam extraction aperture and with an rf input power of 50 kW. 4 refs., 5 figs

  6. Development of Inventory Optimization System for Operation Nuclear Plants

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Se-Jin; Park, Jong-Hyuk; Yoo, Sung-Soo; Lee, Sang-Guk [Korea Electric Power Research Institutes, Taejon (Korea, Republic of)

    2006-07-01

    Inventory control of spare parts plays an increasingly important role in operations management, which is why inventory management systems such as manufacturing resources planning (MRP) and enterprise resource planning (ERP) have been adopted. However, most of these contributions share a similar theoretical background: their concepts and techniques are mainly based on mathematical assumptions and on modeling the inventory situation of spare parts. Nuclear utilities in Korea have had difficulty maintaining the optimum level of spare parts even though they use an MRP system, because most items have long lead times and are imported from the United States, Canada, France and other countries. We developed an inventory optimization system for operating nuclear plants to resolve these problems. In this paper, we report the data flow process, the data loading, and the inventory calculation process. The main contribution of this paper is the development of an inventory optimization system which can be used in domestic power plants.

  7. Development of Inventory Optimization System for Operation Nuclear Plants

    International Nuclear Information System (INIS)

    Jang, Se-Jin; Park, Jong-Hyuk; Yoo, Sung-Soo; Lee, Sang-Guk

    2006-01-01

    Inventory control of spare parts plays an increasingly important role in operations management, which is why inventory management systems such as manufacturing resources planning (MRP) and enterprise resource planning (ERP) have been adopted. However, most of these contributions share a similar theoretical background: their concepts and techniques are mainly based on mathematical assumptions and on modeling the inventory situation of spare parts. Nuclear utilities in Korea have had difficulty maintaining the optimum level of spare parts even though they use an MRP system, because most items have long lead times and are imported from the United States, Canada, France and other countries. We developed an inventory optimization system for operating nuclear plants to resolve these problems. In this paper, we report the data flow process, the data loading, and the inventory calculation process. The main contribution of this paper is the development of an inventory optimization system which can be used in domestic power plants

  8. Optimization and Simulation in Drug Development - Review and Analysis

    DEFF Research Database (Denmark)

    Schjødt-Eriksen, Jens; Clausen, Jens

    2003-01-01

    We give a review of pharmaceutical R&D and mathematical simulation and optimization methods used to support decision making within the pharmaceutical development process. The complex nature of drug development is pointed out through a description of the various phases of the pharmaceutical development process. A part of the paper is dedicated to the use of simulation techniques to support clinical trials. The paper ends with a section describing portfolio modelling methods in the context of the pharmaceutical industry.

  9. Topology and boundary shape optimization as an integrated design tool

    Science.gov (United States)

    Bendsoe, Martin Philip; Rodrigues, Helder Carrico

    1990-01-01

    The optimal topology of a two-dimensional linear elastic body can be computed by regarding the body as a domain of the plane with a high density of material. Such an optimal topology can then be used as the basis for a shape optimization method that computes the optimal form of the boundary curves of the body. This results in an efficient and reliable design tool, which can be implemented via a common FEM mesh generator and CAD-type input-output facilities.

  10. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573])

  11. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  12. K Basins environmental impact statement technical input document

    International Nuclear Information System (INIS)

    Bergsman, K.H.; Bergmann, D.W.; Costley, G.E.; Jansky, M.T.; McCormack, R.L.; Monthey, M.J.; Praga, A.N.; Ullah, J.K.; Willis, W.L.

    1995-10-01

    This document describes the technical input necessary to develop and evaluate the alternatives within the Environmental Impact Statement for the Management of Spent Nuclear Fuel From the K Basins at the Hanford Site

  13. Optimization of Simulated Inventory Systems : OptQuest and Alternatives

    OpenAIRE

    Kleijnen, J.P.C.; Wan, J.

    2006-01-01

    This article illustrates simulation optimization through an (s, S) inventory management system. In this system, the goal function to be minimized is the expected value of specific inventory costs. Moreover, specific constraints must be satisfied for some random simulation responses, namely the service or fill rate, and for some deterministic simulation inputs, namely the constraint s < S. Several optimization methods are considered, including the popular OptQuest method. The optimal…
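
    A toy version of this kind of simulation optimization is sketched below: a simple periodic-review (s, S) inventory simulation is evaluated on a grid of (s, S) pairs, the average cost is minimized, and a fill-rate constraint on the random response is checked. The demand distribution, cost figures, zero-lead-time replenishment, and the grid are all assumptions made for illustration and are not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_sS(s, S, periods=2000, hold=1.0, order_cost=30.0, demand_mean=20):
    """Periodic-review (s, S) policy: order up to S whenever inventory falls below s."""
    inv, cost, met, total = S, 0.0, 0, 0
    for _ in range(periods):
        demand = rng.poisson(demand_mean)
        sold = min(inv, demand)
        met += sold
        total += demand
        inv -= sold                      # unmet demand is lost in this toy model
        cost += hold * inv               # holding cost on end-of-period inventory
        if inv < s:                      # instantaneous replenishment up to S
            cost += order_cost
            inv = S
    return cost / periods, met / total   # (average cost per period, fill rate)

# Grid search over candidate policies, keeping only those meeting a 95 % fill rate.
best = None
for s in range(5, 60, 5):
    for S in range(s + 10, 120, 10):
        cost, fill = simulate_sS(s, S)
        if fill >= 0.95 and (best is None or cost < best[0]):
            best = (cost, s, S)

print("best feasible (average cost, s, S):", best)
```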

  14. Development of High Heat Input Welding Offshore Steel as Normalized Condition

    Science.gov (United States)

    Deng, Wei; Qin, Xiaomei

    The heavy plate used for offshore structures is an important strategic product. In recent years there has been an increasing demand for heavy shipbuilding steel plate with excellent weldability under high heat input welding. During the welding thermal cycle the microstructure of the heat affected zone (HAZ) of the plate is damaged, which markedly reduces the toughness of the HAZ, so improving HAZ toughness has been a key subject in steel research. Oxide metallurgy is considered an effective way to improve HAZ toughness, because fine particles that remain stable at high temperature can be used to retard grain growth. High strength steel plate satisfying the low temperature specification has been applied to offshore structures. Excellent properties of the plates and welded joints were obtained by oxide metallurgy technology and the latest controlled rolling and accelerated cooling technology using Ultra-Fast Cooling (an on-line accelerated cooling system). The 355 MPa-grade high strength steel plates were obtained in the normalized condition; the steels have excellent weldability at heat input energies of 79-287 kJ/cm, and the nil ductility transition (NDT) temperature was -70°C, which can satisfy the construction of offshore structures in cold regions.

  15. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    Science.gov (United States)

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Methodology for wind turbine blade geometry optimization

    Energy Technology Data Exchange (ETDEWEB)

    Perfiliev, D.

    2013-11-01

    Nowadays, the upwind three bladed horizontal axis wind turbine is the leading player on the market. It has been found to be the best industrial compromise in the range of different turbine constructions. The current wind industry innovation is conducted in the development of individual turbine components. The blade constitutes 20-25% of the overall turbine budget. Its optimal operation in particular local economic and wind conditions is worth investigating. The blade geometry, namely the chord, twist and airfoil type distributions along the span, responds to the output measures of the blade performance. Therefore, the optimal wind blade geometry can improve the overall turbine performance. The objectives of the dissertation are focused on the development of a methodology and specific tool for the investigation of possible existing wind blade geometry adjustments. The novelty of the methodology presented in the thesis is the multiobjective perspective on wind blade geometry optimization, particularly taking simultaneously into account the local wind conditions and the issue of aerodynamic noise emissions. The presented optimization objective approach has not been investigated previously for the implementation in wind blade design. The possibilities to use different theories for the analysis and search procedures are investigated and sufficient arguments derived for the usage of proposed theories. The tool is used for the test optimization of a particular wind turbine blade. The sensitivity analysis shows the dependence of the outputs on the provided inputs, as well as its relative and absolute divergences and instabilities. The pros and cons of the proposed technique are seen from the practical implementation, which is documented in the results, analysis and conclusion sections. (orig.)

  17. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  18. Ring rolling process simulation for geometry optimization

    Science.gov (United States)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring rolling is a complex hot forming process in which different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on genetic algorithms has been applied. In the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
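
    The RSM-plus-genetic-algorithm loop described above can be illustrated with the small stand-alone sketch below. It does not use DEFORM or ISight: the "FE response", the quadratic surface, the nominal diameter, and the GA settings are placeholders chosen only to show the structure of the procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder for the FE ring-rolling response: final ring outer diameter (mm)
# as a function of mandrel feed rate v (mm/s) and main-roll angular speed w (rad/s).
def fe_response(v, w):
    return 480.0 + 35.0 * v - 12.0 * w + 4.0 * v * w - 6.0 * v**2 + 0.8 * w**2

# 1) Response Surface Methodology: quadratic fit on a small design of experiments.
V, W = np.meshgrid(np.linspace(0.5, 2.5, 5), np.linspace(2.0, 6.0, 5))
v, w = V.ravel(), W.ravel()
A = np.column_stack([np.ones_like(v), v, w, v * w, v**2, w**2])
coef, *_ = np.linalg.lstsq(A, fe_response(v, w), rcond=None)

def surface(v, w):
    return np.column_stack([np.ones_like(v), v, w, v * w, v**2, w**2]) @ coef

# 2) Genetic algorithm minimizing the percentage error w.r.t. the nominal diameter.
NOMINAL = 500.0

def fitness(pop):                      # pop: (n, 2) array of [v, w] candidates
    d = surface(pop[:, 0], pop[:, 1])
    return np.abs(d - NOMINAL) / NOMINAL * 100.0

pop = np.column_stack([rng.uniform(0.5, 2.5, 40), rng.uniform(2.0, 6.0, 40)])
for _ in range(60):
    err = fitness(pop)
    parents = pop[np.argsort(err)[:20]]                                               # selection
    kids = (parents[rng.integers(0, 20, 40)] + parents[rng.integers(0, 20, 40)]) / 2  # crossover
    kids += rng.normal(0.0, [0.05, 0.1], kids.shape)                                  # mutation
    pop = np.clip(kids, [0.5, 2.0], [2.5, 6.0])

best = pop[np.argmin(fitness(pop))]
print("best (feed rate, roll speed):", best, "error % =", fitness(best[None, :])[0])
```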

  19. Investigating gaze-controlled input in a cognitive selection test

    OpenAIRE

    Gayraud, Katja; Hasse, Catrin; Eißfeldt, Hinnerk; Pannasch, Sebastian

    2017-01-01

    In the field of aviation, there is a growing interest in developing more natural forms of interaction between operators and systems to enhance safety and efficiency. These efforts also include eye gaze as an input channel for human-machine interaction. The present study investigates the application of gaze-controlled input in a cognitive selection test called Eye Movement Conflict Detection Test. The test enables eye movements to be studied as an indicator for psychological test performance a...

  20. Some New Locally Optimal Control Laws for Sailcraft Dynamics in Heliocentric Orbits

    Directory of Open Access Journals (Sweden)

    F. A. Abd El-Salam

    2013-01-01

    The concept of solar sailing and the spacecraft being developed for it are presented. The gravitational and solar radiation forces are considered. The effect of the source of radiation pressure, and of the force due to coronal mass ejections and the solar wind, on the sailcraft configurations is modeled. Analytical control laws, subject to the stated input constraints, for optimizing sailcraft dynamics in heliocentric orbit are obtained using Lagrange's planetary equations. The force component along a required direction is maximized by deriving the optimal sail cone angle. Ignoring the absorbed and diffusely reflected parts of the radiation, some special cases are obtained. New control laws that maximize thrust so as to achieve the required maximization of a particular orbital element are obtained.

  1. Optimization of surface roughness parameters in dry turning

    OpenAIRE

    R.A. Mahdavinejad; H. Sharifi Bidgoli

    2009-01-01

    Purpose: The precision of machine tools on the one hand and the input setup parameters on the other strongly influence the main output machining parameters such as stock removal, tool wear ratio and surface roughness. Design/methodology/approach: There are many input parameters that affect the variations of these output parameters. In CNC machines, optimization of the machining process in order to predict surface roughness is very important. Findings: From this point of view...

  2. Passive Optimization Design Based on Particle Swarm Optimization in Rural Buildings of the Hot Summer and Warm Winter Zone of China

    Directory of Open Access Journals (Sweden)

    Shilei Lu

    2017-12-01

    The development of green building is an important way to solve the environmental problems of China's construction industry. Energy conservation and energy utilization are important for the green building evaluation criteria (GBEC). The objective of this study is to evaluate the quantitative relationship between the building shape parameter, envelope parameters, shading system, courtyard and the energy consumption (EC), as well as the impact on indoor thermal comfort, of rural residential buildings in the hot summer and warm winter zone (HWWZ). Taking Quanzhou (Fujian Province of China) as an example, and based on a field investigation, EnergyPlus is used to build the building performance model. In addition, the classical particle swarm optimization algorithm in the GenOpt software is used to optimize the various factors affecting the EC. Single-objective optimization has provided guidance for the multi-dimensional optimization, and regression analysis is used to find the effects of a single input variable on an output variable. Results show that the energy saving rate of an optimized rural residence is about 26–30% compared with the existing rural residence. Moreover, the payback period is about 20 years. A simple case study is used to demonstrate the accuracy of the proposed optimization analysis. The optimization can be used to guide the design of new rural construction in the area and the energy-saving retrofit of existing rural houses, which can help to achieve the purpose of energy saving and comfort.
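
    A bare-bones particle swarm optimizer in the spirit of the GenOpt setup described above is sketched below. The "energy consumption" function of two envelope parameters is a made-up stand-in for an EnergyPlus run, and the parameter bounds and PSO coefficients are textbook defaults rather than values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for an EnergyPlus run: annual energy consumption (kWh/m2) as a function of
# external-wall insulation thickness (m) and window solar heat gain coefficient (-).
def energy_consumption(x):
    t, shgc = x[..., 0], x[..., 1]
    return 60.0 + 400.0 * (t - 0.06) ** 2 + 35.0 * (shgc - 0.35) ** 2 + 8.0 * t * shgc

lo, hi = np.array([0.0, 0.2]), np.array([0.12, 0.8])   # assumed design-variable bounds
n, w, c1, c2 = 30, 0.7, 1.5, 1.5                       # swarm size and classic PSO coefficients

x = rng.uniform(lo, hi, (n, 2))                        # particle positions
v = np.zeros_like(x)                                   # particle velocities
pbest, pbest_val = x.copy(), energy_consumption(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    val = energy_consumption(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("optimal insulation thickness, SHGC:", gbest)
print("predicted consumption:", energy_consumption(gbest))
```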

  3. Development of a standard methodology for optimizing remote visual display for nuclear-maintenance tasks

    International Nuclear Information System (INIS)

    Clarke, M.M.; Garin, J.; Preston-Anderson, A.

    1981-01-01

    The aim of the present study is to develop a methodology for optimizing remote viewing systems for a fuel recycle facility (HEF) being designed at Oak Ridge National Laboratory (ORNL). An important feature of this design involves the Remotex concept: advanced servo-controlled master/slave manipulators, with remote television viewing, will totally replace direct human contact with the radioactive environment. Therefore, the design of optimal viewing conditions is a critical component of the overall man/machine system. A methodology has been developed for optimizing remote visual displays for nuclear maintenance tasks. The usefulness of this approach has been demonstrated by preliminary specification of optimal closed circuit TV systems for such tasks

  4. DEVELOPING A SMARTPHONE APPLICATION TO IMPROVE CARE AND OUTCOMES IN ADOLESCENT ARTHRITIS THROUGH PATIENT INPUT

    Directory of Open Access Journals (Sweden)

    Alice Ran Cai

    2015-09-01

    …identify five major themes that informed the development of the app. The first was Monitoring Information, which included JIA-related symptoms, mood and stress, exercise, missed medications, medication side-effects, and completing the health assessment questionnaire. The second was Setting Reminders for medications and appointments. The third theme was Education and Support, such as practical advice and links to social support groups. The fourth theme concerned Motivating Factors for Using the App, such as providing feedback on personal input and having a rewards system. The last theme related to the Design of the App, such as how to make it visually appealing and easy to navigate. Qualitative feedback from CYP and HCPs during phase 2 indicated that the app is acceptable, comprehensive, interesting, and useful. Conclusions: The current study employed a qualitative user-centered approach to develop an acceptable and developmentally appropriate smartphone application that can benefit YP with JIA. The qualitative data showed that complementing traditional therapies with new mobile technology may be an affordable and effective way to improve patients' understanding of their condition and help YP become more independent in their own healthcare. Collecting more frequent and accurate data using the app may also improve treatments and interactions between patients and HCPs, which optimizes health and wellbeing.

  5. A tool for efficient, model-independent management optimization under uncertainty

    Science.gov (United States)

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
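
    The chance-constraint idea can be illustrated with a small linear program: a FOSM-style standard deviation on a model-simulated constraint is converted into a deterministic safety margin at a user-chosen risk level, and the tightened LP is solved. The pumping-rate framing, the sensitivities and numbers, and the use of scipy's linprog are illustrative assumptions, not the PESTPP-OPT implementation.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

# Decision variables: pumping rates q1, q2 (m3/d); objective: maximize total pumping.
c = np.array([-1.0, -1.0])                 # linprog minimizes, so negate

# Model-derived constraint: simulated streamflow depletion a @ q must stay below b.
a = np.array([0.35, 0.20])                 # depletion per unit pumping (assumed sensitivities)
b = 400.0                                  # allowed depletion (m3/d)

# FOSM-style uncertainty on the simulated constraint and a user-chosen risk level.
sigma = 40.0                               # standard deviation of simulated depletion (assumed)
risk = 0.95                                # required probability that the constraint holds
margin = norm.ppf(risk) * sigma            # chance-constraint tightening

res = linprog(c, A_ub=[a], b_ub=[b - margin], bounds=[(0, 1500), (0, 1500)])
print("optimal pumping rates:", res.x)
print("deterministic-equivalent depletion limit:", b - margin)
```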

  6. Optimization of a single stage inverter with one cycle control for photovoltaic power generation

    Energy Technology Data Exchange (ETDEWEB)

    Egiziano, L.; Femia, N.; Granozio, D.; Petrone, G.; Spagnuolo, G. [Salermo Univ., Salermo (Italy); Vitelli, M. [Seconda Univ. di Napoli, Napoli (Italy)

    2006-07-01

    An optimized one-cycle control (OCC) for maximum power point tracking and power factor correction in grid-connected photovoltaic (PV) applications was described. OCC is a nonlinear control technique that rejects line perturbations and allows both output power factor correction and tracking of the input PV field. An OCC system was analyzed in order to select optimal design parameters. Parameters were refined through the selection of suitable design constraints, and a stochastic search was then performed. Criteria were then developed to distinguish appropriate design parameters for the optimized OCC. The optimization was based on advanced heuristic techniques for non-linear constrained optimization. Performance indices were calculated for each feasible set of parameters. A customized perturb and observe control was then applied to the single-stage inverter. Results of the optimization process were validated by a series of time-domain simulations conducted under heavily varying irradiance conditions. The simulations showed that the optimized controllers delivered improved performance in terms of power drawn from the PV field. 7 refs., 1 tab., 5 figs.

  7. Enhanced Input in LCTL Pedagogy

    Directory of Open Access Journals (Sweden)

    Marilyn S. Manley

    2009-08-01

    Language materials for the more-commonly-taught languages (MCTLs) often include visual input enhancement (Sharwood Smith 1991, 1993), which makes use of typographical cues like bolding and underlining to enhance the saliency of targeted forms. For a variety of reasons, this paper argues that the use of enhanced input, both visual and oral, is especially important as a tool for the less-commonly-taught languages (LCTLs). As there continues to be a scarcity of teaching resources for the LCTLs, individual teachers must take it upon themselves to incorporate enhanced input into their own self-made materials. Specific examples of how to incorporate both visual and oral enhanced input into language teaching are drawn from the author's own experiences teaching Cuzco Quechua. Additionally, survey results are presented from the author's Fall 2010 semester Cuzco Quechua language students, supporting the use of both visual and oral enhanced input.

  8. Enhanced Input in LCTL Pedagogy

    Directory of Open Access Journals (Sweden)

    Marilyn S. Manley

    2010-08-01

    Language materials for the more-commonly-taught languages (MCTLs) often include visual input enhancement (Sharwood Smith 1991, 1993), which makes use of typographical cues like bolding and underlining to enhance the saliency of targeted forms. For a variety of reasons, this paper argues that the use of enhanced input, both visual and oral, is especially important as a tool for the less-commonly-taught languages (LCTLs). As there continues to be a scarcity of teaching resources for the LCTLs, individual teachers must take it upon themselves to incorporate enhanced input into their own self-made materials. Specific examples of how to incorporate both visual and oral enhanced input into language teaching are drawn from the author's own experiences teaching Cuzco Quechua. Additionally, survey results are presented from the author's Fall 2010 semester Cuzco Quechua language students, supporting the use of both visual and oral enhanced input.

  9. Continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics

    International Nuclear Information System (INIS)

    Chen, Haixia; Zhang, Jing

    2007-01-01

    We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning

  10. Aspects of input processing in the numerical control of electron beam machines

    International Nuclear Information System (INIS)

    Chowdhury, A.K.

    1981-01-01

    A high-performance Numerical Control has been developed for an Electron Beam Machine. The system is structured into three hierarchical levels: Input Processing, Realtime Processing (such as Geometry Interpolation) and the Interfaces to the Electron Beam Machine. The author considers the Input Processing. In conventional Numerical Controls the interface to the control is given by the control language as defined in DIN 66025. The state of the art in NC technology offers programming systems of differing competence, covering the spectrum from manual programming in the control language to highly sophisticated systems such as APT. This software interface has been used to define an Input Processor that, in cooperation with the host computer, meets the requirements of a sophisticated NC system while at the same time providing a modest stand-alone system with all the basic functions, such as interactive program editing, program storage, and program execution simultaneous with the development of another program. Software aspects such as adapting DIN 66025 for Electron Beam Machining and the organisation and modularisation of the Input Processor software have been considered and solutions proposed. Hardware aspects considered are the interconnections of the Input Processor with the Host and the Realtime Processors. For reasons of economy and development time, available software and hardware have been used liberally and in-house development has been kept to a minimum. The proposed system is modular in software and hardware and is therefore very flexible and open-ended for future expansion. (Auth.)

  11. DEVELOPMENT OF THE ALGORITHM FOR CHOOSING THE OPTIMAL SCENARIO FOR THE DEVELOPMENT OF THE REGION'S ECONOMY

    Directory of Open Access Journals (Sweden)

    I. S. Borisova

    2018-01-01

    Purpose: the article deals with the development of an algorithm for choosing the optimal scenario for the development of a region's economy. Since the "Strategy for socio-economic development of the Lipetsk region for the period until 2020" does not contain development scenarios for the region, an algorithm for choosing the optimal scenario is formalized. Scenarios for the development of the Lipetsk region's economy are calculated against the indicators of the Program of social and economic development: "Quality of life index", "Average monthly nominal wage", "Level of registered unemployment", "Growth rate of gross regional product", "Share of innovative products in the total volume of goods shipped, works performed and services rendered by industrial organizations", "Total volume of atmospheric pollution per unit of GRP" and "Satisfaction of the population with the activity of executive bodies of state power of the region". Based on these calculations, the dynamics of the indicator values under each scenario are projected for 2016–2020, and the discounted financial costs to economic participants of realizing each scenario are estimated. It is shown that the current situation in the economy of the Russian Federation presupposes a paradigm of innovative territorial development and requires all participants in economic relations at the regional level to concentrate their resources on the creation of new science-intensive products. The effects of implementing the proposed scenarios are assessed, and the "base" scenario, which assumes a consistent change in the main indicators, is shown to be the most acceptable. The specific economic…

  12. The Impact of Input Quality on Early Sign Development in Native and Non-Native Language Learners

    Science.gov (United States)

    Lu, Jenny; Jones, Anna; Morgan, Gary

    2016-01-01

    There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the…

  13. Topology optimization of radio frequency and microwave structures

    DEFF Research Database (Denmark)

    Aage, Niels

    in this thesis, concerns the optimization of devices for wireless energy transfer via strongly coupled magnetic resonators. A single design problem is considered to demonstrate proof of concept. The resulting design illustrates the possibilities of the optimization method, but also reveals its numerical...... of efficient antennas and power supplies. A topology optimization methodology is proposed based on a design parameterization which incorporates the skin effect. The numerical optimization procedure is implemented in Matlab, for 2D problems, and in a parallel C++ optimization framework, for 3D design problems...... formalism, a two step optimization procedure is presented. This scheme is applied to the design and optimization of a hemispherical sub-wavelength antenna. The optimized antenna configuration displayed a ratio of radiated power to input power in excess of 99 %. The third, and last, design problem considered...

  14. On Babies and Bathwater: Input in Foreign Language Learning.

    Science.gov (United States)

    VanPatten, Bill

    1987-01-01

    A discussion of Krashen's monitor theory and its applications to foreign language teaching includes consideration of the very important role input plays in language development and examination of the relationship between the development of grammatical competence and traditional instruction in grammar. (CB)

  15. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  16. Input-profile-based software failure probability quantification for safety signal generation systems

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Lim, Ho Gon; Lee, Ho Jung; Kim, Man Cheol; Jang, Seung Cheol

    2009-01-01

    Approaches for software failure probability estimation are mainly based on the results of testing. Test cases represent the inputs that are encountered in actual use. The test inputs for a safety-critical application such as the reactor protection system (RPS) of a nuclear power plant are the inputs that cause the activation of a protective action such as a reactor trip. A digital system treats inputs from instrumentation sensors as discrete digital values by using an analog-to-digital converter. The input profile must be determined in consideration of these characteristics for effective software failure probability quantification. Another important characteristic of software testing is that we do not have to repeat the test for the same input value, since the software response is deterministic for each specific digital input. With these considerations, we propose an effective software testing method for quantifying the failure probability. As an example application, the input profile of the digital RPS is developed based on typical plant data. The proposed method is expected to provide a simple but realistic means to quantify the software failure probability based on the input profile and system dynamics.
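
    The quantification idea (weight the per-input-class test evidence by how often each class occurs in the operational input profile, with no need to retest identical digital inputs) can be sketched as below. The input classes, profile probabilities, and test outcomes are invented for illustration and are not taken from the paper; the zero-failure bound used is the standard "rule of three".

```python
# Minimal sketch: input-profile-weighted software failure probability.
# Classes, probabilities and test outcomes below are illustrative assumptions only.

# Operational input profile: probability that a demand falls in each input class.
profile = {"low_range": 0.70, "mid_range": 0.25, "near_setpoint": 0.05}

# Distinct digital input values tested per class and failures observed.
# Identical digital inputs need not be retested because the response is deterministic.
tests = {"low_range": (4000, 0), "mid_range": (1500, 0), "near_setpoint": (800, 1)}

def class_failure_estimate(n, failures):
    """Per-class failure probability; for zero failures use the ~95 % upper bound 3/n."""
    return failures / n if failures else 3.0 / n

p_fail = sum(p * class_failure_estimate(*tests[c]) for c, p in profile.items())
print(f"profile-weighted failure probability per demand: {p_fail:.2e}")
```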

  17. Control Board Digital Interface Input Devices – Touchscreen, Trackpad, or Mouse?

    Energy Technology Data Exchange (ETDEWEB)

    Thomas A. Ulrich; Ronald L. Boring; Roger Lew

    2015-08-01

    The authors collaborated with a power utility to evaluate input devices for use in the human system interface (HSI) for a new digital Turbine Control System (TCS) at a nuclear power plant (NPP) undergoing a TCS upgrade. A standalone dynamic software simulation of the new digital TCS and a mobile kiosk were developed to conduct an input device study to evaluate operator preference and input device effectiveness. The TCS software presented the anticipated HSI for the TCS and mimicked (i.e., simulated) the turbine systems’ responses to operator commands. Twenty-four licensed operators from the two nuclear power units participated in the study. Three input devices were tested: a trackpad, mouse, and touchscreen. The subjective feedback from the survey indicates the operators preferred the touchscreen interface. The operators subjectively rated the touchscreen as the fastest and most comfortable input device given the range of tasks they performed during the study, but also noted a lack of accuracy for selecting small targets. The empirical data suggest the mouse input device provides the most consistent performance for screen navigation and manipulating on screen controls. The trackpad input device was both empirically and subjectively found to be the least effective and least desired input device.

  18. A Convex Optimization Model and Algorithm for Retinex

    Directory of Open Access Journals (Sweden)

    Qing-Nan Zhao

    2017-01-01

    Retinex is a theory on simulating and explaining how the human visual system perceives colors under different illumination conditions. The main contribution of this paper is to put forward a new convex optimization model for Retinex. Different from existing methods, the main idea is to rewrite the multiplicative form such that the illumination variable and the reflection variable are decoupled in the spatial domain. The resulting objective function involves three terms including the Tikhonov regularization of the illumination component, the total variation regularization of the reciprocal of the reflection component, and the data-fitting term among the input image, the illumination component, and the reciprocal of the reflection component. We develop an alternating direction method of multipliers (ADMM) to solve the convex optimization model. Numerical experiments demonstrate the advantages of the proposed model which can decompose an image into the illumination and the reflection components.
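
    A much-simplified sketch of the decoupled model is given below: the observation S, illumination L and the reciprocal of the reflection r are linked by a quadratic data term on L - S*r, with smoothness penalties on L and r. For brevity the total variation term is replaced by a quadratic (Tikhonov-like) smoothness term and plain alternating gradient steps stand in for ADMM, so this is only in the spirit of the paper's model; the synthetic image, weights, and step size are assumptions.

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian with replicated borders."""
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u

# Synthetic test image: smooth illumination times a piecewise reflectance, values in (0, 1].
y, x = np.mgrid[0:64, 0:64]
L_true = 0.4 + 0.5 * np.exp(-((x - 20) ** 2 + (y - 30) ** 2) / 800.0)
R_true = np.where((x // 16 + y // 16) % 2 == 0, 0.9, 0.3)
S = L_true * R_true

# Decoupled variables: illumination L and r = 1/R, so that ideally L = S * r.
L = S.copy()
r = np.ones_like(S)
alpha, beta, step = 2.0, 2.0, 0.1          # assumed smoothness weights and step size

for _ in range(500):
    resid = L - S * r
    L -= step * (resid - alpha * laplacian(L))       # gradient step in L (Tikhonov smoothness)
    r -= step * (-S * resid - beta * laplacian(r))   # gradient step in r (quadratic stand-in for TV)
    r = np.clip(r, 1.0, None)                        # reflectance R = 1/r must satisfy R <= 1
    L = np.clip(L, S, 1.0)                           # illumination at least as bright as the observation

R = 1.0 / r
print("reflectance estimate range:", R.min(), R.max())
```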

  19. Developments in model-based optimization and control distributed control and industrial applications

    CERN Document Server

    Grancharova, Alexandra; Pereira, Fernando

    2015-01-01

    This book deals with optimization methods as tools for decision making and control in the presence of model uncertainty. It is oriented to the use of these tools in engineering, specifically in automatic control design with all its components: analysis of dynamical systems, identification problems, and feedback control design. Developments in Model-Based Optimization and Control takes advantage of optimization-based formulations for such classical feedback design objectives as stability, performance and feasibility, afforded by the established body of results and methodologies constituting optimal control theory. It makes particular use of the popular formulation known as predictive control or receding-horizon optimization. The individual contributions in this volume are wide-ranging in subject matter but coordinated within a five-part structure covering material on: · complexity and structure in model predictive control (MPC); · collaborative MPC; · distributed MPC; · optimization-based analysis and desi...

  20. A new optimal seam method for seamless image stitching

    Science.gov (United States)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method which aims to stitch images with overlapping areas more seamlessly has been proposed. Because the traditional gradient-domain optimal seam method results in poor color difference measurement and the fusion algorithm takes a long time, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are utilized individually. The proposed method exhibits improved performance in eliminating the stitching seam compared with the traditional gradient optimal seam, and higher efficiency than the multi-band blending algorithm.
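
    The paper's HSV energy function and pixel-correction steps are not specified in the abstract; the sketch below shows only the generic dynamic-programming search for a minimum-energy stitching seam through an overlap region, assuming the energy map has already been computed.

      import numpy as np

      def optimal_seam(energy):
          """Dynamic-programming search for the top-to-bottom seam of minimum
          accumulated energy through the overlap region of two images."""
          h, w = energy.shape
          cost = energy.copy()
          back = np.zeros((h, w), dtype=int)
          for i in range(1, h):
              for j in range(w):
                  j0, j1 = max(j - 1, 0), min(j + 1, w - 1)
                  k = j0 + int(np.argmin(cost[i - 1, j0:j1 + 1]))
                  back[i, j] = k
                  cost[i, j] += cost[i - 1, k]
          seam = [int(np.argmin(cost[-1]))]              # cheapest bottom-row cell
          for i in range(h - 1, 0, -1):
              seam.append(back[i, seam[-1]])
          return seam[::-1]                              # seam column index per row

      overlap_energy = np.random.rand(100, 40)           # stand-in overlap energy map
      path = optimal_seam(overlap_energy)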

  1. Example of material accounting and verification of reprocessing input

    International Nuclear Information System (INIS)

    Koch, L.; Schoof, S.

    1981-01-01

    An example is described in this paper of material accounting at the reprocessing input point. Knowledge of the fuel history and chemical analyses of the spent fuel permitted concepts to be tested which have been developed for the determination of the input by the operator and for its verification by nuclear material safeguards with the intention of detecting a protracted as well as an abrupt diversion. Accuracies obtained for a material balance of a PWR fuel reprocessing campaign are given. 6 refs

  2. Pharmaceutical development and optimization of azithromycin suppository for paediatric use

    Science.gov (United States)

    Kauss, Tina; Gaubert, Alexandra; Boyer, Chantal; Ba, Boubakar B.; Manse, Muriel; Massip, Stephane; Léger, Jean-Michel; Fawaz, Fawaz; Lembege, Martine; Boiron, Jean-Michel; Lafarge, Xavier; Lindegardh, Niklas; White, Nicholas J.; Olliaro, Piero; Millet, Pascal; Gaudin, Karen

    2013-01-01

    Pharmaceutical development and manufacturing process optimization work was undertaken in order to propose a potential paediatric rectal formulation of azithromycin as an alternative to existing oral or injectable formulations. The target product profile was to be easy-to-use, cheap and stable in tropical conditions, with bioavailability comparable to oral forms, rapidly achieving and maintaining bactericidal concentrations. PEG solid solution suppositories were characterized in vitro using visual, HPLC, DSC, FTIR and XRD analyses. In vitro drug release and in vivo bioavailability were assessed; a study in rabbits compared the bioavailability of the optimized solid solution suppository to rectal solution and intra-venous product (as reference) and to the previous, non-optimized formulation (suspended azithromycin suppository). The bioavailability of azithromycin administered as solid solution suppositories relative to intra-venous was 43%, which compared well to the target of 38% (oral product in humans). The results of 3-month preliminary stability and feasibility studies were consistent with industrial production scale-up. This product has potential both as a classical antibiotic and as a product for use in severely ill children in rural areas. Industrial partners for further development are being sought. PMID:23220079

  3. Pharmaceutical development and optimization of azithromycin suppository for paediatric use.

    Science.gov (United States)

    Kauss, Tina; Gaubert, Alexandra; Boyer, Chantal; Ba, Boubakar B; Manse, Muriel; Massip, Stephane; Léger, Jean-Michel; Fawaz, Fawaz; Lembege, Martine; Boiron, Jean-Michel; Lafarge, Xavier; Lindegardh, Niklas; White, Nicholas J; Olliaro, Piero; Millet, Pascal; Gaudin, Karen

    2013-01-30

    Pharmaceutical development and manufacturing process optimization work was undertaken in order to propose a potential paediatric rectal formulation of azithromycin as an alternative to existing oral or injectable formulations. The target product profile was to be easy-to-use, cheap and stable in tropical conditions, with bioavailability comparable to oral forms, rapidly achieving and maintaining bactericidal concentrations. PEG solid solution suppositories were characterized in vitro using visual, HPLC, DSC, FTIR and XRD analyses. In vitro drug release and in vivo bioavailability were assessed; a study in rabbits compared the bioavailability of the optimized solid solution suppository to rectal solution and intra-venous product (as reference) and to the previous, non-optimized formulation (suspended azithromycin suppository). The bioavailability of azithromycin administered as solid solution suppositories relative to intra-venous was 43%, which compared well to the target of 38% (oral product in humans). The results of 3-month preliminary stability and feasibility studies were consistent with industrial production scale-up. This product has potential both as a classical antibiotic and as a product for use in severely ill children in rural areas. Industrial partners for further development are being sought. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Model Optimization Identification Method Based on Closed-loop Operation Data and Process Characteristics Parameters

    Directory of Open Access Journals (Sweden)

    Zhiqiang GENG

    2014-01-01

    Full Text Available Output noise is strongly related to the input in a closed-loop control system, which makes closed-loop model identification difficult, and sometimes impossible, in practice. The forward-channel model is chosen to isolate the disturbance from the output noise to the input, and it is identified by optimizing the dynamic characteristics of the process based on closed-loop operation data. The characteristic parameters of the process, such as dead time and time constant, are calculated and estimated based on the PI/PID controller parameters and the closed-loop process input/output data. These characteristic parameters are then adopted to define the search space of the optimization-based identification algorithm. A PSO-SQP optimization algorithm is applied to integrate the global search ability of PSO with the local search ability of SQP to identify the model parameters of the forward channel. The validity of the proposed method has been verified by simulation. Its practicability is checked with PI/PID controller parameter tuning based on the identified forward-channel model.
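
    As a rough illustration of fitting a forward-channel model to operation data within a bounded search space, the sketch below identifies a first-order-plus-dead-time model with SciPy's differential evolution (a global search followed by a local polish step); this stands in for the paper's PSO-SQP scheme, and the plant, data, and bounds are synthetic assumptions.

      import numpy as np
      from scipy.optimize import differential_evolution

      def simulate_foptd(gain, tau, dead, u, dt):
          """Discrete response of a first-order-plus-dead-time forward channel."""
          a = np.exp(-dt / tau)
          d = int(round(dead / dt))
          y = np.zeros_like(u)
          for k in range(1, len(u)):
              uk = u[k - 1 - d] if k - 1 - d >= 0 else 0.0
              y[k] = a * y[k - 1] + gain * (1.0 - a) * uk
          return y

      # Synthetic "operation data": a step in the controller output plus noise.
      dt, n = 1.0, 200
      rng = np.random.default_rng(0)
      u = np.where(np.arange(n) >= 20, 1.0, 0.0)
      y_meas = simulate_foptd(2.0, 15.0, 5.0, u, dt) + 0.02 * rng.standard_normal(n)

      def sse(p):
          return np.sum((y_meas - simulate_foptd(*p, u, dt)) ** 2)

      # Search space bounded by rough estimates of gain, time constant and dead time.
      bounds = [(0.5, 5.0), (5.0, 40.0), (0.0, 15.0)]
      result = differential_evolution(sse, bounds, polish=True, seed=1)
      print("identified gain, tau, dead time:", result.x)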

  5. Genetic algorithms used to optimize an artificial neural network design used in neutron spectrometry

    International Nuclear Information System (INIS)

    Arteaga A, T.; Ortiz R, J. M.; Vega C, H. R.

    2016-10-01

    Artificial neural networks (ANNs) are widely used; they consist of an input layer, one or more hidden layers and an output layer. These layers contain neurons, and each neuron has connections called weights, in which the network's knowledge is stored and which allow the ANN to solve the problems posed. Such ANNs are used to reconstruct the neutron energy spectrum from count rates and to perform Bonner-sphere neutron dosimetry. ANNs with high performance and generalization ability have been developed. Determining their optimal architecture is usually a difficult task: an exhaustive search of all possible combinations of parameters is rarely possible, and training a neural network from random initial weights has two major drawbacks, it can become stuck in local minima or converge very slowly. In this project genetic algorithms (GAs) are used; they are based on the principle, or analogy, of evolution through natural selection and have been shown to be very effective in optimizing complex search functions over large spaces, or in finding near-optimal overall solutions. The aim is to reduce the architecture in terms of the number of hidden neurons and therefore the total number of connections. The benefit of optimizing the network is that the number of connections, and thus the computational complexity, hardware integration effort, and resource requirements, would be considerably smaller, making the implementation even more viable. To use a GA, three problems must be solved: 1) encoding the problem into chromosomes; 2) constructing a fitness function; 3) properly selecting the genetic operators: crossover, selection, mutation. As a result, the scientific knowledge obtained can be applied to similar problems, providing reference parameters and their impact on the optimization. It is concluded that the input layer and output layer are dictated by the problem; the GA proposes the optimal number of neurons in the hidden layer without losing the quality of the
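
    A minimal sketch of the GA idea described above, searching over the number of hidden neurons, is given below; the fitness function is a stand-in (in practice it would train and validate the spectrometry ANN), and the operators and parameter values are illustrative assumptions.

      import random

      def fitness(n_hidden):
          """Stand-in fitness: in practice this would train the spectrometry ANN with
          n_hidden neurons and return its validation error plus a size penalty."""
          validation_error = (n_hidden - 12) ** 2 / 100.0   # pretend 12 neurons is best
          return validation_error + 0.01 * n_hidden         # penalise large networks

      def genetic_search(pop_size=20, generations=30, n_min=2, n_max=50):
          random.seed(0)
          pop = [random.randint(n_min, n_max) for _ in range(pop_size)]
          for _ in range(generations):
              # tournament selection of parents
              parents = [min(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
              children = []
              for i in range(0, pop_size, 2):
                  a, b = parents[i], parents[(i + 1) % pop_size]
                  child = (a + b) // 2                      # simple arithmetic crossover
                  if random.random() < 0.2:                 # mutation
                      child += random.randint(-3, 3)
                  children.append(min(max(child, n_min), n_max))
                  children.append(a if fitness(a) < fitness(b) else b)
              pop = children[:pop_size]
          return min(pop, key=fitness)

      print("suggested hidden-layer size:", genetic_search())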

  6. The input of geomorphology to oil-related developments in Shetland and Northeast Scotland

    International Nuclear Information System (INIS)

    Ritchie, W.

    1991-01-01

    In essence, the input of coastal geomorphology to most oil-related developments at the coastline has been descriptive environmental classification. The uses to which this information has been put are twofold: (1) as background reconnaissance data that are prepared in advance of a development, such as the exploitation of a nearshore drilling lease or a pipeline landfall, and (2) as a basic element in oil spill contingency mapping. A more specialized use of geomorphology has been environmental management advice relating to the construction, restoration, and operation of large-diameter oil and gas pipeline landfalls - all of which make their approach in Northeast Scotland through beach and dune complexes. The techniques consist of traditional morphological mapping considering form, aspect, materials, energy, and estimations of contemporary processes. Implicit in this mapping is the recognition of vulnerability which, in turn, relates closely to habitat recognition. Time is rarely available for process-type measurements. There is also a dependence on existing maps, aerial photographs, and reports. The survey may be done on foot, from boats, fixed-wing aircraft, or helicopters. Airborne video is increasingly being used as a supplementary means of data acquisition. Vertical airborne video used with an image-processing and G.I.S. system shows great potential and has been used experimentally for pipeline route selection

  7. Boosting antibody developability through rational sequence optimization.

    Science.gov (United States)

    Seeliger, Daniel; Schulz, Patrick; Litzenburger, Tobias; Spitz, Julia; Hoerer, Stefan; Blech, Michaela; Enenkel, Barbara; Studts, Joey M; Garidel, Patrick; Karow, Anne R

    2015-01-01

    The application of monoclonal antibodies as commercial therapeutics poses substantial demands on the stability and properties of an antibody. Therapeutic molecules that exhibit favorable properties increase the success rate in development. However, it is not yet fully understood how the protein sequence of an antibody translates into favorable in vitro molecule properties. In this work, computational design strategies based on heuristic sequence analysis were used to systematically modify an antibody that exhibited a tendency to precipitate in vitro. The resulting series of closely related antibodies showed improved stability as assessed by biophysical methods and long-term stability experiments. As a notable observation, expression levels also improved in comparison with the wild-type candidate. The methods employed to optimize the protein sequences, as well as the biophysical data used to determine the effect on stability under conditions commonly used in the formulation of therapeutic proteins, are described. Together, the experimental and computational data led to consistent conclusions regarding the effect of the introduced mutations. Our approach exemplifies how computational methods can be used to guide antibody optimization for increased stability.

  8. Optimization of the electron collection efficiency of a large area MCP-PMT for the JUNO experiment

    International Nuclear Information System (INIS)

    Chen, Lin; Tian, Jinshou; Liu, Chunliang; Wang, Yifang; Zhao, Tianchi; Liu, Hulin; Wei, Yonglin; Sai, Xiaofeng; Chen, Ping; Wang, Xing; Lu, Yu; Hui, Dandan; Guo, Lehui; Liu, Shulin; Qian, Sen; Xia, Jingkai; Yan, Baojun; Zhu, Na; Sun, Jianning; Si, Shuguang

    2016-01-01

    A novel large-area (20-inch) photomultiplier tube based on microchannel plates (MCP-PMT) is proposed for the Jiangmen Underground Neutrino Observatory (JUNO) experiment. Its photoelectron collection efficiency C_e is limited by the MCP open area fraction (A_open). This efficiency is studied as a function of the angular (θ) and energy (E) distributions of electrons in the input charge cloud and the potential difference (U) between the PMT photocathode and the MCP input surface, considering secondary electron emission from the MCP input electrode. In CST Studio Suite, the Finite Integral Technique and the Monte Carlo method are combined to investigate the dependence of C_e on θ, E and U. Results predict that C_e can exceed A_open, and are applied to optimize the structure and operational parameters of the 20-inch MCP-PMT prototype. C_e of the optimized MCP-PMT is expected to reach 81.2%. Finally, the reduction of the penetration depth of the MCP input electrode layer and the deposition of a high secondary electron yield material on the MCP are proposed to further optimize C_e.

  9. Optimal Product Variety, Scale Effects and Growth

    NARCIS (Netherlands)

    de Groot, H.L.F.; Nahuis, R.

    1997-01-01

    We analyze the social optimality of growth and product variety in a model of endogenous growth. The model contains two sectors, one assembly sector producing a homogenous consumption good, and one intermediate goods sector producing a differentiated input used in the assembly sector. Growth results

  10. Space Vector Modulation for an Indirect Matrix Converter with Improved Input Power Factor

    Directory of Open Access Journals (Sweden)

    Nguyen Dinh Tuyen

    2017-04-01

    Full Text Available Pulse width modulation strategies have been developed for indirect matrix converters (IMCs in order to improve their performance. In indirect matrix converters, the LC input filter is used to remove input current harmonics and electromagnetic interference problems. Unfortunately, due to the existence of the input filter, the input power factor is diminished, especially during operation at low voltage outputs. In this paper, a new space vector modulation (SVM is proposed to compensate for the input power factor of the indirect matrix converter. Both computer simulation and experimental studies through hardware implementation were performed to verify the effectiveness of the proposed modulation strategy.

  11. Optimization based tuning approach for offset free MPC

    DEFF Research Database (Denmark)

    Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2012-01-01

    We present an optimization based tuning procedure with certain robustness properties for an offset free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system identification techniques using convex optimization can be used for identification of such models from input-output data. The stochastic part of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC-design procedure to ensure offset-free control. The ARMAX model description resulting from the extension can be realized as a state space model in innovation form. The MPC is designed and implemented based on this state space model in innovation form. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity...
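
    The convex identification step mentioned above can be illustrated with ordinary least squares on an ARX regressor; the sketch below is a generic ARX fit on synthetic input-output data and is not the paper's full tuning procedure.

      import numpy as np

      def identify_arx(u, y, na=2, nb=2):
          """Least-squares ARX identification:
          y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb] + e[k]."""
          n = max(na, nb)
          rows, targets = [], []
          for k in range(n, len(y)):
              rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
              targets.append(y[k])
          theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
          return theta[:na], theta[na:]                     # a-parameters, b-parameters

      rng = np.random.default_rng(1)
      u = rng.standard_normal(500)
      y = np.zeros(500)
      for k in range(2, 500):   # synthetic plant: y[k] = 1.2 y[k-1] - 0.4 y[k-2] + 0.5 u[k-1]
          y[k] = 1.2 * y[k - 1] - 0.4 * y[k - 2] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
      a, b = identify_arx(u, y)
      print("a parameters:", a, "b parameters:", b)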

  12. Sensory Synergy as Environmental Input Integration

    Directory of Open Access Journals (Sweden)

    Fady eAlnajjar

    2015-01-01

    Full Text Available The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensor synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with 9 healthy participants. Proprioceptive information, represented by the changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. The changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to be likely to have a strong correlation between sensory and muscle signaling, as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the prosthesis’ sensory system to make the controller simpler.

  13. Sensory synergy as environmental input integration.

    Science.gov (United States)

    Alnajjar, Fady; Itkonen, Matti; Berenz, Vincent; Tournier, Maxime; Nagai, Chikara; Shimoda, Shingo

    2014-01-01

    The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensor synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with nine healthy participants. Proprioceptive information, represented by the changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. The changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to be likely to have a strong correlation between sensory and muscle signaling, as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the prosthesis' sensory system to make the controller simpler.

  14. Optimal Dimensioning of Broadband Integrated Services Digital Network

    Directory of Open Access Journals (Sweden)

    Zdenka Chmelikova

    2005-01-01

    Full Text Available The tasks of this paper are to study the influence of the statistical parameters of the input flow and the parameter requirements relating to VP (Virtual Path) or VC (Virtual Channel) dimensioning. It is necessary to consider the different inter-arrival times and the different holding times at the connection level. The input flow arrival process is considered to be a Poisson process, and the holding time is considered to be exponentially distributed. Permanent allocation of VPs is made with separate VPs, where each VP enables transmitting the offered load with an explicit bandwidth and an explicit loss. A mathematical model was created to verify the above dependences for different types of telecommunication signals and different input flows. The "Comnet III" software was selected for experimental verification of the optimization process. The simulation model was based on this software, which simulates ATM network traffic for different input flows.
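
    The Poisson-arrival and exponential-holding-time assumptions above correspond to the classical Erlang-B loss model; the sketch below dimensions a virtual path to a target blocking probability under that model (the traffic figures are illustrative, and the paper's own simulation-based procedure is more detailed).

      def erlang_b(offered_load, channels):
          """Blocking probability for Poisson arrivals and exponential holding times."""
          b = 1.0
          for n in range(1, channels + 1):
              b = offered_load * b / (n + offered_load * b)
          return b

      def dimension_vp(offered_load, target_loss):
          """Smallest capacity (in channels / bandwidth units) meeting the loss target."""
          n = 1
          while erlang_b(offered_load, n) > target_loss:
              n += 1
          return n

      # Example: 20 Erlang of offered load and an explicit loss of 1% on the virtual path.
      print("required channels:", dimension_vp(20.0, 0.01))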

  15. Forecasting the development of regional economy on the basis of input — output tables

    Directory of Open Access Journals (Sweden)

    Yury Konstantinovich Mashunin

    2014-06-01

    Full Text Available The article presents a practical technology for forecasting the development of a regional economy, including the statement of the problem, the construction of a mathematical model, and its implementation. In constructing the model, standard statistical data for the previous period (2011), built on the basis of the input-output table, are used. A block of final-demand output resulting from investments is added. As a result, a model of the regional economy is obtained in the form of a vector mathematical programming problem that takes into account the investment processes in the region. Its purpose is to maximize the production of final demand in the region (all industries in the region) within the constraints of the input-output balance, investments, resource costs and capacities. For solving the vector linear programming problem, methods based on the principle of criteria normalization and the guaranteed result are used. The dynamic vector problem is solved over a specified number of years. Factors are introduced that take into account the growth rates of gross volumes (resources), final demand, and investment in every sector of the region. The numerical implementation of the forecast is shown in a test case modeling the economy of Primorsky Krai, comprising fifteen branches over a three-year period in accordance with the requirements of the Budget Code. The results of the solution include the major economic indicators for the region: gross output, the gross regional product (GRP), investments (including a breakdown by industry), as well as payroll, taxes and others. All these economic indicators are the basis for the formation of budget revenues in the region.
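
    At the core of such a model is the input-output balance; the sketch below solves the basic Leontief relation x = (I - A)^(-1) d for a toy three-sector table (the coefficients and final demand are invented for illustration, and the paper's full vector-optimization formulation is not reproduced).

      import numpy as np

      # Illustrative three-sector "input - output" table: A holds technical coefficients,
      # d is final demand by sector.  The balance x = A x + d gives x = (I - A)^(-1) d.
      A = np.array([[0.10, 0.30, 0.10],
                    [0.20, 0.05, 0.25],
                    [0.05, 0.15, 0.10]])
      d = np.array([100.0, 150.0, 80.0])

      x = np.linalg.solve(np.eye(3) - A, d)          # gross output needed to meet final demand
      value_added = x * (1.0 - A.sum(axis=0))        # output minus intermediate inputs
      print("gross output by sector:", np.round(x, 1))
      print("total value added (GRP analogue):", round(float(value_added.sum()), 1))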

  16. An input-to-state stability approach to verify almost global stability of a synchronous-machine-infinite-bus system.

    Science.gov (United States)

    Schiffer, Johannes; Efimov, Denis; Ortega, Romeo; Barabanov, Nikita

    2017-08-13

    Conditions for almost global stability of an operating point of a realistic model of a synchronous generator with constant field current connected to an infinite bus are derived. The analysis is conducted by employing the recently proposed concept of input-to-state stability (ISS)-Leonov functions, which is an extension of the powerful cell structure principle developed by Leonov and Noldus to the ISS framework. Compared with the original ideas of Leonov and Noldus, the ISS-Leonov approach has the advantage of providing additional robustness guarantees. The efficiency of the derived sufficient conditions is illustrated via numerical experiments. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).

  17. Multi-input wide dynamic range ADC system for use with nuclear detectors

    Energy Technology Data Exchange (ETDEWEB)

    Austin, R W [National Aeronautics and Space Administration, Huntsville, Ala. (USA). George C. Marshall Space Flight Center

    1976-04-15

    A wide dynamic range, eight input analog-to-digital converter system has been developed for use in nuclear experiments. The system consists of eight dual-range sample and hold modules, an eight input multiplexer, a ten-bit analog-to-digital converter, and the associated control logic.

  18. A joint routing and speed optimization problem

    OpenAIRE

    Fukasawa, Ricardo; He, Qie; Santos, Fernando; Song, Yongjia

    2016-01-01

    Fuel cost contributes to a significant portion of operating cost in cargo transportation. Though classic routing models usually treat fuel cost as input data, fuel consumption heavily depends on the travel speed, which has led to the study of optimizing speeds over a given fixed route. In this paper, we propose a joint routing and speed optimization problem to minimize the total cost, which includes the fuel consumption cost. The only assumption made on the dependence between the fuel cost an...

  19. Quasi-dynamic walk of a quadruped locomotion robot using optimal tracking control

    International Nuclear Information System (INIS)

    Uchida, Hiroaki; Nonami, Kenzo; Chiba, Yasunori; Koyama, Kakutaro.

    1994-01-01

    Recently, many studies of quadruped locomotion robots, which are expected to operate on irregular terrain, have been carried out. To realize ideal motion control of a quadruped locomotion robot, hierarchical cooperative control consisting of decentralized control and centralized control is assumed to be desirable. When the locomotion robot moves at high speed, it is impossible to follow the desired trajectory using only feedback control, because feedback control alone involves time delay. It is known that feedforward control input is effective for such motion control. In this paper, as a first step, decentralized control is realized by applying optimal tracking control using feedforward control input to the quadruped locomotion robot. As a result, the angle variation of the foot and the stride obtained with optimal tracking control input are found to be large compared with those obtained using only feedback control. It is verified that feedforward control input is useful for controlling the trajectory of the tip of the foot in high-speed locomotion. (author)

  20. Developing an optimal valve closing rule curve for real-time pressure control in pipes

    Energy Technology Data Exchange (ETDEWEB)

    Bazarganlari, Mohammad Reza; Afshar, Hossein [Islamic Azad University, Tehran (Iran, Islamic Republic of); Kerachian, Reza [University of Tehran, Tehran (Iran, Islamic Republic of); Bashiazghadi, Seyyed Nasser [Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)

    2013-01-15

    Sudden valve closure in pipeline systems can cause high pressures that may lead to serious damage. Using an optimal valve closing rule can play an important role in managing extreme pressures in sudden valve closure. In this paper, an optimal closing rule curve is developed using a multi-objective optimization model and Bayesian networks (BNs) for controlling water pressure during valve closure, instead of traditional step functions or single linear functions. The method of characteristics is used to simulate transient flow caused by valve closure. The non-dominated sorting genetic algorithm-II (NSGA-II) is also used to develop a Pareto front among three objectives related to the maximum and minimum water pressures and the amount of water that passes through the valve during the closing process. Simulation and optimization processes are usually time-consuming; thus, the results of the optimization model are used to train the BN. The trained BN is capable of determining optimal real-time closing rules without running costly simulation and optimization models. To demonstrate its efficiency, the proposed methodology is applied to a reservoir-pipe-valve system and the optimal closing rule curve is calculated for the valve. The results of the linear and BN-based valve closure rules show that the latter can significantly reduce the range of variations in water hammer pressures.
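
    NSGA-II, mentioned above, repeatedly ranks candidate solutions by Pareto dominance; the sketch below shows only that non-dominated filtering step on stand-in objective vectors for candidate closing rules (the objectives and data are illustrative assumptions, not outputs of the transient simulator).

      import numpy as np

      def pareto_front(objectives):
          """Return the indices of non-dominated candidates; all objectives are minimised."""
          objectives = np.asarray(objectives)
          n = len(objectives)
          keep = np.ones(n, dtype=bool)
          for i in range(n):
              for j in range(n):
                  if i != j and np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i]):
                      keep[i] = False                # candidate i is dominated by candidate j
                      break
          return np.where(keep)[0]

      rng = np.random.default_rng(2)
      # Stand-in objective vectors for 50 candidate closing rules:
      # (max pressure, negative minimum-pressure margin, passed volume), all to be minimised.
      candidates = rng.random((50, 3))
      print("Pareto-optimal closing rules:", pareto_front(candidates))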

  1. Fault detection for discrete-time switched systems with sensor stuck faults and servo inputs.

    Science.gov (United States)

    Zhong, Guang-Xin; Yang, Guang-Hong

    2015-09-01

    This paper addresses the fault detection problem of switched systems with servo inputs and sensor stuck faults. The attention is focused on designing a switching law and its associated fault detection filters (FDFs). The proposed switching law uses only the current states of the FDFs, which guarantees the residuals are sensitive to the servo inputs with known frequency ranges in faulty cases and robust against them in the fault-free case. Thus, arbitrarily small sensor stuck faults, including outage faults, can be detected in the finite-frequency domain. The levels of sensitivity and robustness are measured in terms of the finite-frequency H_- index and l2-gain. Finally, the switching law and FDFs are obtained by the solution of a convex optimization problem. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Spatial Optimization of Future Urban Development with Regards to Climate Risk and Sustainability Objectives.

    Science.gov (United States)

    Caparros-Midwood, Daniel; Barr, Stuart; Dawson, Richard

    2017-11-01

    Future development in cities needs to manage increasing populations, climate-related risks, and sustainable development objectives such as reducing greenhouse gas emissions. Planners therefore face a challenge of multidimensional, spatial optimization in order to balance potential tradeoffs and maximize synergies between risks and other objectives. To address this, a spatial optimization framework has been developed. This uses a spatially implemented genetic algorithm to generate a set of Pareto-optimal results that provide planners with the best set of trade-off spatial plans for six risk and sustainability objectives: (i) minimize heat risks, (ii) minimize flooding risks, (iii) minimize transport travel costs to minimize associated emissions, (iv) maximize brownfield development, (v) minimize urban sprawl, and (vi) prevent development of greenspace. The framework is applied to Greater London (U.K.) and shown to generate spatial development strategies that are optimal for specific objectives and differ significantly from the existing development strategies. In addition, the analysis reveals tradeoffs between different risks as well as between risk and sustainability objectives. While increases in heat or flood risk can be avoided, there are no strategies that do not increase at least one of these. Tradeoffs between risk and other sustainability objectives can be more severe, for example, minimizing heat risk is only possible if future development is allowed to sprawl significantly. The results highlight the importance of spatial structure in modulating risks and other sustainability objectives. However, not all planning objectives are suited to quantified optimization and so the results should form part of an evidence base to improve the delivery of risk and sustainability management in future urban development. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.

  3. Input filter compensation for switching regulators

    Science.gov (United States)

    Lee, F. C.; Kelkar, S. S.

    1982-01-01

    The problems caused by the interaction between the input filter, output filter, and the control loop are discussed. The input filter design is made more complicated because of the need to avoid performance degradation and also stay within the weight and loss limitations. Conventional input filter design techniques are then discussed. The concept of pole zero cancellation is reviewed; this concept is the basis for an approach to control the peaking of the output impedance of the input filter and thus mitigate some of the problems caused by the input filter. The proposed approach for control of the peaking of the output impedance of the input filter is to use a feedforward loop working in conjunction with feedback loops, thus forming a total state control scheme. The design of the feedforward loop for a buck regulator is described. A possible implementation of the feedforward loop design is suggested.

  4. Development of Optimal Stressor Scenarios for New Operational Energy Systems

    Science.gov (United States)

    2017-12-01

    Development of Optimal Stressor Scenarios for New Operational Energy Systems, by Geoffrey E. Fastabend, Naval Postgraduate School, December 2017 (Thesis Advisor: Alejandro S...). The thesis developed and tested a simulation model for operational energy related systems in order to develop better stressor scenarios for acceptance testing.

  5. The effectiveness of visual input enhancement on the noticing and L2 development of the Spanish past tense

    Directory of Open Access Journals (Sweden)

    Shawn Loewen

    2016-03-01

    Full Text Available Textual manipulation is a common pedagogic tool used to emphasize specific features of a second language (L2) text, thereby facilitating noticing and, ideally, second language development. Visual input enhancement has been used to investigate the effects of highlighting specific grammatical structures in a text. The current study uses a quasi-experimental design to determine the extent to which textual manipulation increases (a) learners’ perception of targeted forms and (b) their knowledge of the forms. Input enhancement was used to highlight the Spanish preterit and imperfect verb forms, and an eye tracker measured the frequency and duration of participants’ fixation on the targeted items. In addition, pretests and posttests of the Spanish past tense provided information about participants’ knowledge of the targeted forms. Results indicate that learners were aware of the highlighted grammatical forms in the text; however, there was no difference in the amount of attention between the enhanced and unenhanced groups. In addition, both groups improved in their knowledge of the L2 forms; however, again, there was no differential improvement between the two groups.

  6. Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques

    Science.gov (United States)

    2017-11-01

    Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques, by Canh Ly, Nghia Tran, and Ozlem Kilic (...Research Laboratory). Approved for public release; distribution is...

  7. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier when solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All of these problems affect classifier accuracy and are the reason there is no globally optimal method for classification, and there is no generalized improvement method that can increase the accuracy of an arbitrary classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending upon the algorithm complexity. PMID:24883382

  8. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier when solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All of these problems affect classifier accuracy and are the reason there is no globally optimal method for classification, and there is no generalized improvement method that can increase the accuracy of an arbitrary classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending upon the algorithm complexity.
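
    The paper's GMC model is not reproduced here; as a generic illustration of combining several base classifiers into an ensemble, the sketch below uses scikit-learn's soft-voting ensemble on synthetic data (the choice of base learners and dataset is an assumption for demonstration only).

      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.linear_model import LogisticRegression
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.ensemble import VotingClassifier
      from sklearn.metrics import accuracy_score

      # Synthetic data stand in for the public datasets used in the paper.
      X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      base_learners = [
          ("lr", LogisticRegression(max_iter=1000)),
          ("dt", DecisionTreeClassifier(random_state=0)),
          ("nb", GaussianNB()),
      ]
      ensemble = VotingClassifier(estimators=base_learners, voting="soft")

      for name, model in base_learners + [("ensemble", ensemble)]:
          model.fit(X_tr, y_tr)
          print(name, accuracy_score(y_te, model.predict(X_te)))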

  9. A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System

    Science.gov (United States)

    Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.

    2017-10-01

    A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines like physics, biology, and electrical engineering, as well as in the social sciences like economics, sociology and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high performance processors and advanced mathematical computations, it is possible to develop high performing simulators for complicated Multi Input Multi Output (MIMO) systems like quadruple-tank systems, aircraft, boilers, etc. This paper presents the development of the mathematical model of a 500 MW utility boiler, which is a highly complex system. A synergistic combination of operational experience, system identification and lower order modeling philosophy has been effectively used to develop a simplified but accurate model of the circulation system of a utility boiler, which is a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through the design procedure. The model obtained can be directly used for control system studies and to realize hardware simulators for boiler testing and operator training.

  10. 7 CFR 3430.607 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    7 CFR § 3430.607 (Stakeholder input), Regulations of the Department of Agriculture (Continued), Cooperative State Research, Education...: CSREES shall seek and obtain stakeholder input through a variety of forums...

  11. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  12. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the

  13. PC-based input/output controllers from a VME perspective

    International Nuclear Information System (INIS)

    Hill, J.O.

    1999-01-01

    The Experimental Physics and Industrial Control System (EPICS) has been widely adopted in the accelerator community. Although EPICS is available on many platforms, the majority of sites have deployed VME- or VXI-based input output controllers running the vxWorks real time operating system. Recently, a hybrid approach using vxWorks on both PC and traditional platforms is being implemented at LANL. To illustrate these developments the author compares his recent experience deploying PC-based EPICS input output controllers with experience deploying similar systems based on traditional EPICS platforms

  14. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD is one of the first efforts to construct the global multi-regional input-output (GMRIO tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries.
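
    The PageRank-style centrality mentioned above can be computed by power iteration on a row-normalized flow matrix; the sketch below does this for a toy four-industry table standing in for a WIOD-style world input-output table (the damping factor and flow values are illustrative assumptions).

      import numpy as np

      def pagerank(flows, damping=0.85, tol=1e-10):
          """Power-iteration PageRank on a weighted, directed industry flow matrix
          (flows[i, j] = monetary flow from industry i to industry j)."""
          n = flows.shape[0]
          out = flows.sum(axis=1, keepdims=True)
          out[out == 0] = 1.0                        # simplified handling of sink industries
          P = flows / out                            # row-stochastic transition matrix
          r = np.full(n, 1.0 / n)
          while True:
              r_new = (1 - damping) / n + damping * (P.T @ r)
              if np.abs(r_new - r).sum() < tol:
                  return r_new
              r = r_new

      # Toy four-industry flow matrix standing in for a WIOD-style table.
      flows = np.array([[0., 5., 2., 0.],
                        [1., 0., 4., 3.],
                        [0., 2., 0., 6.],
                        [2., 0., 1., 0.]])
      print("PageRank centrality:", pagerank(flows))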

  15. An Optimized Elasto-Plastic Subgrade Reaction For Modeling The Response Of A Nonlinear Foundation For A Structural Analysis

    Directory of Open Access Journals (Sweden)

    Ray Richard Paul

    2015-09-01

    Full Text Available Geotechnical and structural engineers are faced with a difficult task when their designs interact with each other. For complex projects, this is more the norm than the exception. In order to help bridge that gap, a method for modeling the behavior of a foundation using a simple elasto-plastic subgrade reaction was developed. The method uses an optimization technique to position 4-6 springs along a pile foundation to produce load-deflection characteristics similar to those modeled by more sophisticated geotechnical finite element software. The methodology uses an Excel spreadsheet for accepting user input and delivering an optimized subgrade spring stiffness, yield, and position along the pile. In this way, the behavior developed from the geotechnical software can be transferred to the structural analysis software. The optimization is achieved through the Solver add-in within Excel. Additionally, a beam on a nonlinear elastic foundation model is used to compute deflections of the optimized subgrade reaction configuration.

  16. 7 CFR 3430.15 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    7 CFR § 3430.15 (Stakeholder input): Section 103(c)(2) of the Agricultural Research, Extension, and Education Reform Act of 1998... RFAs for competitive programs. CSREES will provide instructions for submission of stakeholder input in...

  17. The optimal performance of a quantum refrigeration cycle working with harmonic oscillators

    International Nuclear Information System (INIS)

    Lin Bihong; Chen Jincan; Hua Ben

    2003-01-01

    The cycle model of a quantum refrigeration cycle working with many non-interacting harmonic oscillators and consisting of two isothermal and two constant-frequency processes is established. Based on the quantum master equation and semi-group approach, the general performance of the cycle is investigated. Expressions for some important performance parameters, such as the coefficient of performance, cooling rate, power input, and rate of the entropy production, are derived. Several interesting cases are discussed and, especially, the optimal performance of the cycle at high temperatures is discussed in detail. Some important characteristic curves of the cycle, such as the cooling rate versus coefficient of performance curves, the power input versus coefficient of performance curves, the cooling rate versus power input curves, and so on, are presented. The maximum cooling rate and the corresponding coefficient of performance are calculated. Other optimal performances are also analysed. The results obtained here are compared with those of an Ericsson or Stirling refrigeration cycle using an ideal gas as the working substance. Finally, the optimal performance of a harmonic quantum Carnot refrigeration cycle at high temperatures is derived easily

  18. Risky play and children's safety: balancing priorities for optimal child development.

    Science.gov (United States)

    Brussoni, Mariana; Olsen, Lise L; Pike, Ian; Sleet, David A

    2012-08-30

    Injury prevention plays a key role in keeping children safe, but emerging research suggests that imposing too many restrictions on children's outdoor risky play hinders their development. We explore the relationship between child development, play, and conceptions of risk taking with the aim of informing child injury prevention. Generational trends indicate children's diminishing engagement in outdoor play is influenced by parental and societal concerns. We outline the importance of play as a necessary ingredient for healthy child development and review the evidence for arguments supporting the need for outdoor risky play, including: (1) children have a natural propensity towards risky play; and, (2) keeping children safe involves letting them take and manage risks. Literature from many disciplines supports the notion that safety efforts should be balanced with opportunities for child development through outdoor risky play. New avenues for investigation and action are emerging seeking optimal strategies for keeping children "as safe as necessary," not "as safe as possible." This paradigm shift represents a potential for epistemological growth as well as cross-disciplinary collaboration to foster optimal child development while preserving children's safety.

  19. Concepts in production ecology for analysis and quantification of agricultural input-output combinations.

    NARCIS (Netherlands)

    Ittersum, van M.K.; Rabbinge, R.

    1997-01-01

    Definitions and concepts of production ecology are presented as a basis for development of alternative production technologies characterized by their input-output combinations. With these concepts the relative importance of several growth factors and inputs is investigated to explain actual yield

  20. Input description for BIOPATH

    International Nuclear Information System (INIS)

    Marklund, J.E.; Bergstroem, U.; Edlund, O.

    1980-01-01

    The computer program BIOPATH describes the flow of radioactivity within a given ecosystem after a postulated release of radioactive material and the resulting dose for specified population groups. The present report accounts for the input data necessary to run BIOPATH. The report also contains descriptions of possible control cards and an input example as well as a short summary of the basic theory.(author)

  1. Applications of an alternative formulation for one-layer real time optimization

    Directory of Open Access Journals (Sweden)

    Schiavon Júnior A.L.

    2000-01-01

    Full Text Available This paper presents two applications of an alternative formulation for a one-layer real-time structure for control and optimization. This new formulation has arisen from the predictive controller QDMC (Quadratic Dynamic Matrix Control), a type of predictive control (Model Predictive Control - MPC). At each sampling time, the values of the process outputs are fed into the optimization-control structure, which supplies the new values of the manipulated variables already considering the best conditions of the process. The variables of optimization are both set-point changes and control actions. The future stationary outputs and the future stationary control actions both have a formulation different from that of the conventional one-layer structure, and they are calculated from the inverse gain matrix of the process. This alternative formulation generates a convex problem, which can be solved by less sophisticated optimization algorithms. Linear and nonlinear economic objective functions were considered. The proposed approach was applied to two linear models, one SISO (single-input/single-output) and the other MIMO (multiple-input/multiple-output). The results showed excellent performance.
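
    The QDMC-type move computation underlying the formulation can be illustrated by the unconstrained dynamic-matrix-control solution; the sketch below computes a single control move from step-response coefficients (the horizons, weights, and the simplified free response are assumptions, and the paper's one-layer economic formulation is not reproduced).

      import numpy as np

      def dmc_control_move(step_resp, r, y_now, P=10, M=3, lam=0.1):
          """Unconstrained DMC/QDMC move: delta_u = (G'G + lam*I)^(-1) G'(r - f), where G is
          the dynamic matrix built from step-response coefficients and f is the free
          response (here simply the current output held constant, a simplification)."""
          s = np.asarray(step_resp)
          G = np.zeros((P, M))
          for i in range(P):
              for j in range(M):
                  if i >= j:
                      G[i, j] = s[i - j]
          e = np.full(P, r) - np.full(P, y_now)      # predicted error over the horizon
          delta_u = np.linalg.solve(G.T @ G + lam * np.eye(M), G.T @ e)
          return delta_u[0]                          # only the first move is applied

      # Step-response coefficients of a stable first-order process (illustrative).
      step_resp = 1.0 - np.exp(-np.arange(1, 30) / 5.0)
      print("first control move:", dmc_control_move(step_resp, r=1.0, y_now=0.2))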

  2. ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining

    Science.gov (United States)

    Chandrasekaran, Muthumari; Tamang, Santosh

    2017-08-01

    Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increased application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of the desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons, and surface roughness was the output neuron. An ANN architecture of 3-5-1 is found to be optimal, and the model predicts with an average percentage error of 7.72 %. The Particle Swarm Optimization (PSO) technique is used for optimizing parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority in obtaining optimum cutting parameters satisfying the desired surface roughness. The method has better convergence capability with a minimum number of iterations.
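
    As an illustration of the ANN-PSO idea, the sketch below runs a minimal global-best PSO over (N, f, d) to minimize a proxy machining time subject to a roughness limit, with a stand-in algebraic model in place of the trained ANN (all coefficients, bounds, and the penalty are assumptions).

      import numpy as np

      rng = np.random.default_rng(3)

      def predicted_roughness(x):
          """Stand-in for the trained 3-5-1 ANN roughness model Ra = f(N, f, d)."""
          speed, feed, depth = x
          return 0.8 + 2.5 * feed ** 2 + 0.4 * depth - 0.0004 * speed + 0.001 * speed * feed

      def objective(x, ra_max=1.2):
          """Proxy machining time plus a penalty when predicted roughness exceeds the limit."""
          speed, feed, depth = x
          cut_time = 1.0e4 / (speed * feed * depth)
          return cut_time + 1.0e3 * max(0.0, predicted_roughness(x) - ra_max)

      lower = np.array([500.0, 0.05, 0.2])     # spindle speed (rpm), feed (mm/rev), depth (mm)
      upper = np.array([2000.0, 0.30, 1.5])

      # Minimal global-best PSO with inertia and bound clipping.
      n_particles, iters = 30, 150
      pos = lower + rng.random((n_particles, 3)) * (upper - lower)
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(iters):
          r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lower, upper)
          vals = np.array([objective(p) for p in pos])
          better = vals < pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("suggested cutting parameters (N, f, d):", np.round(gbest, 3))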

  3. Instantaneous input electrical power measurements of HITU transducer

    International Nuclear Information System (INIS)

    Karaboece, B; Guelmez, Y; Rajagapol, S; Shaw, A

    2011-01-01

    HITU (High Intensity Therapeutic Ultrasound) transducers are widely used in therapeutic ultrasound in medicine. The output ultrasonic power of a HITU transducer can be measured by a number of methods described in the IEC 61161 standard [1]. New IEC standards specifically for measurement of HITU equipment are under development. The ultrasound power radiated from a transducer depends on the applied input electrical voltage and current, and consequently on the input electrical power. However, up to now, no standardised method has been developed and adopted for input electrical power measurements. Hence, a work package was carried out to establish such a method in the frequency range of 1 to 3 MHz as part of the EURAMET EMRP Era-net plus 'External Beam Cancer Therapy' project. Several current shunts were developed and evaluated. Current measurements were also realized with a Philips current probe and preamplifier at NPL and an Agilent current probe at UME. In this paper, a method for the measurement of instantaneous electrical power delivered to a reactive ultrasound transducer in the required frequency range is explored.

  4. Instantaneous input electrical power measurements of HITU transducer

    Energy Technology Data Exchange (ETDEWEB)

    Karaboece, B; Guelmez, Y [Tuebitak Ulusal Metroloji Enstituesue (UME), P.K. 54 41470 Gebze-Kocaeli (Turkey); Rajagapol, S; Shaw, A, E-mail: baki.karaboce@ume.tubitak.gov.t [National Physical Laboratory (NPL), Hampton Road, Teddington TW11 0LW (United Kingdom)

    2011-02-01

    HITU (High Intensity Therapeutic Ultrasound) transducers are widely used in therapeutic ultrasound in medicine. The output ultrasonic power of a HITU transducer can be measured by a number of methods described in the IEC 61161 standard [1]. New IEC standards specifically for the measurement of HITU equipment are under development. The ultrasound power radiated from a transducer depends on the applied input electrical voltage and current, and consequently on the input electrical power. But, up to now, no standardised method has been developed and adopted for input electrical power measurements. Hence, a work package was carried out to establish such a method in the frequency range of 1 to 3 MHz as part of the EURAMET EMRP Era-net plus 'External Beam Cancer Therapy' project. Several current shunts were developed and evaluated. Current measurements were also realized with a Philips current probe and preamplifier at NPL and an Agilent current probe at UME. In this paper, a method for the measurement of the instantaneous electrical power delivered to a reactive ultrasound transducer in the required frequency range is explored.
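
    As an illustration of the quantity being standardised (not the shunt- or probe-based measurement method itself), the sketch below computes instantaneous power p(t) = v(t)·i(t), and the resulting active power, apparent power and power factor, from sampled voltage and current waveforms; the waveforms here are synthetic, not measurement data.

```python
import numpy as np

fs = 100e6            # sample rate (Hz), assumed
f0 = 1.5e6            # drive frequency within the 1-3 MHz range of interest
t = np.arange(0, 200e-6, 1 / fs)

# Synthetic steady-state waveforms with a phase shift representing a
# reactive load (values are not measurement data).
v = 50.0 * np.sin(2 * np.pi * f0 * t)              # volts
i = 0.8 * np.sin(2 * np.pi * f0 * t - np.pi / 6)   # amps, lagging 30 degrees

p_inst = v * i                      # instantaneous power p(t) = v(t) * i(t)
p_real = p_inst.mean()              # active power over the record
p_apparent = np.sqrt((v**2).mean()) * np.sqrt((i**2).mean())  # Vrms * Irms
power_factor = p_real / p_apparent

print(f"active power  : {p_real:.2f} W")
print(f"apparent power: {p_apparent:.2f} W")
print(f"power factor  : {power_factor:.3f}")   # ~cos(30 deg) = 0.866
```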

  5. A theoretical cost optimization model of reused flowback distribution network of regional shale gas development

    International Nuclear Information System (INIS)

    Li, Huajiao; An, Haizhong; Fang, Wei; Jiang, Meng

    2017-01-01

    The logistics of timing and transporting the flowback generated by each shale gas well to the next well are a big challenge. Because more and more flowback is stored temporarily near the shale gas wells and reused in shale gas development, both transportation cost and storage cost are a heavy burden for the developers. This research proposes a theoretical cost optimization model that obtains the optimal flowback distribution solution for multiple regional shale gas wells from a holistic perspective. Empirical data from the Marcellus Shale were then used for an empirical study. In addition, we compared the optimal flowback distribution solution that considers both transportation and storage costs with the solutions that minimize only the transportation cost or only the storage cost. - Highlights: • A theoretical cost optimization model to get the optimal flowback distribution solution. • An empirical study using shale gas data from Bradford County of the Marcellus Shale. • Visualization of optimal flowback distribution solutions under different scenarios. • Transportation cost is the more important factor for reducing the total cost. • Helps developers cut the storage and transportation costs of reusing flowback.
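
    A hedged, toy version of the cost trade-off described above: a small linear program that allocates flowback between trucking to reuse sites and temporary storage so that total transportation plus storage cost is minimized. All volumes and unit costs are invented placeholders, not Marcellus data.

```python
import numpy as np
from scipy.optimize import linprog

supply = np.array([120.0, 80.0, 60.0])   # flowback produced at 3 source wells (m3)
demand = np.array([90.0, 70.0])          # reuse demand at 2 newly fractured wells (m3)

c_trans = np.array([[4.0, 6.0],          # $/m3 to truck flowback from well i to site j
                    [5.0, 3.0],
                    [7.0, 4.0]])
c_store = np.array([2.0, 2.5, 1.8])      # $/m3 to store flowback that is not shipped

n_src, n_dst = c_trans.shape
n_x = n_src * n_dst                      # shipped volumes x[i, j], then stored s[i]
c = np.concatenate([c_trans.ravel(), c_store])

A_eq, b_eq = [], []                      # balance: sum_j x[i, j] + s[i] = supply[i]
for i in range(n_src):
    row = np.zeros(n_x + n_src)
    row[i * n_dst:(i + 1) * n_dst] = 1.0
    row[n_x + i] = 1.0
    A_eq.append(row)
    b_eq.append(supply[i])

A_ub, b_ub = [], []                      # demand: sum_i x[i, j] >= demand[j]
for j in range(n_dst):
    row = np.zeros(n_x + n_src)
    for i in range(n_src):
        row[i * n_dst + j] = -1.0
    A_ub.append(row)
    b_ub.append(-demand[j])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x[:n_x].reshape(n_src, n_dst)
print("shipping plan (m3):\n", x.round(1))
print("stored volumes (m3):", res.x[n_x:].round(1))
print("total cost ($):", round(res.fun, 1))
```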

  6. Large-area landslide susceptibility with optimized slope-units

    Science.gov (United States)

    Alvioli, Massimiliano; Marchesini, Ivan; Reichenbach, Paola; Rossi, Mauro; Ardizzone, Francesca; Fiorucci, Federica; Guzzetti, Fausto

    2017-04-01

    A Slope-Unit (SU) is a type of morphological terrain unit bounded by drainage and divide lines that maximize the within-unit homogeneity and the between-unit heterogeneity across distinct physical and geographical boundaries [1]. Compared to other terrain subdivisions, SU are morphological terrain units well related to the natural (i.e., geological, geomorphological, hydrological) processes that shape and characterize natural slopes. This makes SU easily recognizable in the field or on topographic base maps, and well suited for environmental and geomorphological analysis, in particular for landslide susceptibility (LS) modelling. An optimal subdivision of an area into a set of SU depends on multiple factors: the size and complexity of the study area, the quality and resolution of the available terrain elevation data, the purpose of the terrain subdivision, and the scale and resolution of the phenomena for which SU are delineated. We use the recently developed r.slopeunits software [2,3] for the automatic, parametric delineation of SU within the open source GRASS GIS, based on terrain elevation data and a small number of user-defined parameters. The software provides subdivisions consisting of SU with different shapes and sizes, as a function of the input parameters. In this work, we describe a procedure for the optimal selection of the user parameters through the production of a large number of realizations of the LS model. We tested the software and the optimization procedure in a 2,000 km² area in Umbria, Central Italy. For LS zonation we adopt a logistic regression model implemented in a well-known software package [4,5], using about 50 independent variables. To select the optimal SU partition for LS zonation, we want to define a metric able to quantify simultaneously: (i) slope-unit internal homogeneity, (ii) slope-unit external heterogeneity, and (iii) landslide susceptibility model performance. To this end, we define a comprehensive objective function S, as the product of three
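
    A minimal illustration (not the authors' code) of one ingredient of the metric described above: fitting a logistic-regression susceptibility model on slope-unit predictors and scoring it with AUC, a common measure of LS model performance. The predictor table and labels are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_units, n_vars = 1500, 10          # slope units x thematic variables (toy sizes)
X = rng.normal(size=(n_units, n_vars))
# Synthetic "landslide presence" label loosely driven by two predictors.
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 1, n_units)
y = (logit > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"susceptibility model AUC: {auc:.3f}")
```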

  7. Evaluating Optimism: Developing Children’s Version of Optimistic Attributional Style Questionnaire

    Directory of Open Access Journals (Sweden)

    Gordeeva T.O.,

    2017-08-01

    Full Text Available People differ significantly in how they usually explain to themselves the reasons for events, both positive and negative, that happen in their lives. Psychological research shows that children who tend to think optimistically have certain advantages compared to their pessimistically thinking peers: they are less likely to suffer from depression, establish more positive relationships with peers, and demonstrate higher academic achievement. This paper describes the process of creating the children's version of the Optimistic Attributional Style Questionnaire (OASQ-C). This technique is based on the theory of learned helplessness and optimism developed by M. Seligman, L. Abramson and J. Teasdale and is an efficient (compact) tool for measuring optimism as an explanatory style in children and adolescents (9-14 years). Confirmatory factor analysis revealed that the instrument has a two-factor structure with acceptable reliability. Validity is supported by the presence of expected correlations between explanatory style and measures of psychological well-being, dispositional optimism, positive attitude to life and its aspects, depression, and academic performance. The outcomes of this technique are not affected by social desirability. The developed questionnaire may be recommended to researchers and school counsellors for evaluating optimism (optimistic thinking) as one of the major factors of psychological well-being in children; it may also be used in assessing the effectiveness of cognitively oriented training for adolescents.

  8. Combined shape and topology optimization of 3D structures

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Bærentzen, Jakob Andreas; Nobel-Jørgensen, Morten

    2015-01-01

    We present a method for automatic generation of 3D models based on shape and topology optimization. The optimization procedure, or model generation process, is initialized by a set of boundary conditions, an objective function, constraints and an initial structure. Using this input, the method will automatically deform and change the topology of the initial structure such that the objective function is optimized subject to the specified constraints and boundary conditions. For example, this tool can be used to improve the stiffness of a structure before printing, reduce the amount of material needed...

  9. An example in linear quadratic optimal control

    NARCIS (Netherlands)

    Weiss, George; Zwart, Heiko J.

    1998-01-01

    We construct a simple example of a quadratic optimal control problem for an infinite-dimensional linear system based on a shift semigroup. This system has an unbounded control operator. The cost is quadratic in the input and the state, and the weighting operators are bounded. Despite its extreme
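
    As a hedged, finite-dimensional analogue only (the record concerns an infinite-dimensional system with an unbounded control operator), the sketch below solves a standard LQ problem via the continuous-time algebraic Riccati equation with bounded weighting matrices.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, -0.5]])      # toy linear dynamics
B = np.array([[0.0],
              [1.0]])            # bounded input operator in this toy case
Q = np.eye(2)                    # bounded state weighting
R = np.array([[1.0]])            # bounded input weighting

P = solve_continuous_are(A, B, Q, R)       # stabilizing Riccati solution
K = np.linalg.solve(R, B.T @ P)            # optimal feedback u = -K x
print("Riccati solution P:\n", P.round(3))
print("feedback gain K:", K.round(3))
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K).round(3))
```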

  10. Whole-Brain Monosynaptic Afferent Inputs to Basal Forebrain Cholinergic System

    Directory of Open Access Journals (Sweden)

    Rongfeng Hu

    2016-10-01

    Full Text Available The basal forebrain cholinergic system (BFCS) robustly modulates many important behaviors, such as arousal, attention, learning and memory, through heavy projections to the cortex and hippocampus. However, the presynaptic partners governing BFCS activity remain poorly understood. Here, we utilized a recently developed rabies virus-based cell-type-specific retrograde tracing system to map the whole-brain afferent inputs of the BFCS. We found that the BFCS receives inputs from multiple cortical areas, such as the orbital frontal cortex, motor cortex, and insular cortex, and that the BFCS also receives dense inputs from several subcortical nuclei related to motivation and stress, including the lateral septum (LS), central amygdala (CeA), paraventricular nucleus of the hypothalamus (PVH), dorsal raphe (DRN) and parabrachial nucleus (PBN). Interestingly, we found that the BFCS receives inputs from the olfactory areas and the entorhinal-hippocampal system. These results greatly expand our knowledge about the connectivity of the mouse BFCS and provide important preliminary indications for future exploration of circuit function.

  11. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The "Biosphere Model Report" (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, "Inhalation Exposure Input Parameters for the Biosphere Model", is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air
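
    A hedged illustration (not taken from the ERMYN documentation) of how a mass-loading parameter typically enters an air-concentration estimate in resuspension modelling: the activity concentration in air is the product of the activity concentration in the resuspendable material and the mass loading. The numbers are placeholders.

```python
# Ca = activity concentration in air (Bq/m3)
# Cs = activity concentration in resuspendable soil/ash (Bq/kg)
# S  = mass loading of resuspended particles in air (kg/m3)
def air_concentration(cs_bq_per_kg: float, mass_loading_kg_per_m3: float) -> float:
    """Activity concentration in air from resuspended particles."""
    return cs_bq_per_kg * mass_loading_kg_per_m3

cs = 500.0          # Bq/kg, hypothetical contamination of surface soil
s = 1.0e-7          # kg/m3, i.e. 100 micrograms of dust per cubic metre
print(f"Ca = {air_concentration(cs, s):.2e} Bq/m3")
```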

  12. Regulation of Dynamical Systems to Optimal Solutions of Semidefinite Programs: Algorithms and Applications to AC Optimal Power Flow

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    2015-07-01

    This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded in a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.
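
    A hedged sketch of the generic mechanism behind such controllers: projected dual (sub)gradient ascent with a diminishing stepsize, whose primal iterates are used as reference signals. The simple quadratic program below stands in for the SDPs and AC optimal power flow relaxations treated in the paper.

```python
import numpy as np

# Toy problem: minimize ||x - x0||^2 subject to a.x <= b.
x0 = np.array([2.0, 1.0])        # unconstrained optimum
a = np.array([1.0, 1.0])
b = 1.0

lam = 0.0
for k in range(1, 500):
    x = x0 - 0.5 * lam * a                 # primal minimizer of the Lagrangian
    g = a @ x - b                          # (sub)gradient of the dual function
    lam = max(0.0, lam + g / k)            # projected ascent, diminishing step

print("reference / primal iterate:", x.round(3))   # expected approx. [1.0, 0.0]
print("dual multiplier:", round(lam, 3))            # expected approx. 2.0
```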

  13. The Treeterbi and Parallel Treeterbi algorithms: efficient, optimal decoding for ordinary, generalized and pair HMMs

    DEFF Research Database (Denmark)

    Keibler, Evan; Arumugam, Manimozhiyan; Brent, Michael R

    2007-01-01

    MOTIVATION: Hidden Markov models (HMMs) and generalized HMMs have been successfully applied to many problems, but the standard Viterbi algorithm for computing the most probable interpretation of an input sequence (known as decoding) requires memory proportional to the length of the sequence, which can be prohibitive. Existing approaches to reducing memory usage either sacrifice optimality or trade increased running time for reduced memory. RESULTS: We developed two novel decoding algorithms, Treeterbi and Parallel Treeterbi, and implemented them in the TWINSCAN/N-SCAN gene-prediction system. The worst-case asymptotic space and time are the same as for standard Viterbi, but in practice, Treeterbi optimally decodes arbitrarily long sequences with generalized HMMs in bounded memory without increasing running time. Parallel Treeterbi uses the same ideas to split optimal decoding across processors, dividing latency...
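
    For background, the sketch below is a standard Viterbi decoder whose backpointer table, proportional to the sequence length, is exactly the memory cost that Treeterbi avoids; the toy two-state model and emission probabilities are illustrative and unrelated to TWINSCAN/N-SCAN.

```python
import numpy as np

states = ["exon", "intron"]
obs_symbols = {"A": 0, "C": 1, "G": 2, "T": 3}

log_start = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1],
                    [0.2, 0.8]])
log_emit = np.log([[0.30, 0.25, 0.25, 0.20],   # exon emission probabilities
                   [0.20, 0.30, 0.30, 0.20]])  # intron emission probabilities

def viterbi(sequence: str):
    obs = [obs_symbols[c] for c in sequence]
    n, k = len(obs), len(states)
    dp = np.full((n, k), -np.inf)       # best log-prob ending in each state
    back = np.zeros((n, k), dtype=int)  # backpointers: O(n*k) memory
    dp[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, n):
        for j in range(k):
            scores = dp[t - 1] + log_trans[:, j]
            back[t, j] = scores.argmax()
            dp[t, j] = scores.max() + log_emit[j, obs[t]]
    # Traceback of the most probable state path.
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

print(viterbi("ACGTGGCCA"))
```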

  14. Modeling and generating input processes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs for simulation studies. The primary area considered is multivariate, but much of the philosophy is relevant to univariate inputs as well. 14 refs.
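
    A hedged example of one common recipe for generating correlated multivariate simulation inputs: draw independent standard normals and impose a target correlation matrix through its Cholesky factor. The correlation matrix below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
target_corr = np.array([[1.0, 0.7, 0.2],
                        [0.7, 1.0, 0.4],
                        [0.2, 0.4, 1.0]])
L = np.linalg.cholesky(target_corr)

z = rng.standard_normal((10_000, 3))   # independent N(0, 1) draws
x = z @ L.T                            # correlated inputs with the target structure

print(np.corrcoef(x, rowvar=False).round(2))  # should approximate target_corr
```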

  15. A risk explicit interval linear programming model for uncertainty-based environmental economic optimization in the Lake Fuxian watershed, China.

    Science.gov (United States)

    Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model for three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risk and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.

  16. A Risk Explicit Interval Linear Programming Model for Uncertainty-Based Environmental Economic Optimization in the Lake Fuxian Watershed, China

    Directory of Open Access Journals (Sweden)

    Xiaoling Zhang

    2013-01-01

    Full Text Available The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers’ preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model for three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risk and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.
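
    A hedged illustration of the interval-LP idea underlying ILP/REILP (not the REILP risk formulation itself): bracket the optimum of a linear program whose coefficients are intervals by solving the optimistic and pessimistic deterministic sub-problems. All coefficients are toy values.

```python
import numpy as np
from scipy.optimize import linprog

# maximize c.x  subject to  A x <= b, x >= 0, with interval data [lo, hi]
c_lo, c_hi = np.array([3.0, 2.0]), np.array([4.0, 3.0])
A_lo = np.array([[1.0, 1.0], [2.0, 0.5]])
A_hi = np.array([[1.5, 1.5], [2.5, 1.0]])
b_lo, b_hi = np.array([8.0, 10.0]), np.array([10.0, 12.0])

def solve_max(c, A, b):
    # linprog minimizes, so negate the objective for maximization.
    res = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None))
    return -res.fun

f_best = solve_max(c_hi, A_lo, b_hi)   # optimistic sub-problem
f_worst = solve_max(c_lo, A_hi, b_lo)  # pessimistic sub-problem
print(f"objective interval: [{f_worst:.2f}, {f_best:.2f}]")
```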

  17. Opening the "Black Box" of efficiency measurement: Input allocation in multi-output settings

    NARCIS (Netherlands)

    Dierynck, B.; Cherchye, L.J.H.; Sabbe, J.; Roodhooft, F.; de Rock, B.

    2013-01-01

    We develop a new data envelopment analysis (DEA)-based methodology for measuring the efficiency of decision-making units (DMUs) characterized by multiple inputs and multiple outputs. The distinguishing feature of our method is that it explicitly includes information about output-specific inputs and
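
    For context, the sketch below solves the standard input-oriented CCR DEA model (not the output-specific-input method of the paper) as one linear program per decision-making unit; the input/output data are made up.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])  # inputs  (n x m)
Y = np.array([[1.0], [1.5], [2.0], [2.5]])                      # outputs (n x s)
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o: int) -> float:
    # Decision variables: [theta, lambda_1 ... lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]), bounds=bounds)
    return res.x[0]

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```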

  18. Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations

    Science.gov (United States)

    Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.

    2011-12-01

    HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of
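
    As background on the routing step only (this is not HydroSCOPE code), the sketch below applies classic Muskingum channel routing, the family of methods behind the modified Muskingum-Cunge approach mentioned above; K, X, the time step and the inflow hydrograph are illustrative values.

```python
import numpy as np

K, X, dt = 12.0, 0.2, 6.0          # storage constant (h), weighting factor, step (h)
denom = 2 * K * (1 - X) + dt
C0 = (dt - 2 * K * X) / denom
C1 = (dt + 2 * K * X) / denom
C2 = (2 * K * (1 - X) - dt) / denom
assert abs(C0 + C1 + C2 - 1.0) < 1e-12   # coefficients must sum to one

inflow = np.array([10, 20, 50, 80, 60, 40, 25, 15, 10], dtype=float)  # m3/s
outflow = np.zeros_like(inflow)
outflow[0] = inflow[0]             # assume an initial steady state
for t in range(1, len(inflow)):
    outflow[t] = C0 * inflow[t] + C1 * inflow[t - 1] + C2 * outflow[t - 1]

print(outflow.round(1))            # attenuated, delayed outflow hydrograph
```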

  19. Transport coefficient computation based on input/output reduced order models

    Science.gov (United States)

    Hurst, Joshua L.

    The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods that capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD)-based methods are investigated. First, simulation methods are identified that are amenable to systems theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high-order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions. Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived and special attention is given to potentials that are evaluated with Periodic Boundary Conditions (PBC). For the Gosling description, LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full-order LTI models can be well approximated by reduced-order LTI models. For the Lees-Edwards SLLOD-type model, the nonlinear ODEs are approximated by a Linear Time Varying (LTV) model about a nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) are used to assess the plausibility of reduced-order models for this system description. An immediate application of the derived LTV models is Quasilinearization or Waveform Relaxation. Quasilinearization is Newton's method applied to the ODE operator
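
    A hedged illustration of one reduction technique named above: Proper Orthogonal Decomposition (POD) of a snapshot matrix via the SVD, retaining the modes that capture most of the trajectory energy. The snapshot data are synthetic and unrelated to the molecular dynamics models of the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
n_states, n_snapshots = 200, 60
# Synthetic trajectory data that actually live near a 5-dimensional subspace.
basis = rng.standard_normal((n_states, 5))
coeffs = rng.standard_normal((5, n_snapshots))
snapshots = basis @ coeffs + 0.01 * rng.standard_normal((n_states, n_snapshots))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # smallest rank capturing 99.9% energy
Phi = U[:, :r]                                # POD basis

reduced = Phi.T @ snapshots                   # r-dimensional coordinates
rel_err = np.linalg.norm(snapshots - Phi @ reduced) / np.linalg.norm(snapshots)
print(f"retained modes: {r}, relative reconstruction error: {rel_err:.2e}")
```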

  20. GIS-based approach for optimal siting and sizing of renewables considering techno-environmental constraints and the stochastic nature of meteorological inputs

    Science.gov (United States)

    Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris

    2016-04-01

    Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (i.e. wind speed and solar radiation, respectively), which is an essential component for both planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and management purposes (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints associated with topographic limitations (e.g., terrain slope, proximity to the road and electricity grid networks, etc.), the environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental, and financial criteria. The final outcome is a set of GIS maps that depict the available energy potential and the optimal layout for photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system, extended to the regional scale.