Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics from experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation, but few studies consider the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Estimating parameters for models from only a few sampling points is therefore of significant practical value. For signal transduction, the sampling intervals are usually unevenly distributed and chosen heuristically. In this paper, we investigate an approach that guides the selection of time points so as to minimize the variance of the parameter estimates. We first formulate the task as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and getting stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
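The variance-minimizing idea can be illustrated with a toy identifiability calculation. The sketch below is not the paper's quantum-inspired algorithm; it assumes a hypothetical one-parameter decay model y(t) = exp(-k t) with i.i.d. Gaussian noise, where minimizing the estimator variance amounts to maximizing the scalar Fisher information over candidate sampling times:

```python
import math
from itertools import combinations

def sensitivity(t, k):
    # dy/dk for the hypothetical decay model y(t) = exp(-k t)
    return -t * math.exp(-k * t)

def fisher_info(times, k, sigma=0.1):
    # Scalar Fisher information for one parameter under i.i.d. Gaussian noise;
    # Var(k_hat) is approximately 1 / fisher_info, so we maximize this quantity
    return sum(sensitivity(t, k) ** 2 for t in times) / sigma ** 2

def best_design(candidates, n, k):
    # Exhaustive D-optimal search over all n-point designs (feasible for small grids)
    return max(combinations(candidates, n), key=lambda ts: fisher_info(ts, k))

grid = [0.1 * i for i in range(1, 51)]   # candidate sampling times 0.1 .. 5.0
design = best_design(grid, 3, k=1.0)
print([round(t, 1) for t in design])
```

For k = 1 the information density t^2 exp(-2kt) peaks at t = 1/k, so the search concentrates the three samples around t = 1.0; an evolutionary search addresses the same objective when exhaustive enumeration is infeasible.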
Free-time and fixed end-point multi-target optimal control theory: Application to quantum computing
International Nuclear Information System (INIS)
Mishima, K.; Yamashita, K.
2011-01-01
Graphical abstract: The two-state Deutsch-Jozsa algorithm used to demonstrate the utility of free-time and fixed end-point multi-target optimal control theory. Research highlights: → Free-time and fixed end-point multi-target optimal control theory (FRFP-MTOCT) was constructed. → The features of our theory include optimization of the external time-dependent perturbations with high transition probabilities, that of the temporal duration, the monotonic convergence, and the ability to optimize multiple laser pulses simultaneously. → The advantage of the theory and a comparison with conventional fixed-time and fixed end-point multi-target optimal control theory (FIFP-MTOCT) are presented by comparing data calculated using the present theory with those published previously [K. Mishima, K. Yamashita, Chem. Phys. 361 (2009) 106]. → The qubit system of our interest consists of two polar NaCl molecules coupled by dipole-dipole interaction. → The calculation examples show that our theory is useful for minor adjustment of the external fields. - Abstract: An extension of free-time and fixed end-point optimal control theory (FRFP-OCT) to monotonically convergent free-time and fixed end-point multi-target optimal control theory (FRFP-MTOCT) is presented. The features of our theory include optimization of the external time-dependent perturbations with high transition probabilities, that of the temporal duration, the monotonic convergence, and the ability to optimize multiple laser pulses simultaneously. The advantage of the theory and a comparison with conventional fixed-time and fixed end-point multi-target optimal control theory (FIFP-MTOCT) are presented by comparing data calculated using the present theory with those published previously [K. Mishima, K. Yamashita, Chem. Phys. 361 (2009) 106]. The qubit system of our interest consists of two polar NaCl molecules coupled by dipole-dipole interaction. The calculation examples show that our theory is useful for minor adjustment of the external fields.
Optimal Power Flow by Interior Point and Non Interior Point Modern Optimization Algorithms
Directory of Open Access Journals (Sweden)
Marcin Połomski
2013-03-01
Full Text Available The idea of optimal power flow (OPF) is to determine the optimal settings for control variables while respecting various constraints; in general it is related to power system operational and planning optimization problems. A vast number of optimization methods have been applied to solve the OPF problem, but their performance is highly dependent on the size of the power system being optimized. The development of OPF has recently tracked significant progress in both numerical optimization techniques and their computer implementation. In recent years, the application of interior point methods to the OPF problem has received great attention. This is due to the fact that IP methods are among the fastest algorithms, well suited to large-scale nonlinear optimization problems. This paper presents a primal-dual interior point based optimal power flow algorithm and a new variant of a non-interior point algorithm, with application to the optimal power flow problem. The described algorithms were implemented in custom software. The experiments show the usefulness of the computational software and the implemented algorithms for solving the optimal power flow problem, including system models comparable in size to the National Power System.
A Feedback Optimal Control Algorithm with Optimal Measurement Time Points
Directory of Open Access Journals (Sweden)
Felix Jost
2017-02-01
Nonlinear model predictive control has been established over the last decades as a powerful methodology for providing feedback to dynamic processes. In practice it is usually combined with parameter and state estimation techniques, which allows one to cope with uncertainty on many levels. To reduce the uncertainty it has also been suggested to include optimal experimental design in the sequential process of estimation and control calculation. Most of the focus so far has been on dual control approaches, i.e., on using the controls to simultaneously excite the system dynamics (learning) and minimize a given objective (performing). We propose a new algorithm that sequentially solves robust optimal control, optimal experimental design, and state and parameter estimation problems. Thus, we decouple the control and experimental design problems. This has the advantage that the impact of measurement timing (sampling) can be analyzed independently, and it is practically relevant for applications with either an ethical limitation on system excitation (e.g., chemotherapy treatment) or the need for fast feedback. The algorithm shows promising results, with a 36% reduction of parameter uncertainties for the Lotka-Volterra fishing benchmark example.
Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.
2017-03-01
Treatment time reduction is a key issue in expanding the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of treatment time arising from moving the focal point during long pulses. In this context, optimization of the focal point trajectory is crucial to achieve a uniform thermal dose distribution and avoid boiling. First, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code, and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. An ex vivo study was then conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimensions of the whitened area on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved to take in vivo conditions into account, and extensively validated.
Real time production optimization
Energy Technology Data Exchange (ETDEWEB)
Saputelli, Luigi; Otavio, Joao; Araujo, Turiassu; Escorcia, Alvaro [Halliburton, Houston, TX (United States). Landmark Division
2004-07-01
Production optimization encompasses various activities of measuring, analyzing, modeling, prioritizing and implementing actions to enhance the productivity of a field. We present a state-of-the-art framework for optimizing production on a continuous basis as new sensor data are acquired in real time. Permanently acquired data are modeled and analyzed in order to create predictive models, and a model-based control strategy is used to regulate well and field instrumentation. The optimum field operating point, which changes with time, satisfies the maximum economic return. This work is a starting point for further development of automatic, intelligent reservoir technologies that make the most of permanently instrumented wells and remotely activated downhole completions. The strategy, tested with history-matched data from a compartmentalised giant field, proved to reduce operating costs while increasing oil recovery by 27% in this field. (author)
Flow area optimization in point to area or area to point flows
International Nuclear Information System (INIS)
Ghodoossi, Lotfollah; Egrican, Niluefer
2003-01-01
This paper deals with the constructal theory of generation of shape and structure in flow systems connecting one point to a finite size area. The flow direction may be either from the point to the area or the area to the point. The formulation of the problem remains the same if the flow direction is reversed. Two models are used in optimization of the point to area or area to point flow problem: cost minimization and revenue maximization. The cost minimization model enables one to predict the shape of the optimized flow areas, but the geometric sizes of the flow areas are not predictable. That is, as an example, if the area of flow is a rectangle with a fixed area size, optimization of the point to area or area to point flow problem by using the cost minimization model will only predict the height/length ratio of the rectangle not the height and length itself. By using the revenue maximization model in optimization of the flow problems, all optimized geometric aspects of the interested flow areas will be derived as well. The aim of this paper is to optimize the point to area or area to point flow problems in various elemental flow area shapes and various structures of the flow system (various combinations of elemental flow areas) by using the revenue maximization model. The elemental flow area shapes used in this paper are either rectangular or triangular. The forms of the flow area structure, made up of an assembly of optimized elemental flow areas to obtain bigger flow areas, are rectangle-in-rectangle, rectangle-in-triangle, triangle-in-triangle and triangle-in-rectangle. The global maximum revenue, revenue collected per unit flow area and the shape and sizes of each flow area structure have been derived in optimized conditions. The results for each flow area structure have been compared with the results of the other structures to determine the structure that provides better performance. The conclusion is that the rectangle-in-triangle flow area structure
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve of a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting, applicable when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem over the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point's distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate an application of the proposed algorithm.
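The two ingredients of this approach can be sketched together: a quicksort-style partial top-k sort, followed by knee detection inside the sorted prefix. The snippet below is a minimal illustration, not the paper's algorithm; it assumes a simple definition of the knee as the largest drop between consecutive sorted values, and it omits the probabilistic selection of k:

```python
import random

def top_k(values, k):
    # Quickselect-style partial sort: returns the k largest values, sorted
    # descending, without fully sorting the input (a quicksort variation)
    if k >= len(values):
        return sorted(values, reverse=True)
    pivot = random.choice(values)
    above = [v for v in values if v > pivot]
    equal = [v for v in values if v == pivot]
    if k <= len(above):
        return top_k(above, k)
    if k <= len(above) + len(equal):
        return sorted(above, reverse=True) + equal[:k - len(above)]
    below = [v for v in values if v < pivot]
    return sorted(above + equal, reverse=True) + top_k(below, k - len(above) - len(equal))

def knee_in_prefix(sorted_desc):
    # Knee = index of the largest drop between consecutive sorted values
    drops = [sorted_desc[i] - sorted_desc[i + 1] for i in range(len(sorted_desc) - 1)]
    return max(range(len(drops)), key=drops.__getitem__)

data = [3, 97, 40, 5, 95, 90, 2, 4, 1, 6]
prefix = top_k(data, 5)              # only the top 5 are sorted
print(prefix, knee_in_prefix(prefix))
```

If the knee is found within the prefix, the rest of the input never needs to be sorted; the paper's cascading scheme grows k step by step when it is not.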
Optimal Set-Point Synthesis in HVAC Systems
DEFF Research Database (Denmark)
Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik
2007-01-01
This paper presents optimal set-point synthesis for a heating, ventilating, and air-conditioning (HVAC) system. The HVAC system is made up of two heat exchangers: an air-to-air heat exchanger and a water-to-air heat exchanger. The objective function is composed of the electrical power for the different components, encompassing fans, primary/secondary pump, tertiary pump, and the air-to-air heat exchanger wheel, plus a fraction of the thermal power used by the HVAC system. The goals that have to be achieved by the HVAC system appear as constraints in the optimization problem. To solve the optimization problem, a steady-state model of the HVAC system is derived, while different supplying hydronic circuits are studied for the water-to-air heat exchanger. Finally, the optimal set-points and the optimal supplying hydronic circuit are obtained.
Optimal External-Memory Planar Point Enclosure
DEFF Research Database (Denmark)
Arge, Lars; Samoladas, Vasilis; Yi, Ke
2007-01-01
In this paper we study the external memory planar point enclosure problem: Given N axis-parallel rectangles in the plane, construct a data structure on disk (an index) such that all K rectangles containing a query point can be reported I/O-efficiently. This problem has important applications in e.g. spatial and temporal databases, and is dual to the important and well-studied orthogonal range searching problem. Surprisingly, despite the fact that the problem can be solved optimally in internal memory with linear space and O(log N+K) query time, we show that one cannot construct a linear sized external memory point enclosure data structure that can be used to answer a query in O(log_B N+K/B) I/Os, where B is the disk block size. To obtain this bound, Ω(N/B^(1−ε)) disk blocks are needed for some constant ε>0. With linear space, the best obtainable query bound is O(log_2 N+K/B) if a linear output...
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by the sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
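A minimal numerical sketch of the virtual-field idea: each sensor pair defines a hyperbola of constant arrival-time difference, and a smooth objective built from the pair residuals peaks at their common intersection, so a few badly picked arrivals cannot dominate it. The 2-D geometry, wave speed, and Gaussian field width below are hypothetical choices for illustration, not values from the paper:

```python
import math
from itertools import combinations

sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
v = 1.0                      # assumed wave speed (arbitrary units)
source = (3.0, 4.0)          # ground-truth source used to synthesize arrivals

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

arrivals = [dist(source, s) / v for s in sensors]

def virtual_field(x, width=0.1):
    # Smooth objective peaking where all sensor-pair hyperbolas intersect;
    # each pair contributes at most 1, which bounds the damage of a bad pick
    f = 0.0
    for i, j in combinations(range(len(sensors)), 2):
        residual = (dist(x, sensors[i]) - dist(x, sensors[j])) \
                   - v * (arrivals[i] - arrivals[j])
        f += math.exp(-residual ** 2 / width)
    return f

grid = [(float(px), float(py)) for px in range(11) for py in range(11)]
best = max(grid, key=virtual_field)   # coarse grid search stand-in for the optimizer
print(best)
```

With clean picks, the field attains its maximum value (the number of sensor pairs) exactly at the source; a real implementation would refine the grid maximum with a local optimizer.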
Optimal Power Flow Computation for Large-Scale Power Systems Using the Primal-Dual Interior Point Method
Directory of Open Access Journals (Sweden)
Syafii Syafii
2016-03-01
This paper focuses on the use of the primal-dual interior point method in optimal power flow analysis. Optimal power flow analysis with the primal-dual interior point method is then compared with a linear programming method using the Matpower program. The simulation results show that the primal-dual interior point computation agrees with the linear programming method in total generation cost and in the power produced by each plant. In terms of computation time, however, the primal-dual interior point method is faster than linear programming, especially for large systems: it solved the large-scale 9241-bus system in 40.59 seconds, whereas the linear programming method took 239.72 seconds. This is because the PDIP algorithm starts from a point x0 located within the feasible region and moves towards the optimal point, in contrast to the simplex method, which moves along the border of the feasible region from one extreme point to the next. The primal-dual interior point method is thus more efficient for solving the optimal power flow problem of large-scale power systems.
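For intuition about the problem both solvers address, the cost-minimization core of OPF (with the network constraints ignored) reduces, for quadratic generator costs, to the classical equal-incremental-cost dispatch, which even has a closed form. The cost coefficients and demand below are made up for illustration:

```python
def dispatch(a, b, demand):
    # Equal-incremental-cost dispatch for quadratic costs C_i(P) = a_i P^2 + b_i P:
    # at the optimum every unit runs at the same marginal cost lambda,
    # 2 a_i P_i + b_i = lambda, and sum(P_i) = demand fixes lambda.
    inv = [1.0 / (2.0 * ai) for ai in a]
    lam = (demand + sum(bi * w for bi, w in zip(b, inv))) / sum(inv)
    return [(lam - bi) * w for bi, w in zip(b, inv)], lam

p, lam = dispatch([0.01, 0.02], [2.0, 1.5], 100.0)
print([round(x, 2) for x in p], round(lam, 3))
```

Real OPF adds power-flow equality constraints and line/voltage limits, which is exactly where interior point and simplex-type methods diverge in behavior as the system grows.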
Energy Technology Data Exchange (ETDEWEB)
Jurencak, Tomas; Turek, Jakub; Nijssen, Estelle C. [Maastricht University Medical Center, Department of Radiology, P. Debyelaan 25, P.O. Box 5800, AZ, Maastricht (Netherlands); Kietselaer, Bastiaan L.J.H. [Maastricht University Medical Center, Department of Radiology, P. Debyelaan 25, P.O. Box 5800, AZ, Maastricht (Netherlands); Maastricht University Medical Center, CARIM School for Cardiovascular Diseases, Maastricht (Netherlands); Maastricht University Medical Center, Department of Cardiology, Maastricht (Netherlands); Mihl, Casper; Kok, Madeleine; Wildberger, Joachim E.; Das, Marco [Maastricht University Medical Center, Department of Radiology, P. Debyelaan 25, P.O. Box 5800, AZ, Maastricht (Netherlands); Maastricht University Medical Center, CARIM School for Cardiovascular Diseases, Maastricht (Netherlands); Ommen, Vincent G.V.A. van [Maastricht University Medical Center, Department of Cardiology, Maastricht (Netherlands); Garsse, Leen A.F.M. van [Maastricht University Medical Center, Department of Cardiothoracic Surgery, Maastricht (Netherlands)
2015-07-15
To determine the optimal imaging time point for transcatheter aortic valve implantation (TAVI) therapy planning by comprehensive evaluation of the aortic root. Multidetector-row CT (MDCT) examination with retrospective ECG gating was performed in 64 consecutive patients referred for pre-TAVI assessment and analyzed retrospectively. Eighteen different parameters of the aortic root were evaluated at 11 different time points in the cardiac cycle. The time points at which maximal (or minimal) sizes occurred were determined, and dimension differences relative to the other time points were evaluated. Theoretical prosthesis sizing based on the different measurements was compared. The largest dimensions were found between 10 and 20 % of the cardiac cycle for the annular short diameter (10 %); mean diameter (10 %); effective diameter and circumference-derived diameter (20 %); distance from the annulus to the right coronary artery ostium (10 %); aortic root at the left coronary artery level (20 %); aortic root at the widest portion of the coronary sinuses (20 %); and right leaflet length (20 %). Prosthesis size selection differed depending on the chosen measurements in 25-75 % of cases. Significant changes in the anatomical structures of the aortic root during the cardiac cycle are crucial for TAVI planning. Imaging in systole is mandatory to obtain maximal dimensions. (orig.)
Directory of Open Access Journals (Sweden)
Thi Rein Myo
2008-11-01
Optimal point-to-point trajectory planning for a planar redundant manipulator is considered in this study. The main objective is to minimize the sum of the position errors of the end-effector at each intermediate point along the trajectory, so that the end-effector can track the prescribed trajectory accurately. An algorithm combining a Genetic Algorithm with Pattern Search into a Generalized Pattern Search (GPS) is introduced to design the optimal trajectory. To verify the proposed algorithm, simulations for a 3-DOF planar manipulator with different end-effector trajectories have been carried out. A comparison between the Genetic Algorithm and the Generalized Pattern Search shows that the GPS gives excellent tracking performance.
Point charges optimally placed to represent the multipole expansion of charge distributions.
Directory of Open Access Journals (Sweden)
Ramu Anandakrishnan
We propose an approach for approximating electrostatic charge distributions with a small number of point charges chosen to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing the OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance of 2x the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order.
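The OPCA construction can be mimicked on a toy problem: choose the magnitudes and positions of two opposite charges to minimize the mid-field RMS potential error. The 1-D source distribution, the evaluation ring, and the brute-force grid search below are hypothetical stand-ins for the paper's numerical optimization framework:

```python
import math
from itertools import product

# Hypothetical collinear source distribution: (charge, x-position); net charge 0
source = [(-1.0, 0.0), (0.5, 0.2), (0.5, 0.4)]
extent, center = 0.4, 0.2
R = 2.0 * extent                       # "mid-field" radius, 2x the extent

def potential(charges, p):
    return sum(q / math.dist((x, 0.0), p) for q, x in charges)

# evaluation points on a ring of radius R around the distribution's center
points = [(center + R * math.cos(a), R * math.sin(a))
          for a in [k * math.pi / 8 for k in range(16)]]

def rms_error(charges):
    return math.sqrt(sum((potential(charges, p) - potential(source, p)) ** 2
                         for p in points) / len(points))

# brute-force 2-charge "OPCA": +q at x1, -q at x2, scanned on a coarse grid
grid = [round(-0.2 + 0.05 * i, 2) for i in range(17)]   # positions -0.2 .. 0.6
qs = [0.25 * i for i in range(1, 13)]                   # magnitudes 0.25 .. 3.0
best = min(((q, x1, x2) for q, x1, x2 in product(qs, grid, grid) if x1 != x2),
           key=lambda c: rms_error([(c[0], c[1]), (-c[0], c[2])]))
print(best, round(rms_error([(best[0], best[1]), (-best[0], best[2])]), 5))
```

By construction the optimized pair can do no worse in the mid-field than the naive choice of placing unit charges on the "atom centers" of the distribution, mirroring the comparison reported in the abstract.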
Optimal Time-Abstract Schedulers for CTMDPs and Markov Games
Directory of Open Access Journals (Sweden)
Markus Rabe
2010-06-01
We study time-bounded reachability in continuous-time Markov decision processes for time-abstract scheduler classes. Such reachability problems play a paramount role in dependability analysis and the modelling of manufacturing and queueing systems. Consequently, their analysis has been studied intensively, and techniques for the approximation of optimal control are well understood. From a mathematical point of view, however, the question of approximation is secondary compared to the fundamental question of whether or not optimal control exists. We demonstrate the existence of optimal schedulers for the time-abstract scheduler classes for all CTMDPs. Our proof is constructive: we show how to compute optimal time-abstract strategies with finite memory. It turns out that these optimal schedulers have an amazingly simple structure: they converge to an easy-to-compute memoryless scheduling policy after a finite number of steps. Finally, we show that our argument can easily be lifted to Markov games: both players have a likewise simple optimal strategy in these more general structures.
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we propose a new evolutionary multi-objective optimization method for solving the drone delivery problem (DDP), which can be formulated as a constrained multi-objective optimization problem. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs to calculate the optimal value of each objective function in advance, and it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to the DDP, which has many constraint conditions. To resolve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions, and a new reference solution named the "provisional-ideal point" to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and the limited scope of application. The results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to the DDP. As a result, the delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with using only one truck.
Li, Jinna; Kiumarsi, Bahare; Chai, Tianyou; Lewis, Frank L; Fan, Jialu
2017-12-01
Industrial flow lines are composed of unit processes operating on a fast time scale and performance measurements known as operational indices measured at a slower time scale. This paper presents a model-free optimal solution to a class of two time-scale industrial processes using off-policy reinforcement learning (RL). First, the lower-layer unit process control loop with a fast sampling period and the upper-layer operational index dynamics at a slow time scale are modeled. Second, a general optimal operational control problem is formulated to optimally prescribe the set-points for the unit industrial process. Then, a zero-sum game off-policy RL algorithm is developed to find the optimal set-points by using data measured in real-time. Finally, a simulation experiment is employed for an industrial flotation process to show the effectiveness of the proposed method.
Optimizing Probability of Detection Point Estimate Demonstration
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws need to be detected reliably using these NDE methods, and a reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing POD demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures and which models the probability density with the binomial distribution. Normally, a set of 29 flaws of the same size (within some tolerance) is used in the demonstration. The optimization provides an acceptable probability of passing the demonstration (PPD) and an acceptable probability of false (POF) calls while keeping the flaw sizes in the set as small as possible.
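The binomial model behind the point estimate method is easy to reproduce. Assuming independent detections, the sketch below computes the probability that a flaw set passes a zero-miss demonstration; the classic 29-of-29 criterion works because a true POD of 0.90 passes with probability 0.9^29, which is just under 5%, the basis of a 90/95 POD claim:

```python
from math import comb

def prob_pass(pod, n=29, max_misses=0):
    # Probability that the flaw set passes the demonstration when each of the
    # n flaws is detected independently with probability `pod` (binomial model)
    return sum(comb(n, k) * (1 - pod) ** k * pod ** (n - k)
               for k in range(max_misses + 1))

# A marginal system (POD = 0.90) almost always fails the 29-of-29 demonstration,
# while a strong one (POD = 0.98) passes much more often.
print(round(prob_pass(0.90), 4), round(prob_pass(0.98), 4))
```

Optimizing the demonstration then means trading off n and the allowed misses so that the probability of passing is acceptable for good systems while the probability of falsely qualifying a poor one stays below the confidence target.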
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption associated with feasible point methods is that the gradients of the constraints are linearly independent. In practice, this regularity assumption may be violated. To avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (a controller, in effect) is then designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system, and the invariance principle is applied to analyze the behavior of the solutions. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
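The feasible-point idea can be sketched for a single constraint, where the classical projection matrix P = I - J^T (J J^T)^(-1) J is well defined whenever the constraint gradient is nonzero. The example below is not from the paper: it minimizes f(x) = x1 + x2 on the unit circle by Euler-integrating the projected negative gradient, with a renormalization step to correct discretization drift:

```python
import math

def grad_f(x):
    # gradient of the objective f(x) = x1 + x2
    return [1.0, 1.0]

def jac_h(x):
    # Jacobian (here a row vector) of the constraint h(x) = x1^2 + x2^2 - 1
    return [2.0 * x[0], 2.0 * x[1]]

def project(g, J):
    # P g = g - J^T (J J^T)^{-1} J g : tangential component of g along {h = 0};
    # well defined here since J J^T = 4(x1^2 + x2^2) > 0 on the circle (regularity)
    JJt = sum(j * j for j in J)
    Jg = sum(j * gi for j, gi in zip(J, g))
    return [gi - j * Jg / JJt for gi, j in zip(g, J)]

x = [1.0, 0.0]                       # feasible starting point on the circle
for _ in range(2000):
    step = project(grad_f(x), jac_h(x))
    x = [xi - 0.01 * si for xi, si in zip(x, step)]
    n = math.hypot(*x)               # retraction: undo Euler drift off the circle
    x = [xi / n for xi in x]
print([round(xi, 3) for xi in x])
```

The flow stays (numerically) on the constraint manifold and settles at the constrained minimum (-sqrt(2)/2, -sqrt(2)/2); the paper's contribution is a projection that remains well defined when J J^T loses rank, which this textbook P does not handle.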
Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT
Energy Technology Data Exchange (ETDEWEB)
Aleman, Dionne M [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, ON M5S 3G8 (Canada); Glaser, Daniel [Division of Optimization and Systems Theory, Department of Mathematics, Royal Institute of Technology, Stockholm (Sweden); Romeijn, H Edwin [Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109-2117 (United States); Dempsey, James F, E-mail: aleman@mie.utoronto.c, E-mail: romeijn@umich.ed, E-mail: jfdempsey@viewray.co [ViewRay, Inc. 2 Thermo Fisher Way, Village of Oakwood, OH 44146 (United States)
2010-09-21
One of the most widely studied problems of the intensity-modulated radiation therapy (IMRT) treatment planning problem is the fluence map optimization (FMO) problem, the problem of determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.
Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT
International Nuclear Information System (INIS)
Aleman, Dionne M; Glaser, Daniel; Romeijn, H Edwin; Dempsey, James F
2010-01-01
One of the most widely studied problems of the intensity-modulated radiation therapy (IMRT) treatment planning problem is the fluence map optimization (FMO) problem, the problem of determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.
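As a self-contained illustration of barrier-type interior point optimization on a miniature FMO-like problem, the sketch below minimizes a least-squares dose objective over nonnegative fluences with a log barrier. The 3-voxel dose matrix and prescriptions are invented, and production FMO solvers use Newton steps rather than the plain backtracking gradient descent shown here:

```python
import math

D = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy dose matrix: 3 voxels, 2 beamlets
d = [1.0, -0.5, 0.5]                       # prescribed doses (one forces x2 -> 0)

def f(x, mu):
    # barrier objective ||D x - d||^2 - mu * sum(log x_i); +inf outside x > 0
    if min(x) <= 0:
        return math.inf
    r = [sum(Di[j] * x[j] for j in range(2)) - di for Di, di in zip(D, d)]
    return sum(ri * ri for ri in r) - mu * sum(math.log(xi) for xi in x)

def gradient(x, mu):
    r = [sum(Di[j] * x[j] for j in range(2)) - di for Di, di in zip(D, d)]
    return [2 * sum(D[i][j] * r[i] for i in range(3)) - mu / x[j] for j in range(2)]

x, mu = [0.5, 0.5], 1.0
while mu > 1e-10:                          # outer loop: shrink the barrier weight
    for _ in range(200):                   # inner loop: backtracking gradient descent
        g = gradient(x, mu)
        step = 1.0
        while f([xi - step * gi for xi, gi in zip(x, g)], mu) > \
                f(x, mu) - 1e-4 * step * sum(gi * gi for gi in g):
            step /= 2                      # Armijo backtracking keeps iterates interior
        x = [xi - step * gi for xi, gi in zip(x, g)]
    mu /= 10
print([round(xi, 3) for xi in x])
```

The iterates stay strictly inside the feasible region and converge to the nonnegative least-squares solution (0.75, 0) as the barrier weight vanishes, which is the mechanism that gives interior point methods their guaranteed optimality for convex FMO formulations.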
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward compatible HDR image/video compression, it is a common approach to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a 1-piece high-order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that, in the least squares solution, each entry in the intermediate matrix can be written as a sum of basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of pivot points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes the MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared to traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
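The look-up-table trick can be demonstrated directly: prefix sums of x^k, x^k*y, and y^2 make every entry of the per-segment normal equations, and the segment SSE itself, O(1) per candidate pivot, so the exhaustive pivot search costs about the same no matter how the range is split. The curve below is synthetic toy data, not an LDR/HDR signal:

```python
def solve3(A, b):
    # tiny Gauss-Jordan elimination for the 3x3 normal equations
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

xs = list(range(20))
ys = [x * x if x < 8 else 3 * x + 41 for x in xs]  # synthetic 2-piece curve, pivot at 8

n = len(xs)
P = [[0.0] * (n + 1) for _ in range(5)]   # look-up tables: prefix sums of x**k
Q = [[0.0] * (n + 1) for _ in range(3)]   # prefix sums of x**k * y
Y2 = [0.0] * (n + 1)                      # prefix sums of y**2
for i, (x, y) in enumerate(zip(xs, ys)):
    for k in range(5):
        P[k][i + 1] = P[k][i] + x ** k
    for k in range(3):
        Q[k][i + 1] = Q[k][i] + x ** k * y
    Y2[i + 1] = Y2[i] + y * y

def seg_sse(lo, hi):
    # least-squares SSE of a 2nd-order polynomial on xs[lo:hi], O(1) via tables
    S = lambda k: P[k][hi] - P[k][lo]
    T = lambda k: Q[k][hi] - Q[k][lo]
    a, b, c = solve3([[S(0), S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]],
                     [T(0), T(1), T(2)])
    # residual orthogonality: SSE = sum(y^2) - (a, b, c) . (T0, T1, T2)
    return (Y2[hi] - Y2[lo]) - (a * T(0) + b * T(1) + c * T(2))

best_pivot = min(range(3, 18), key=lambda p: seg_sse(0, p) + seg_sse(p, n))
print(best_pivot)
```

Every candidate pivot costs a fixed handful of table lookups and one 3x3 solve, which is the sense in which the full pivot search runs in near constant time per candidate.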
Optimizing the diagnostic power with gastric emptying scintigraphy at multiple time points
Directory of Open Access Journals (Sweden)
Gajewski Byron J
2011-05-01
Full Text Available Abstract Background Gastric emptying scintigraphy (GES) at intervals over 4 hours after a standardized radio-labeled meal is commonly regarded as the gold standard for diagnosing gastroparesis. The objectives of this study were: (1) to investigate the best time point and the best combination of multiple time points for diagnosing gastroparesis with repeated GES measures, and (2) to contrast and cross-validate Fisher's linear discriminant analysis (LDA), a rank-based distribution-free (DF) approach, and the classification and regression tree (CART) model. Methods A total of 320 patients with GES measures at 1, 2, 3, and 4 hours (h) after a standard meal using a standardized method were retrospectively collected. Area under the receiver operating characteristic (ROC) curve and the rate of false classification through jackknife cross-validation were used for model comparison. Results Due to strong correlation and an abnormality in data distribution, no substantial improvement in diagnostic power was found with the best linear combination by the LDA approach, even with data transformation. With the DF method, the linear combination of 4-h and 3-h increased the area under the curve (AUC) and decreased the number of false classifications (0.87; 15.0%) over individual time points (0.83, 0.82; 15.6%, 25.3%, for 4-h and 3-h, respectively) at a higher sensitivity level (sensitivity = 0.9). The CART model using 4 hourly GES measurements along with patient's age was the most accurate diagnostic tool (AUC = 0.88, false classification = 13.8%). Patients having a 4-h gastric retention value >10% were 5 times more likely to have gastroparesis (179/207 = 86.5%) than those with ≤10% (18/113 = 15.9%). Conclusions With a mixed group of patients either referred with suspected gastroparesis or investigated for other reasons, the CART model is more robust than the LDA and DF approaches, capable of accommodating covariate effects and can be generalized for cross-institutional applications, but
Time-Space Topology Optimization
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard
2008-01-01
A method for space-time topology optimization is outlined. The space-time optimization strategy produces structures with optimized material distributions that vary in space and in time. The method is demonstrated for one-dimensional wave propagation in an elastic bar that has a time-dependent Young's modulus and is subjected to a transient load. In the example an optimized dynamic structure is demonstrated that compresses a propagating Gauss pulse.
Optimal adaptive control for quantum metrology with time-dependent Hamiltonians
Pang, Shengshi; Jordan, Andrew N.
2017-01-01
Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T^2 time scaling of quantum Fisher information can be broken with time-dependent Hamiltonians, which reaches T^4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out additional control is necessary for that case. PMID:28276428
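The scaling claimed in the abstract can be stated compactly (a sketch reconstructed from the abstract's claims; the symbols μ_max, μ_min are our own notation for the extreme eigenvalues of the generator, not necessarily the authors'):

```latex
% Maximal quantum Fisher information for a parameter g in H(g,t),
% attainable with optimal (generally adaptive) Hamiltonian control:
F_Q^{\max}(T) \;=\; \left[ \int_0^T \bigl( \mu_{\max}(t) - \mu_{\min}(t) \bigr)\,\mathrm{d}t \right]^2
% where \mu_{\max}(t), \mu_{\min}(t) are the largest and smallest
% eigenvalues of \partial_g H(g,t). For the rotation frequency of a
% field, \mu_{\max} - \mu_{\min} grows linearly in t, so the integral
% scales as T^2 and F_Q^{\max} as T^4, beating the T^2 scaling typical
% of time-independent Hamiltonians.
```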
International Nuclear Information System (INIS)
Alavi, Seyed Arash; Ahmadian, Ali; Aliakbar-Golkar, Masoud
2015-01-01
Highlights: • Energy management is necessary in active distribution networks to reduce operation costs. • Uncertainty modeling is essential in energy management studies in active distribution networks. • The point estimate method is a suitable method for uncertainty modeling due to its low computation time and acceptable accuracy. • In the absence of a probability distribution function (PDF), robust optimization has a good ability for uncertainty modeling. - Abstract: Uncertainty can be defined as the probability of a difference between the forecasted value and the real value. The smaller this probability is, the lower the operation cost of the power system will be. This goal necessitates modeling the system's random variables (such as the output power of renewable resources and the load demand) with appropriate and practicable methods. In this paper, an adequate procedure is proposed for performing optimal energy management of a typical micro-grid with regard to the relevant uncertainties. The point estimate method is applied for modeling the wind power and solar power uncertainties, and a robust optimization technique is utilized to model load demand uncertainty. Finally, a comparison is made between deterministic and probabilistic management in different scenarios, and their results are analyzed and evaluated.
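A concrete sketch of the point estimate idea (Hong's 2m scheme, specialized here to symmetric, zero-skewness inputs; an illustration only, not the paper's exact formulation): each of the m random inputs is evaluated at μ_k ± σ_k·√m while the others are held at their means, and the 2m results are weighted equally.

```python
import math

def two_point_estimate(func, means, stds):
    # Hong's 2m point-estimate scheme for symmetric inputs: evaluate func at
    # mu_k +/- sigma_k*sqrt(m) (other inputs held at their means), weight each
    # of the 2m evaluations by 1/(2m), and recover the output's approximate
    # mean and variance.
    m = len(means)
    w = 1.0 / (2 * m)
    e1 = e2 = 0.0
    for k in range(m):
        for sign in (1.0, -1.0):
            x = list(means)
            x[k] = means[k] + sign * stds[k] * math.sqrt(m)
            y = func(x)
            e1 += w * y
            e2 += w * y * y
    return e1, e2 - e1 * e1  # (mean, variance)
```

For a linear response the recovered mean and variance are exact; for nonlinear responses such as power flows the method is an approximation, which is the trade-off the abstract alludes to.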
Optimization of thermal systems based on finite-time thermodynamics and thermoeconomics
Energy Technology Data Exchange (ETDEWEB)
Durmayaz, A. [Istanbul Technical University (Turkey). Department of Mechanical Engineering; Sogut, O.S. [Istanbul Technical University, Maslak (Turkey). Department of Naval Architecture and Ocean Engineering; Sahin, B. [Yildiz Technical University, Besiktas, Istanbul (Turkey). Department of Naval Architecture; Yavuz, H. [Istanbul Technical University, Maslak (Turkey). Institute of Energy
2004-07-01
The irreversibilities originating from finite-time and finite-size constraints are important in real thermal system optimization. Since classical thermodynamic analysis based on thermodynamic equilibrium does not consider these constraints directly, it is necessary to consider the energy transfer between the system and its surroundings in rate form. Finite-time thermodynamics provides a fundamental starting point for the optimization of real thermal systems by incorporating the fundamental concepts of heat transfer and fluid mechanics into classical thermodynamics. In this study, optimization studies of thermal systems that consider various objective functions, based on finite-time thermodynamics and thermoeconomics, are reviewed. (author)
A superlinear interior points algorithm for engineering design optimization
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior point algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to achieve superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
Primal Interior-Point Method for Large Sparse Minimax Optimization
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2009-01-01
Roč. 45, č. 5 (2009), s. 841-864 ISSN 0023-5954 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * minimax optimization * nonsmooth optimization * interior-point methods * modified Newton methods * variable metric methods * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.445, year: 2009 http://dml.cz/handle/10338.dmlcz/140034
Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty.
Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang
2015-01-01
Natural disasters have occurred frequently in recent years, causing huge casualties and property losses. Nowadays, people pay more and more attention to emergency logistics problems. This paper studies the emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective function is the maximization of the time-satisfaction degree. To overcome these two drawbacks, incomplete information and uncertain travel time, this paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable and optimal path. We then simplify the original model under the scenario that the vehicle only follows the optimal path from the emergency logistics center to the affected point, and use Lingo software to solve it. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method.
Blue-noise remeshing with farthest point optimization
Yan, Dongming; Guo, Jianwei; Jia, Xiaohong; Zhang, Xiaopeng; Wonka, Peter
2014-01-01
In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.
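A toy version of the FPO relaxation can be sketched in a few lines (2D only, with the farthest-point query approximated on a grid of candidates rather than the exact geometric query the authors build on; grid_n is an assumption of this sketch):

```python
import itertools
import math

def min_pairwise(pts):
    # Smallest distance between any two points (the blue-noise separation).
    return min(math.dist(p, q) for p, q in itertools.combinations(pts, 2))

def fpo_sweep(pts, grid_n=20):
    # One sweep of farthest-point optimization: each point in turn moves to
    # the candidate position farthest from all remaining points. Keeping the
    # point's current spot in the candidate set guarantees the minimum
    # separation never decreases over a sweep.
    pts = list(pts)
    grid = [(i / (grid_n - 1), j / (grid_n - 1))
            for i in range(grid_n) for j in range(grid_n)]
    for k in range(len(pts)):
        others = pts[:k] + pts[k + 1:]
        pts[k] = max(grid + [pts[k]],
                     key=lambda c: min(math.dist(c, q) for q in others))
    return pts
```

The non-decreasing separation distance is the property that drives the blue-noise behaviour; the adaptive and on-surface generalizations in the paper replace the uniform grid and Euclidean metric accordingly.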
Nonlinear Burn Control and Operating Point Optimization in ITER
Boyer, Mark; Schuster, Eugenio
2013-10-01
Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).
Liu, Wanli
2017-03-08
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
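The core ICP step that the method relies on, closed-form rigid alignment given point correspondences, can be sketched in 2D (a simplified illustration; the paper works with 3D LiDAR scans and couples ICP with an ISPKF, which is not reproduced here):

```python
import math

def best_rigid_2d(src, dst):
    # Closed-form least-squares rotation + translation mapping src -> dst,
    # given correspondences: the inner step of every ICP iteration.
    n = len(src)
    cx = sum(p[0] for p in src) / n; cy = sum(p[1] for p in src) / n
    dx = sum(q[0] for q in dst) / n; dy = sum(q[1] for q in dst) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - cx, py - cy      # src, centered
        bx, by = qx - dx, qy - dy      # dst, centered
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    th = math.atan2(s_sin, s_cos)      # optimal rotation angle
    c, s = math.cos(th), math.sin(th)
    return th, (dx - (c * cx - s * cy), dy - (s * cx + c * cy))

def icp(src, dst, iters=5):
    # Alternate nearest-neighbour matching with the closed-form alignment.
    cur = list(src)
    for _ in range(iters):
        match = [min(dst, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
                 for p in cur]
        th, (tx, ty) = best_rigid_2d(cur, match)
        c, s = math.cos(th), math.sin(th)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    return cur
```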
Sequential Change-Point Detection via Online Convex Optimization
Directory of Open Access Journals (Sweden)
Yang Cao
2018-02-01
Full Text Available Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor as the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real data examples validate our theory.
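The flavour of "sequential likelihood ratio with a non-anticipating estimator" can be illustrated with a toy plug-in CUSUM for a positive Gaussian mean shift (our own simplification, not the paper's mirror-descent procedure: the unknown post-change mean is replaced by a running average computed only from past data, restarting whenever the statistic hits zero):

```python
def detect_change(stream, threshold=15.0):
    # Toy plug-in CUSUM: theta is a non-anticipating running estimate of the
    # post-change mean, and theta*x - theta^2/2 is the Gaussian
    # log-likelihood-ratio increment it induces.
    S = 0.0
    run_sum, run_n = 0.0, 0
    for t, x in enumerate(stream):
        theta = run_sum / run_n if run_n else 0.5  # prior guess before data
        S += theta * x - 0.5 * theta * theta
        run_sum += x
        run_n += 1
        if S <= 0.0:                 # restart: no evidence of a change yet
            S, run_sum, run_n = 0.0, 0.0, 0
        if S > threshold:            # declare a change at time t
            return t
    return None
```

The threshold trades false alarm rate against detection delay, which is exactly the trade-off the average-run-length analysis in the abstract quantifies.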
Time-optimal thermalization of single-mode Gaussian states
Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio
2014-11-01
We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state and which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacement of the mode can be performed instantaneously), we study how control can be optimized for speeding up the relaxation towards the fixed point of the dynamics and we analytically derive the optimal relaxation time. Our model has potential and interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.
Directory of Open Access Journals (Sweden)
Po-Yu Chen
2017-01-01
Full Text Available Although the safe consumption of goods such as food products, medicine, and vaccines is related to their freshness, consumers frequently know less than suppliers about the freshness of goods when they purchase them. Because of this lack of information, apart from sales prices, consumers refer only to the manufacturing and expiration dates when deciding whether to purchase these goods and how many to buy. If dealers could determine the sales price at each point in time and customers' intention to buy goods of varying freshness, then dealers could set an optimal inventory cycle and allocate a weekly sales price for each point in time, thereby maximizing the profit per unit time. Therefore, in this study, an economic order quantity (EOQ) model was established to enable discussion of the optimal control of sales prices. The technique for identifying the optimal solution for the model was determined, the characteristics of the optimal solution were demonstrated, and the implications of the solution's sensitivity analysis were explained.
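For reference, the classical EOQ baseline that such price-dependent models generalize (standard textbook formulas; the paper's model additionally lets price, and hence demand, vary with freshness over the cycle):

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    # Classical economic order quantity: Q* = sqrt(2*D*K/h) balances
    # per-order setup cost against inventory holding cost.
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def total_cost(q, demand_rate, order_cost, holding_cost):
    # Ordering cost per unit time plus average holding cost per unit time.
    return demand_rate * order_cost / q + holding_cost * q / 2.0
```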
Dynamic Planar Convex Hull with Optimal Query Time and O(log n · log log n ) Update Time
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Jakob, Riko
2000-01-01
The dynamic maintenance of the convex hull of a set of points in the plane is one of the most important problems in computational geometry. We present a data structure supporting point insertions in amortized O(log n · log log log n) time, point deletions in amortized O(log n · log log n) time, and various queries about the convex hull in optimal O(log n) worst-case time. The data structure requires O(n) space. Applications of the new dynamic convex hull data structure are improved deterministic algorithms for the k-level problem and the red-blue segment intersection problem where all red and all...
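For contrast with the dynamic structure above, the static convex hull takes only a few lines (Andrew's monotone chain, O(n log n) after sorting; the paper's contribution is maintaining this object under insertions and deletions without recomputing it from scratch):

```python
def convex_hull(points):
    # Andrew's monotone chain: returns hull vertices in counterclockwise
    # order, lower chain followed by upper chain.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        # Cross product of (a - o) and (b - o): > 0 for a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```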
Power-limited low-thrust trajectory optimization with operation point detection
Chi, Zhemin; Li, Haiyang; Jiang, Fanghua; Li, Junfeng
2018-06-01
The power-limited solar electric propulsion system is considered more practical in mission design. An accurate mathematical model of the propulsion system, based on experimental data of the power generation system, is used in this paper. An indirect method is used to deal with the time-optimal and fuel-optimal control problems, in which the solar electric propulsion system is described using a finite number of operation points, which are characterized by different pairs of thruster input power. In order to guarantee the integration accuracy for the discrete power-limited problem, a power operation detection technique is embedded in the fourth-order Runge-Kutta algorithm with a fixed step. Moreover, the logarithmic homotopy method and a normalization technique are employed to overcome the difficulties caused by using indirect methods. Three numerical simulations with actual propulsion systems are given to substantiate the feasibility and efficiency of the proposed method.
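The embedded detection idea can be sketched generically (our own simplification: a fixed-step RK4 integrator that, when a switching function g changes sign across a step, bisects on the step length to land precisely on the switch, here for a scalar state):

```python
def rk4_step(f, x, h):
    # One classical fourth-order Runge-Kutta step for dx/dt = f(x).
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_until(f, g, x0, h=0.01, t_max=10.0, tol=1e-10):
    # Fixed-step RK4 with event detection: when g changes sign within a
    # step, bisect on the step length to locate the switching time.
    t, x = 0.0, x0
    while t < t_max:
        x_new = rk4_step(f, x, h)
        if g(x) * g(x_new) <= 0.0:        # sign change: event inside step
            lo, hi = 0.0, h
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if g(x) * g(rk4_step(f, x, mid)) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return t + hi, rk4_step(f, x, hi)
        t, x = t + h, x_new
    return None
```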
Chaidee, S.; Pakawanwong, P.; Suppakitpaisarn, V.; Teerasawat, P.
2017-09-01
In this work, we devise an efficient method for the land-use optimization problem based on the Laguerre Voronoi diagram. Previous Voronoi diagram-based methods are more efficient and more suitable for interactive design than discrete optimization-based methods, but, in many cases, their outputs do not satisfy area constraints. To cope with this problem, we propose a force-directed graph drawing algorithm, which automatically allocates the generating points of the Voronoi diagram to appropriate positions. Then, we construct a Laguerre Voronoi diagram based on these generating points, use linear programs to adjust each cell, and reconstruct the diagram based on the adjustment. We apply the proposed method to the practical case study of Chiang Mai University's allocated land for a mixed-use complex. For this case study, compared to another Voronoi diagram-based method, we decrease the land allocation error by 62.557%. Although our computation time is larger than that of the previous Voronoi diagram-based method, it is still suitable for interactive design.
A novel hybrid particle swarm optimization for economic dispatch with valve-point loading effects
Energy Technology Data Exchange (ETDEWEB)
Niknam, Taher, E-mail: niknam@sutech.ac.i [Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, P.O. 71555-313 (Iran, Islamic Republic of); Mojarrad, Hasan Doagou, E-mail: hasan_doagou@yahoo.co [Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, P.O. 71555-313 (Iran, Islamic Republic of); Meymand, Hamed Zeinoddini, E-mail: h.zeinaddini@gmail.co [Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, P.O. 71555-313 (Iran, Islamic Republic of)
2011-04-15
Economic dispatch (ED) is one of the important problems in the operation and management of electric power systems, and it is formulated as an optimization problem. Modern heuristic stochastic optimization techniques appear to be efficient in solving the ED problem without any restriction, because of their ability to seek the global optimal solution. One such algorithm is particle swarm optimization (PSO). In the PSO algorithm, particles change position to get close to the best position and find the global minimum point. Differential evolution (DE) is likewise a robust stochastic method for solving non-linear and non-convex optimization problems. However, the fast convergence of DE degrades its performance and reduces its search capability, which leads to a higher probability of obtaining a local optimum. In order to overcome this drawback, a hybrid method is presented to solve the ED problem with valve-point loading effects by integrating variable DE with fuzzy adaptive PSO, called FAPSO-VDE. DE is the main optimizer, and PSO is used to maintain population diversity and prevent convergence to misleading local optima for every improvement in the solution of the DE run. The parameters of the proposed hybrid algorithm, such as the inertia weight and the mutation and crossover factors, are adaptively adjusted. The feasibility and effectiveness of the proposed hybrid algorithm are demonstrated for two case studies, and the results are compared with those of other methods. It is shown that FAPSO-VDE has high solution quality, superior convergence characteristics, and shorter computation time.
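The valve-point cost model and a bare-bones PSO can be sketched as follows (an illustration only: a single generator, no power-balance constraint, plain PSO rather than the paper's FAPSO-VDE hybrid, and made-up coefficient values):

```python
import math
import random

def valve_point_cost(p, a=0.0, b=2.0, c=0.01, e=100.0, f=0.08, pmin=100.0):
    # Valve-point fuel-cost model: quadratic cost plus a rectified sinusoid
    # (the ripple introduced by steam-valve openings). Coefficients invented.
    return a + b * p + c * p * p + abs(e * math.sin(f * (pmin - p)))

def pso(cost, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    # Bare-bones 1D particle swarm: inertia plus pulls toward each particle's
    # personal best and the swarm's global best, positions clipped to [lo, hi].
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    pcost = [cost(x) for x in xs]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g], pcost[g]
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i] + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            ci = cost(xs[i])
            if ci < pcost[i]:
                pbest[i], pcost[i] = xs[i], ci
                if ci < gcost:
                    gbest, gcost = xs[i], ci
    return gbest, gcost
```

The rectified sinusoid makes the cost multimodal, which is why gradient-based methods stall on valve-point ED and population-based hybrids like the paper's are attractive.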
Bilevel Optimization for Scene Segmentation of LiDAR Point Cloud
Directory of Open Access Journals (Sweden)
LI Minglei
2018-02-01
Full Text Available The segmentation of point clouds obtained by light detection and ranging (LiDAR) systems is a critical step for many tasks, such as data organization, reconstruction, and information extraction. In this paper, we propose a bilevel progressive optimization algorithm based on local differentiability. First, we define the topological relation and distance metric of points in the framework of Riemannian geometry; at the point-based level, the k-means method generates over-segmentation results, e.g., supervoxels. These voxels are then formulated as nodes which constitute a minimal spanning tree. High-level features are extracted from the voxel structures, and a graph-based optimization method is designed to yield the final adaptive segmentation results. Experiments on real data demonstrate that our method is efficient and superior to state-of-the-art methods.
Photovoltaic System with Smart Tracking of the Optimal Working Point
Directory of Open Access Journals (Sweden)
PATARAU, T.
2010-08-01
Full Text Available A photovoltaic (PV) system, based on a Maximum Power Point Tracking (MPPT) controller that extracts the maximum possible output power from the solar panel, is described. High output efficiency of a PV energy system can be achieved only if the system working point is brought near the maximum power point (MPP). The proposed system, making use of several MPPT control algorithms (Perturb and Observe, Incremental Conductance, Fuzzy Logic), demonstrates good tracking of the optimal working point in simulations as well as in real experiments.
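Of the algorithms listed, Perturb and Observe is the simplest to sketch (toy version with a made-up concave power-voltage curve; a real panel's curve and the perturbation step dv would come from the hardware):

```python
def perturb_and_observe(power, v0, dv=0.2, steps=200):
    # Classic P&O hill climbing: keep perturbing the operating voltage in the
    # same direction while the measured power rises; reverse on a drop.
    # The working point ends up oscillating within ~dv of the MPP.
    v = v0
    p_prev = power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = power(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v
```

The steady-state oscillation around the MPP is the known drawback of P&O that the incremental-conductance and fuzzy variants mentioned in the abstract try to reduce.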
A primal-dual interior point method for large-scale free material optimization
DEFF Research Database (Denmark)
Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias
2015-01-01
Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting optimization problem is a nonlinear semidefinite program with many small matrix inequalities, for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large-scale problems. The number of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set...
Definition of distance for nonlinear time series analysis of marked point process data
Energy Technology Data Exchange (ETDEWEB)
Iwayama, Koji, E-mail: koji@sat.t.u-tokyo.ac.jp [Research Institute for Food and Agriculture, Ryukoku University, 1-5 Yokotani, Seta Oe-cho, Otsu-Shi, Shiga 520-2194 (Japan); Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)
2017-01-30
Marked point process data are time series of discrete events accompanied by some values, such as economic trades, earthquakes, and lightning strikes. A distance for marked point process data allows us to apply nonlinear time series analysis to such data. We propose a distance for marked point process data which can be calculated much faster than the existing distance when the number of marks is small. Furthermore, under some assumptions, the Kullback–Leibler divergences between posterior distributions for neighbors defined by this distance are small. We performed numerical simulations showing that analysis based on the proposed distance is effective. - Highlights: • A new distance for marked point process data is proposed. • The distance can be computed fast enough for a small number of marks. • A method to optimize the parameter values of the distance is also proposed. • Numerical simulations indicate that the analysis based on the distance is effective.
Engineering to Control Noise, Loading, and Optimal Operating Points
International Nuclear Information System (INIS)
Mitchell R. Swartz
2000-01-01
Successful engineering of low-energy nuclear systems requires control of noise, loading, and optimum operating point (OOP) manifolds. The latter result from the biphasic system response of low-energy nuclear reaction (LENR)/cold fusion systems, and their ash production rate, to input electrical power. Knowledge of the optimal operating point manifold can improve the reproducibility and efficacy of these systems in several ways. Improved control of noise, loading, and peak production rates is available through the study, and use, of OOP manifolds. Engineering of systems toward the OOP-manifold drive-point peak may, with inclusion of geometric factors, permit more accurate uniform determinations of the calibrated activity of these materials/systems
Energy Technology Data Exchange (ETDEWEB)
Cheng, Gang [Philadelphia VA Medical Center, Department of Radiology, Philadelphia, PA (United States); Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA (United States); Torigian, Drew A.; Alavi, Abass [Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA (United States); Zhuang, Hongming [Children's Hospital of Philadelphia, Department of Radiology, Philadelphia, PA (United States)
2013-05-15
FDG PET and PET/CT are now widely used in oncological imaging for tumor characterization, staging, restaging, and response evaluation. However, numerous benign etiologies may cause increased FDG uptake indistinguishable from that of malignancy. Multiple studies have shown that dual time-point imaging (DTPI) of FDG PET may be helpful in differentiating malignancy from benign processes. However, exceptions exist, and some studies have demonstrated significant overlap of FDG uptake patterns between benign and malignant lesions on delayed time-point images. In this review, we summarize our experience and opinions on the value of DTPI and delayed time-point imaging in oncology, with a review of the relevant literature. We believe that the major value of DTPI and delayed time-point imaging is the increased sensitivity due to continued clearance of background activity and continued FDG accumulation in malignant lesions, if the same diagnostic criteria (as in the initial standard single time-point imaging) are used. The specificity of DTPI and delayed time-point imaging depends on multiple factors, including the prevalence of malignancies, the patient population, and the cut-off values (either SUV or retention index) used to define a malignancy. Thus, DTPI and delayed time-point imaging would be more useful if performed for evaluation of lesions in regions with significant background activity clearance over time (such as the liver, the spleen, the mediastinum), and if used in the evaluation of the extent of tumor involvement rather than in the characterization of the nature of any specific lesion. Acute infectious and non-infectious inflammatory lesions remain the major culprit for diminished diagnostic performance of these approaches (especially in tuberculosis-endemic regions). Tumor heterogeneity may also contribute to inconsistent performance of DTPI. The authors believe that selective use of DTPI and delayed time-point imaging will improve diagnostic accuracy and
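The retention-index cut-off mentioned above is usually computed from the two time points as follows (standard definition; the threshold used to call a lesion malignant is institution- and organ-dependent):

```python
def retention_index(suv_early, suv_delayed):
    # Dual-time-point retention index (%):
    #   RI = 100 * (SUV_delayed - SUV_early) / SUV_early
    # Malignant lesions typically show rising uptake (positive RI), while
    # background activity clears over time; cut-offs vary by institution.
    return 100.0 * (suv_delayed - suv_early) / suv_early
```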
Energy Technology Data Exchange (ETDEWEB)
Ji, Aimin; Yin, Xu; Yuan, Minghai [Hohai University, Changzhou (China)
2015-09-15
There are two problems in collaborative optimization (CO): (1) local optima arising from the selection of an inappropriate initial point, and (2) low efficiency and accuracy rooted in inappropriate relaxation factors. To solve these problems, we first use Latin hypercube design (LHD) to determine an initial point for the optimization, and then use non-linear programming by quadratic Lagrangian (NLPQL) to search for the global solution. The effectiveness of the initial point selection strategy is verified on three benchmark functions of various dimensions and complexities. We then propose the adaptive relaxation collaborative optimization (ARCO) algorithm to resolve the inconsistency between the system level and the discipline level; in this method, the relaxation factors are determined according to the three separate stages of CO. The performance of the ARCO algorithm is compared with the standard collaborative algorithm and the constant relaxation collaborative algorithm on a typical numerical example, which indicates that the ARCO algorithm is more efficient and accurate. Finally, we propose a hybrid collaborative optimization (HCO) approach, which integrates the initial point selection strategy with the ARCO algorithm. The results show that HCO can achieve the global optimal solution without a given initial value, and it also has advantages in convergence, accuracy, and robustness. Therefore, the proposed HCO approach can solve CO problems with applications to the spindle and the speed reducer.
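The LHD initial-point idea can be sketched as follows (a minimal Latin hypercube sampler over the unit cube; in the paper the samples would be scaled to the design-variable bounds and the best-scoring sample used as the NLPQL start point):

```python
import random

def latin_hypercube(n, dims, rng=None):
    # n samples in [0,1]^dims: each axis is split into n equal strata and
    # every stratum is used exactly once (one random permutation per axis),
    # so the samples cover the space far more evenly than plain random draws.
    rng = rng or random.Random()
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(k + rng.random()) / n for k in perm])
    return list(zip(*cols))
```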
International Nuclear Information System (INIS)
Huang, Yanjun; Khajepour, Amir; Ding, Haitao; Bagheri, Farshid; Bahrami, Majid
2017-01-01
Highlights: • A novel two-layer energy-saving controller for automotive A/C-R systems is developed. • A set-point optimizer in the outer loop is designed based on the steady-state model. • A sliding mode controller in the inner loop is built. • Extensive experimental studies show that about 9% of the energy can be saved by this controller. - Abstract: This paper presents an energy-saving controller for automotive air-conditioning/refrigeration (A/C-R) systems. With their extensive application in homes, industry, and vehicles, A/C-R systems are consuming considerable amounts of energy. The proposed controller consists of two different time-scale layers. The outer or slow time-scale layer, called a set-point optimizer, is used to find the set points related to energy efficiency by using the steady-state model, whereas the inner or fast time-scale layer is used to track the obtained set points. In the inner loop, thanks to its robustness, a sliding mode controller (SMC) is utilized to track the set point of the cargo temperature. The currently used on/off controller is presented and employed as a basis for comparison to the proposed controller. More importantly, real experimental results under several disturbed scenarios are analysed to demonstrate how the proposed controller can improve performance while reducing energy consumption by 9% compared with the on/off controller. The controller is suitable for any type of A/C-R system even though it is applied to an automotive A/C-R system in this paper.
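The robustness of the inner-loop sliding mode controller can be illustrated on a generic first-order plant (a hedged sketch: the plant model, gains and disturbance below are invented for demonstration and are not the paper's A/C-R dynamics):

```python
import math

def sign(v):
    return (v > 0) - (v < 0)

def simulate_smc(x0, x_ref, steps=2000, dt=0.01, a=-0.5, b=1.0, k=3.0):
    """Track x_ref for the toy plant dx/dt = a*x + b*u + d(t) with a
    sliding mode law: equivalent control plus a switching term."""
    x = x0
    for i in range(steps):
        d = 0.5 * math.sin(0.1 * i * dt)   # bounded unknown disturbance
        s = x - x_ref                       # sliding surface
        u = (-a * x - k * sign(s)) / b      # switching gain k > |d| bound
        x += (a * x + b * u + d) * dt
    return x

final = simulate_smc(x0=10.0, x_ref=4.0)
```

The design choice that gives SMC its robustness is visible here: the switching gain `k` only needs to dominate the disturbance bound, not model it, so the state is driven onto the surface `s = 0` and held there despite the unmodeled `d(t)`.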
Point-based warping with optimized weighting factors of displacement vectors
Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark-generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap-index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
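A distance-weighted exponential warp of the kind described above might be sketched as follows in 2D (one plausible reading of the scheme, with a normalized weighted sum assumed; the kernel shape and normalization are illustrative, not the authors' exact formulation):

```python
import math

def warp_point(p, landmarks, displacements, weights, alpha=1.0):
    """Displace point p by a distance-weighted exponential sum of the
    landmark displacement vectors; weights[i] is the landmark-specific
    factor that the evolution strategy would optimize."""
    wx = wy = wsum = 0.0
    for (lx, ly), (dx, dy), w in zip(landmarks, displacements, weights):
        dist = math.hypot(p[0] - lx, p[1] - ly)
        g = w * math.exp(-alpha * dist)   # influence decays with distance
        wx += g * dx
        wy += g * dy
        wsum += g
    return (p[0] + wx / wsum, p[1] + wy / wsum)
```

A point sitting on a landmark follows that landmark's displacement exactly, while a point between landmarks receives a blend; tuning the per-landmark weights changes that blend locally, which is the quantity the evolution strategy optimizes.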
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
Soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time-consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
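Under the Tyler and Wheatcraft (1990) fractal form theta(psi) = theta_s * (psi_a/psi)**(3-D), the two parameters can in fact be solved directly from two measured points (a sketch in the spirit of the two-point idea; the closed-form solution below is an illustration, not the authors' optimization code, and assumes both points lie beyond the air entry value):

```python
import math

def fit_tyler_wheatcraft(p1, p2, theta_s):
    """Solve fractal dimension D and air-entry value psi_a of
    theta = theta_s * (psi_a / psi) ** (3 - D)
    from two measured (psi, theta) points on the retention curve."""
    (psi1, th1), (psi2, th2) = p1, p2
    exponent = math.log(th1 / th2) / math.log(psi2 / psi1)  # equals 3 - D
    D = 3.0 - exponent
    psi_a = psi1 * (th1 / theta_s) ** (1.0 / exponent)
    return D, psi_a

def theta(psi, D, psi_a, theta_s):
    """Evaluate the fitted retention curve at matric potential psi."""
    return theta_s if psi <= psi_a else theta_s * (psi_a / psi) ** (3.0 - D)
```

Taking logs turns the power law into a straight line, so two points determine its slope (hence D) and intercept (hence psi_a) exactly; an optimization method becomes useful once measurement noise makes an exact two-point solution unreliable.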
DEFF Research Database (Denmark)
Houshmand, Sina; Salavati, Ali; Antonsen Segtnan, Eivind
2016-01-01
The techniques of dual-time-point imaging (DTPI) and delayed-time-point imaging, which are mostly used to distinguish between inflammatory and malignant diseases, have increased the specificity of fluorodeoxyglucose (FDG)-PET for the diagnosis and prognosis of certain diseases. A gradually incr...
A trust region interior point algorithm for optimal power flow problems
Energy Technology Data Exchange (ETDEWEB)
Wang Min [Hefei University of Technology (China). Dept. of Electrical Engineering and Automation; Liu Shengsong [Jiangsu Electric Power Dispatching and Telecommunication Company (China). Dept. of Automation
2005-05-01
This paper presents a new algorithm that uses the trust region interior point method to solve nonlinear optimal power flow (OPF) problems. The OPF problem is solved by a primal/dual interior point method with multiple centrality corrections as a sequence of linearized trust region sub-problems. It is the trust region that controls the linear step size and ensures the validity of the linear model. The convergence of the algorithm is improved through the modification of the trust region sub-problem. Numerical results for standard IEEE systems and two realistic networks ranging in size from 14 to 662 buses are presented. The computational results show that the proposed algorithm is very effective for optimal power flow applications and compares favorably with the successive linear programming (SLP) method. A comparison with the predictor/corrector primal/dual interior point (PCPDIP) method is also made to demonstrate the superiority of the multiple centrality corrections technique. (author)
Grimm, Alexandra; Meyer, Heiko; Nickel, Marcel D; Nittka, Mathias; Raithel, Esther; Chaudry, Oliver; Friedberger, Andreas; Uder, Michael; Kemmler, Wolfgang; Quick, Harald H; Engelke, Klaus
2018-06-01
The purpose of this study is to evaluate and compare 2-point (2pt), 3-point (3pt), and 6-point (6pt) Dixon magnetic resonance imaging (MRI) sequences with flexible echo times (TE) to measure proton density fat fraction (PDFF) within muscles. Two subject groups were recruited (G1: 23 young and healthy men, 31 ± 6 years; G2: 50 elderly men, sarcopenic, 77 ± 5 years). A 3-T MRI system was used to perform Dixon imaging on the left thigh. PDFF was measured with six Dixon prototype sequences: 2pt, 3pt, and 6pt sequences once with optimal TEs (in- and opposed-phase echo times), lower resolution, and higher bandwidth (optTE sequences) and once with higher image resolution (highRes sequences) and shortest possible TE, respectively. Intra-fascia PDFF content was determined. To evaluate the comparability among the sequences, Bland-Altman analysis was performed. The highRes 6pt Dixon sequences served as reference as a high correlation of this sequence to magnetic resonance spectroscopy has been shown before. The PDFF difference between the highRes 6pt Dixon sequence and the optTE 6pt, both 3pt, and the optTE 2pt was low (between 2.2% and 4.4%), however, not to the highRes 2pt Dixon sequence (33%). For the optTE sequences, difference decreased with the number of echoes used. In conclusion, for Dixon sequences with more than two echoes, the fat fraction measurement was reliable with arbitrary echo times, while for 2pt Dixon sequences, it was reliable with dedicated in- and opposed-phase echo timing. Copyright © 2018 Elsevier B.V. All rights reserved.
Detecting changes in real-time data: a user's guide to optimal detection.
Johnson, P; Moriarty, J; Peskir, G
2017-08-13
The real-time detection of changes in a noisily observed signal is an important problem in applied science and engineering. The study of parametric optimal detection theory began in the 1930s, motivated by applications in production and defence. Today this theory, which aims to minimize a given measure of detection delay under accuracy constraints, finds applications in domains including radar, sonar, seismic activity, global positioning, psychological testing, quality control, communications and power systems engineering. This paper reviews developments in optimal detection theory and sequential analysis, including sequential hypothesis testing and change-point detection, in both Bayesian and classical (non-Bayesian) settings. For clarity of exposition, we work in discrete time and provide a brief discussion of the continuous time setting, including recent developments using stochastic calculus. Different measures of detection delay are presented, together with the corresponding optimal solutions. We emphasize the important role of the signal-to-noise ratio and discuss both the underlying assumptions and some typical applications for each formulation. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
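The change-point formulation reviewed above can be made concrete with Page's classical CUSUM detector, one of the standard sequential schemes in this literature (a minimal sketch for a unit-variance Gaussian mean shift; the means and threshold are illustrative):

```python
def cusum(samples, mu0, mu1, threshold):
    """Page's CUSUM: return the first index at which the cumulative
    log-likelihood ratio for a mean shift mu0 -> mu1 (unit-variance
    Gaussian observations) exceeds the threshold, else None."""
    s = 0.0
    for n, x in enumerate(samples):
        llr = (mu1 - mu0) * (x - 0.5 * (mu0 + mu1))  # per-sample LLR
        s = max(0.0, s + llr)                         # reset at zero
        if s > threshold:
            return n
    return None
```

The threshold encodes the accuracy constraint: raising it lowers the false-alarm rate but lengthens the detection delay, which is exactly the trade-off that optimal detection theory formalizes.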
Applications of an alternative formulation for one-layer real time optimization
Directory of Open Access Journals (Sweden)
Schiavon Júnior A.L.
2000-01-01
This paper presents two applications of an alternative formulation of a one-layer real-time structure for control and optimization. The new formulation arose from the predictive controller QDMC (Quadratic Dynamic Matrix Control), a type of Model Predictive Control (MPC). At each sampling time, the values of the process outputs are fed into the optimization-control structure, which supplies the new values of the manipulated variables already accounting for the best process conditions. The optimization variables are both set-point changes and control actions. The future stationary outputs and the future stationary control actions are formulated differently from the conventional one-layer structure and are calculated from the inverse gain matrix of the process. This alternative formulation generates a convex problem, which can be solved by less sophisticated optimization algorithms. Linear and nonlinear economic objective functions were considered. The proposed approach was applied to two linear models, one SISO (single-input/single-output) and the other MIMO (multiple-input/multiple-output). The results showed an excellent performance.
Energy Technology Data Exchange (ETDEWEB)
Syafaruddin; Hiyama, Takashi [Department of Computer Science and Electrical Engineering of Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Karatepe, Engin [Department of Electrical and Electronics Engineering of Ege University, 35100 Bornova-Izmir (Turkey)
2009-12-15
It is crucial to improve the photovoltaic (PV) system efficiency and the reliability of PV generation control systems. There are two ways to increase the efficiency of a PV power generation system. The first is to develop materials offering high conversion efficiency at low cost. The second is to operate PV systems optimally. However, the PV system can be optimally operated only at a specific output voltage and its output power fluctuates under intermittent weather conditions. Moreover, it is very difficult to test the performance of a maximum-power point tracking (MPPT) controller under the same weather condition during the development process, and field testing is costly and time consuming. This paper presents a novel real-time simulation technique for PV generation systems using a dSPACE real-time interface system. The proposed system includes an Artificial Neural Network (ANN) and a fuzzy logic controller scheme using polar information. This type of fuzzy logic rule is implemented for the first time to operate the PV module at its optimum operating point. The ANN is utilized to determine the optimum operating voltage for monocrystalline silicon, thin-film cadmium telluride and triple-junction amorphous silicon solar cells. Verification of the availability and stability of the proposed system through the real-time simulator shows that the system responds accurately for different scenarios and different solar cell technologies. (author)
Primal-Dual Interior Point Multigrid Method for Topology Optimization
Czech Academy of Sciences Publication Activity Database
Kočvara, Michal; Mohammed, S.
2016-01-01
Vol. 38, No. 5 (2016), B685-B709. ISSN 1064-8275. Grant (other): European Commission EC(XE) 313781. Institutional support: RVO:67985556. Keywords: topology optimization; multigrid methods; interior point methods. Subject RIV: BA - General Mathematics. Impact factor: 2.195 (2016). http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0462418.pdf
Optimal harvesting of fish stocks under a time-varying discount rate.
Duncan, Stephen; Hepburn, Cameron; Papachristodoulou, Antonis
2011-01-21
Optimal control theory has been extensively used to determine the optimal harvesting policy for renewable resources such as fish stocks. In such optimisations, it is common to maximise the discounted utility of harvesting over time, employing a constant time discount rate. However, evidence from human and animal behaviour suggests that we have evolved to employ discount rates which fall over time, often referred to as "hyperbolic discounting". This increases the weight on benefits in the distant future, which may appear to provide greater protection of resources for future generations, but also creates challenges of time-inconsistent plans. This paper examines harvesting plans when the discount rate declines over time. With a declining discount rate, the planner reduces stock levels in the early stages (when the discount rate is high) and intends to compensate by allowing the stock level to recover later (when the discount rate will be lower). Such a plan may be feasible and optimal, provided that the planner remains committed throughout. However, in practice there is a danger that such plans will be re-optimized and adjusted in the future. It is shown that repeatedly restarting the optimization can drive the stock level down to the point where the optimal policy is to harvest the stock to extinction. In short, a key contribution of this paper is to identify the surprising severity of the consequences flowing from incorporating a rather trivial, and widely prevalent, "non-rational" aspect of human behaviour into renewable resource management models. These ideas are related to the collapse of the Peruvian anchovy fishery in the 1970s. Copyright © 2010 Elsevier Ltd. All rights reserved.
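The contrast between a constant discount rate and a declining ("hyperbolic") one is easy to make concrete (illustrative rates only; the specific functional forms below are common textbook choices, not the paper's calibration):

```python
def exponential_weight(t, rho=0.05):
    """Constant-rate discounting: the implied one-period rate never changes."""
    return (1.0 + rho) ** (-t)

def hyperbolic_weight(t, alpha=1.0):
    """Hyperbolic discounting: the implied one-period rate declines with t."""
    return 1.0 / (1.0 + alpha * t)

def implied_rate(w, t):
    """One-period discount rate between t and t+1 implied by weight function w."""
    return w(t) / w(t + 1) - 1.0
```

The declining implied rate is what makes hyperbolic plans time-inconsistent: the planner at time 0 applies a high rate to the near future, but once that future arrives a re-optimizing planner applies a high rate to it again, which is the restart effect the paper shows can drive the stock to extinction.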
Fixed-Point Configurable Hardware Components
Directory of Open Access Journals (Sweden)
Rocher Romuald
2006-01-01
To reduce the gap between the VLSI technology capability and the designer productivity, design reuse based on IP (intellectual properties) is commonly used. In terms of arithmetic accuracy, the generated architecture can generally only be configured through the input and output word lengths. In this paper, a new method to optimize fixed-point arithmetic IP is proposed. The architecture cost is minimized under accuracy constraints defined by the user. Our approach explores both the fixed-point search space and the algorithm-level search space to select the optimized structure and fixed-point specification. To significantly reduce the optimization and design times, analytical models are used for the fixed-point optimization process.
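The cost-versus-accuracy trade-off at the heart of fixed-point optimization can be sketched with a toy model (an illustration only: cost is proxied by the fractional word length of a single rounding stage, whereas the real method uses analytical noise models over a full architecture):

```python
def quantize(value, frac_bits):
    """Round to the nearest multiple of 2**-frac_bits (fixed-point grid)."""
    step = 2.0 ** -frac_bits
    return round(value / step) * step

def min_frac_bits(signal, max_error):
    """Smallest fractional word length meeting a user-defined accuracy
    bound -- the shape of the cost-minimization-under-constraint problem."""
    for bits in range(1, 32):
        if all(abs(quantize(x, bits) - x) <= max_error for x in signal):
            return bits
    return None
```

Each extra fractional bit halves the worst-case rounding error but raises the hardware cost, which is why the optimization stops at the smallest word length that still satisfies the accuracy constraint.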
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
Time-optimal control with finite bandwidth
Hirose, M.; Cappellaro, P.
2018-04-01
Time-optimal control theory provides recipes to achieve quantum operations with high fidelity and speed, as required in quantum technologies such as quantum sensing and computation. While technical advances have achieved the ultrastrong driving regime in many physical systems, these capabilities have yet to be fully exploited for the precise control of quantum systems, as other limitations, such as the generation of higher harmonics or the finite response time of the control apparatus, prevent the implementation of theoretical time-optimal control. Here we present a method to achieve time-optimal control of qubit systems that can take advantage of fast driving beyond the rotating wave approximation. We exploit results from time-optimal control theory to design driving protocols that can be implemented with realistic, finite-bandwidth control fields, and we find a relationship between bandwidth limitations and achievable control fidelity.
Time optimal paths for high speed maneuvering
Energy Technology Data Exchange (ETDEWEB)
Reister, D.B.; Lenhart, S.M.
1993-01-01
Recent theoretical results have completely solved the problem of determining the minimum length path for a vehicle with a minimum turning radius moving from an initial configuration to a final configuration. Time optimal paths for a constant speed vehicle are a subset of the minimum length paths. This paper uses the Pontryagin maximum principle to find time optimal paths for a constant speed vehicle. The time optimal paths consist of sequences of arcs of circles and straight lines. The maximum principle introduces concepts (dual variables, bang-bang solutions, singular solutions, and transversality conditions) that provide important insight into the nature of the time optimal paths. We explore the properties of the optimal paths and present some experimental results for a mobile robot following an optimal path.
Mehra, R. K.; Washburn, R. B.; Sajan, S.; Carroll, J. V.
1979-01-01
A hierarchical real-time algorithm for optimal three-dimensional control of aircraft is described. Systematic methods are developed for real-time computation of nonlinear feedback controls by means of singular perturbation theory. The results are applied to a six-state, three-control-variable, point mass model of an F-4 aircraft. Nonlinear feedback laws are presented for computing the optimal control of throttle, bank angle, and angle of attack. Real-time capability is assessed on a TI 9900 microcomputer. The breakdown of the singular perturbation approximation near the terminal point is examined. Continuation methods are examined to obtain exact optimal trajectories starting from the singular perturbation solutions.
DEFF Research Database (Denmark)
Gong, Hui; Olsen, Flemming Ove
CO2 lasers are increasingly being utilized for quality welding in production. Considering the high cost of equipment, the start-up time and the set-up time should be minimized. Ideally the parameters should be set up and optimized more or less automatically. In this paper a control system is designed and built to automatically optimize the focal point position, one of the most important parameters in CO2 laser welding, in order to perform a desired deep/full penetration welding. The control system mainly consists of a multi-axis motion controller - PMAC, a light sensor - Photo Diode, a data...
Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array
Directory of Open Access Journals (Sweden)
Lihua Wang
2014-01-01
In order to deliver the maximum available power to the load under varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been widely used in PV systems. Among all MPPT schemes, the chaos method has been one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the proposed chaos search, an improved logistic mapping with better ergodicity is used as the first carrier process. After finding the current optimal solution with a certain guarantee, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method can track changes quickly and accurately and also yields better optimization results. The proposed method provides a new efficient way to track the maximum power point of a PV array.
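The first-stage chaotic carrier can be sketched with the plain logistic map (a minimal illustration; the toy power curve, the mapping onto the voltage interval, and the iteration count are assumptions, and the paper's improved mapping and second-stage power-function carrier are not reproduced):

```python
def logistic_chaos_search(f, lo, hi, n_iter=1000, x0=0.3):
    """First-stage chaotic search: drive a candidate through (lo, hi) with
    the logistic map x <- 4x(1-x), keeping the best point seen so far."""
    x = x0
    best_p, best_val = None, float("inf")
    for _ in range(n_iter):
        x = 4.0 * x * (1.0 - x)        # chaotic carrier in (0, 1)
        p = lo + x * (hi - lo)          # map carrier onto the search interval
        v = f(p)
        if v < best_val:
            best_p, best_val = p, v
    return best_p, best_val

# Toy "power curve" with a single maximum-power point near v = 17 volts.
power = lambda v: -((v - 17.0) ** 2) + 120.0
v_best, _ = logistic_chaos_search(lambda v: -power(v), 0.0, 22.0)
```

Because the chaotic orbit visits the whole interval without getting trapped, this stage locates the neighbourhood of the optimum; a second, contracted carrier (as in the paper) would then refine the estimate within that neighbourhood.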
Time-Optimal Real-Time Test Case Generation using UPPAAL
DEFF Research Database (Denmark)
Hessel, Anders; Larsen, Kim Guldstrand; Nielsen, Brian
2004-01-01
Testing is the primary software validation technique used by industry today, but remains ad hoc, error prone, and very expensive. A promising improvement is to automatically generate test cases from formal models of the system under test. We demonstrate how to automatically generate real-time conformance test cases from timed automata specifications. Specifically, we demonstrate how to efficiently generate real-time test cases with optimal execution time, i.e. test cases that are the fastest possible to execute. Our technique allows time-optimal test cases to be generated using manually formulated test purposes or generated automatically from various coverage criteria of the model.
Performance of a Nonlinear Real-Time Optimal Control System for HEVs/PHEVs during Car Following
Directory of Open Access Journals (Sweden)
Kaijiang Yu
2014-01-01
This paper presents a real-time optimal control approach for the energy management problem of hybrid electric vehicles (HEVs) and plug-in hybrid electric vehicles (PHEVs) with slope information during car following. The new features of this study are as follows. First, the proposed method can optimize the engine operating points and the driving profile simultaneously. Second, the proposed method gives freedom in the vehicle spacing between the preceding vehicle and the host vehicle. Third, using the HEV/PHEV property, the desired battery state of charge is designed according to the road slopes for better recuperation of free braking energy. Fourth, all of the vehicle operating modes (engine charge, electric vehicle, motor assist, electric continuously variable transmission, and regenerative braking) can be realized using the proposed real-time optimal control approach. Computer simulation results compare the nonlinear real-time optimal control approach with the ADVISOR rule-based approach. The conclusion is that the nonlinear real-time optimal control approach is effective for the energy management problem of the HEV/PHEV system during car following.
Optimal configuration of spatial points in the reactor cell
International Nuclear Information System (INIS)
Bosevski, T.
1968-01-01
An optimal configuration of spatial points was chosen with respect to the total number needed for integration of reactions in the reactor cell. The previously developed code VESTERN was used for numerical verification of the method on a standard reactor cell. The code applies the collision probability method for calculating the neutron flux distribution. It is shown that the total number of spatial points is half the respective number of spatial zones needed for determination of the number of reactions in the cell with the preset precision. This result shows the direction for further condensing of the procedure for calculating the space-energy distribution of the neutron flux in a reactor cell [sr
Distributed Algorithms for Time Optimal Reachability Analysis
DEFF Research Database (Denmark)
Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand
2016-01-01
Time optimal reachability analysis is a novel model-based technique for solving scheduling and planning problems. After modeling them as reachability problems using timed automata, a real-time model checker can compute the fastest trace to the goal states, which constitutes a time optimal schedule. We propose distributed computing to accelerate time optimal reachability analysis. We develop five distributed state exploration algorithms and implement them in UPPAAL, enabling it to exploit the compute resources of a dedicated model-checking cluster. We experimentally evaluate the implemented algorithms with four models in terms of their ability to compute near- or proven-optimal solutions, their scalability, time and memory consumption, and communication overhead. Our results show that the distributed algorithms work much faster than sequential algorithms and have good speedup in general.
Optimal International Asset Allocation with Time-varying Risk
Flavin, Thomas; Wickens, M.R.
2006-01-01
This paper examines the optimal allocation each period of an internationally diversified portfolio from the different points of view of a UK and a US investor. We find that investor location affects optimal asset allocation. The presence of exchange rate risk causes the markets to appear not fully integrated and creates a preference for home assets. Domestic equity is the dominant asset in the optimal portfolio for both investors, but the US investor bears less risk than the UK...
International Nuclear Information System (INIS)
Yong Choi, Jae; Lee, Minkyung; Jeon, Tae Joo; Choi, Soo-Hee; Choi, Ye Ji; Lee, Yu Kyung; Kim, Jae-Jin; Ryu, Young Hoon
2014-01-01
The purpose of this research is to find the optimal acquisition time point of [18F]FCWAY PET for the assessment of serotonin 1A receptor (5-HT1A) density. To achieve this goal, we examined the specific-to-nonspecific ratios in various brain regions. The cerebellum has very few 5-HT1A receptors, so we set this region as the reference tissue. As a result, specific-to-nonspecific binding ratios in the frontal cortex, temporal cortex and hippocampus increased steadily up to 90 min after injection and remained stable at 120 min. In addition, the binding ratio at the late time point was significantly higher than that at the earlier time points. From these results, we recommend 90 min post-injection, rather than earlier time points, as the single time point for assessing [18F]FCWAY binding to 5-HT1A receptors. - Highlights: • For routine clinical studies, the PET protocol should be conducted at a single time point with a short imaging acquisition. • The specific-to-nonspecific ratios in various brain regions were calculated. • The optimal [18F]FCWAY PET acquisition time point was proposed.
Time-optimal feedback control for linear systems
International Nuclear Information System (INIS)
Mirica, S.
1976-01-01
The paper deals with the results of qualitative investigations of the time-optimal feedback control for linear systems with constant coefficients. In the first section, after some definitions and notations, two examples are given, and it is shown that even the time-optimal control problem for linear systems with constant coefficients, which looked "completely solved", requires a further qualitative investigation of the stability of optimal feedback control to "permanent perturbations". In the second section some basic results of the linear time-optimal control problem are reviewed. The third section deals with the definition of Boltyanskii's "regular synthesis" and its connection to Filippov's theory of differential equations with discontinuous right-hand sides. In the fourth section a theorem is proved concerning the stability to perturbations of time-optimal feedback control for linear systems with scalar control. In the last two sections it is proved that, if the matrix which defines the system has only real eigenvalues or is three-dimensional, the time-optimal feedback control defines a regular synthesis and is therefore stable to perturbations. (author)
Optimal bounds and extremal trajectories for time averages in dynamical systems
Tobasco, Ian; Goluskin, David; Doering, Charles
2017-11-01
For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.
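As a concrete illustration of the auxiliary-function idea described above, here is a minimal 1-D sketch (not the Lorenz computation from the abstract; the ODE, quantity of interest, and auxiliary function below are invented for the example). For xdot = f(x) = x - x^3 and Phi(x) = x^2, the auxiliary function V(x) = x^2/2 satisfies the pointwise inequality Phi + V'f = 2x^2 - x^4 <= 1 everywhere, so every long-time average of x^2 along a bounded trajectory is at most 1, without solving the ODE:

```python
import numpy as np

# Toy 1-D analogue of the auxiliary-function bound: for xdot = f(x) and
# quantity Phi(x), any V with Phi(x) + V'(x) f(x) <= U for all x certifies
# that long-time averages of Phi are <= U. Here V(x) = x**2 / 2, so V' = x.
f = lambda x: x - x**3
phi = lambda x: x**2

def pointwise_bound(n=100001, lo=-3.0, hi=3.0):
    """Numerically verify sup_x [Phi(x) + V'(x) f(x)] on a grid."""
    x = np.linspace(lo, hi, n)
    return float(np.max(phi(x) + x * f(x)))   # sup of 2x^2 - x^4 is 1 at x = +-1

def time_average(x0=0.5, dt=1e-3, T=200.0):
    """Long-time average of Phi along a forward-Euler trajectory."""
    steps = int(T / dt)
    x, acc = x0, 0.0
    for _ in range(steps):
        acc += phi(x) * dt
        x += f(x) * dt
    return acc / T

U = pointwise_bound()    # the a priori upper bound, ~1.0
avg = time_average()     # trajectory settles at x = 1, so the average -> 1
print(U, avg)
```

The bound is sharp here: the extremal trajectory sits at the fixed point x = 1, and the simulated average approaches the certified bound from below.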
Rui, MA; Fan, XIA; Fei, LING; Jiaxian, LI
2018-02-01
Real-time equilibrium reconstruction is crucially important for plasma shape control in the process of tokamak plasma discharge. However, as the reconstruction algorithm is computationally intensive, it is very difficult to improve its accuracy and reduce the computation time, and some optimizations need to be done. This article describes the three most important aspects of this optimization: (1) compiler optimization; (2) optimization of middle-scale matrix multiplication on the graphics processing unit (GPU) and an algorithm which can solve the block tri-diagonal linear system efficiently in parallel; (3) a new algorithm to locate the X and O points on the central processing unit (CPU). A static test proves the correctness and a dynamic test proves the feasibility of using the new code for real-time reconstruction with 129 × 129 grids; it can complete one iteration in around 575 μs for each equilibrium reconstruction. The plasma displacements from real-time equilibrium reconstruction are compared with the experimental measurements, and the calculated results are consistent with the measured ones, which can be used as a reference for the real-time control of HL-2A discharge.
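The tri-diagonal solve mentioned in point (2) can be sketched with the classical serial Thomas algorithm; this is a minimal scalar version for illustration only, whereas the paper's GPU code handles the harder block tri-diagonal case in parallel:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d in O(n) by the Thomas algorithm.

    a: sub-diagonal  (length n, a[0] unused)
    b: main diagonal (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    """
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# usage: 1-D Poisson-like system with the -1 / 2 / -1 stencil
n = 50
a = -np.ones(n); b = 2.0 * np.ones(n); c = -np.ones(n)
d = np.ones(n)
x = thomas_solve(a, b, c, d)
```

The serial recurrence is what makes the block variant hard to parallelize, which is why the paper devotes a dedicated parallel algorithm to it.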
Directory of Open Access Journals (Sweden)
Zhiqiang Yang
2016-05-01
Full Text Available Due to the dynamic process of maximum power point tracking (MPPT caused by turbulence and large rotor inertia, variable-speed wind turbines (VSWTs cannot maintain the optimal tip speed ratio (TSR from cut-in wind speed up to the rated speed. Therefore, in order to increase the total captured wind energy, the existing aerodynamic design for VSWT blades, which only focuses on performance improvement at a single TSR, needs to be improved to a multi-point design. In this paper, based on a closed-loop system of VSWTs, including turbulent wind, rotor, drive train and MPPT controller, the distribution of operational TSR and its description based on inflow wind energy are investigated. Moreover, a multi-point method considering the MPPT dynamic process for the aerodynamic optimization of VSWT blades is proposed. In the proposed method, the distribution of operational TSR is obtained through a dynamic simulation of the closed-loop system under a specific turbulent wind, and accordingly the multiple design TSRs and the corresponding weighting coefficients in the objective function are determined. Finally, using the blade of a National Renewable Energy Laboratory (NREL 1.5 MW wind turbine as the baseline, the proposed method is compared with the conventional single-point optimization method using the commercial software Bladed. Simulation results verify the effectiveness of the proposed method.
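The weighting idea in this abstract, scoring multiple design TSRs by how often the turbine actually operates near them, can be sketched as follows. The Cp curve and the operational-TSR sample below are invented stand-ins; the paper obtains the TSR distribution from a closed-loop turbulence simulation:

```python
import numpy as np

def cp(tsr, tsr_opt=8.0, width=4.0, cp_max=0.48):
    """Hypothetical power-coefficient curve peaking at the optimal TSR."""
    return cp_max * np.exp(-((tsr - tsr_opt) / width) ** 2)

rng = np.random.default_rng(0)
tsr_samples = rng.normal(7.2, 1.0, 10000)   # stand-in for simulated operation

# Bin the operational TSRs; bin frequencies become the weighting coefficients.
edges = np.linspace(4, 12, 9)
counts, _ = np.histogram(tsr_samples, edges)
weights = counts / counts.sum()
design_tsrs = 0.5 * (edges[:-1] + edges[1:])

# Multi-point objective: energy-weighted Cp across the design TSRs,
# instead of Cp at a single design TSR.
objective = float(np.sum(weights * cp(design_tsrs)))
print(objective)
```

Maximizing this weighted sum over blade-shape parameters is the multi-point design step; the single-point method corresponds to putting all the weight on one TSR.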
Optimization of time characteristics in activation analysis
International Nuclear Information System (INIS)
Gurvich, L.G.; Umaraliev, A.T.
2006-01-01
Full text: The temporal-characteristics optimization methods developed to date for activation analysis aim at determining the optimal values of three important parameters - irradiation time, cooling time and measurement time. Previous works, especially [1-5], describe the activation analysis processes, obtain the optimal parameter values from the equations solved, and give computational results for these parameters for a number of elements. However, the equations presented in [2] were inaccurate, did not yield optimization parameters for single-element content calculations, and did not take into account the time dependence of the background. We therefore propose modified equations for determining the optimal temporal parameters, together with iterative processes for solving these equations. It is well known that the activity of the studied sample does not change significantly during measurement, i.e. the measurement time is much shorter than the half-life; thus the processes taking place can be described by the Poisson probability distribution and, in the general case, by the binomial distribution. The equations and iterative processes used in this research cover both probability distributions. As expected, the cooling-time iteration expressions obtained for the single-element case are similar for both distribution types, as the optimized time values turned out to be of the same order as the half-life values, whereas the cooling time, as we observed, depends on the ratio of the studied sample's peak value to the background peak and can be significantly larger than the half-life. This pattern is general and can be derived from the optimized time expressions, which is supported by the experimental data on short-lived isotopes [3,4]. For isotopes with half-lives of up to years, like cobalt-60, the cooling time values given in the above-mentioned works amount to months, which, apparently
Time-optimal control of reactor power
International Nuclear Information System (INIS)
Bernard, J.A.
1987-01-01
Control laws that permit adjustments in reactor power to be made in minimum time and without overshoot have been formulated and demonstrated. These control laws, which are derived from the standard and alternate dynamic period equations, are closed-form expressions of general applicability. They were deduced by noting that if a system is subject to one or more operating constraints, then the time-optimal response is to move the system along those constraints. Given that nuclear reactors are subject to limitations on the allowed reactor period, a time-optimal control law would step the period from infinity to the minimum allowed value, hold the period at that value for the duration of the transient, and then step the period back to infinity. The change in reactor power would therefore be accomplished in minimum time. The resulting control laws are superior to other forms of time-optimal control because they are general-purpose, closed-form expressions that are both mathematically tractable and readily implemented. Moreover, these laws include provisions for the use of feedback. The results of simulation studies and actual experiments on the 5 MWt MIT Research Reactor, in which these time-optimal control laws were used successfully to adjust the reactor power, are presented.
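The minimum-time argument above has a simple quantitative consequence: while the period is held at its minimum allowed value tau, power grows as P(t) = P0 * exp(t / tau), so the transient takes t = tau * ln(Pf / P0). A small sketch with illustrative numbers (not taken from the MIT reactor experiments):

```python
import math

def min_time_power_change(p0, pf, tau_min):
    """Minimum time to move reactor power from p0 to pf when the reactor
    period is held at its minimum allowed value tau_min, so that
    P(t) = p0 * exp(t / tau_min) along the constraint."""
    return tau_min * math.log(pf / p0)

# Illustrative numbers: raising power from 1 MW to 5 MW with a
# 10 s minimum allowed period.
t = min_time_power_change(1.0, 5.0, 10.0)
print(round(t, 3))   # 10 * ln(5) ~= 16.094 s
```

Any control law that keeps the period longer than tau_min at some point during the transient necessarily takes longer than this, which is the sense in which riding the constraint is time-optimal.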
Engineering applications of discrete-time optimal control
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui; Ravn, Hans V.
1990-01-01
Many problems of design and operation of engineering systems can be formulated as optimal control problems in which time has been discretized. This is true even if 'time' is not involved in the formulation of the problem, but rather another one-dimensional parameter. This paper gives a review of some well-known and new results in discrete-time optimal control methods applicable to practical problem solving within engineering. Emphasis is placed on dynamic programming, the classical maximum principle and generalized versions of the maximum principle for optimal control of discrete-time systems...
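The dynamic programming approach emphasized above can be illustrated with the standard textbook example of discrete-time optimal control: the finite-horizon linear-quadratic regulator, whose backward recursion (the Riccati equation) is exactly a dynamic-programming sweep from the terminal stage. This generic sketch is not from the paper:

```python
import numpy as np

def lqr_backward(A, B, Q, R, N):
    """Finite-horizon discrete-time LQR via backward dynamic programming.
    Minimizes sum of x'Qx + u'Ru for x_{k+1} = A x_k + B u_k and returns
    the feedback gains K_k (u_k = -K_k x_k) plus the initial cost matrix."""
    P = Q.copy()                 # terminal cost-to-go P_N = Q
    gains = []
    for _ in range(N):
        # Riccati recursion: P_k = Q + A'P A - A'P B (R + B'P B)^-1 B'P A
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1], P

# Scalar example: x_{k+1} = x_k + u_k with stage cost x^2 + u^2.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
gains, P0 = lqr_backward(A, B, Q, R, N=50)
print(P0[0, 0])   # converges to the golden ratio (1 + sqrt(5)) / 2
```

For this scalar system the cost-to-go fixed point solves p = 1 + p / (1 + p), i.e. p = (1 + sqrt(5)) / 2, which the recursion reaches in a few dozen stages.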
Optimal timing of tracheostomy after trauma without associated head injury.
Keenan, Jeffrey E; Gulack, Brian C; Nussbaum, Daniel P; Green, Cindy L; Vaslef, Steven N; Shapiro, Mark L; Scarborough, John E
2015-10-01
Controversy exists over optimal timing of tracheostomy in patients with respiratory failure after blunt trauma. The study aimed to determine whether the timing of tracheostomy affects mortality in this population. The 2008-2011 National Trauma Data Bank was queried to identify blunt trauma patients without concomitant head injury who required tracheostomy for respiratory failure between hospital days 4 and 21. Restricted cubic spline analysis was performed to evaluate the relationship between tracheostomy timing and the odds of in-hospital mortality. The cohort was stratified based on this analysis. Unadjusted characteristics and outcomes were compared. Multivariable logistic regression was used to evaluate the effect of tracheostomy timing on mortality after adjustment for age, gender, race, payor status, level of trauma center, injury severity score, presentation Glasgow coma scale, and thoracic and abdominal abbreviated injury score. There were 9662 patients included in the study. Restricted cubic spline analysis demonstrated a nonlinear relationship between timing of tracheostomy and mortality, with higher odds of mortality occurring with tracheostomy placement within 10 d of admission compared with later time points. The cohort was therefore stratified into early and delayed tracheostomy groups relative to this time point. The resulting groups contained 5402 (55.9%) and 4260 (44.1%) patients, respectively. After multivariable adjustment, the delayed tracheostomy group continued to have significantly reduced odds of mortality (adjusted odds ratio 0.82; 95% confidence interval 0.71-0.95; C-statistic 0.700). Among non-head injured blunt trauma patients with prolonged respiratory failure, tracheostomy placement within 10 d of admission may result in increased mortality compared with later time points. Copyright © 2015 Elsevier Inc. All rights reserved.
Wang, Tiancai; He, Xing; Huang, Tingwen; Li, Chuandong; Zhang, Wei
2017-09-01
The economic emission dispatch (EED) problem aims to control generation cost and reduce the impact of waste gas on the environment. It has multiple constraints and nonconvex objectives. To solve it, the collective neurodynamic optimization (CNO) method, which combines a heuristic approach with a projection neural network (PNN), is applied to optimize the scheduling of an electrical microgrid with ten thermal generators and minimize the sum of generation and emission costs. Because the objective function has nondifferentiable points arising from the valve-point effect (VPE), a differential inclusion approach is employed in the PNN model to deal with them. Under certain conditions, the local optimality and convergence of the dynamic model for the optimization problem are analyzed. The capability of the algorithm is verified in a complicated situation, where transmission loss and prohibited operating zones are considered. In addition, the dynamic variation of load power at the demand side is considered and the optimal scheduling of generators within 24 h is described. Copyright © 2017 Elsevier Ltd. All rights reserved.
Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood
Asadi, A.R.; Roos, C.
2015-01-01
In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.
Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj
2018-02-01
N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent and the time taken for the consensus to occur, are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N²) run-time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.
An Optimization Method of Time Window Based on Travel Time and Reliability
Directory of Open Access Journals (Sweden)
Fengjie Fu
2015-01-01
The dynamic change of urban road travel time was analyzed using video image detector data and showed cyclic variation, so the signal cycle length at the upstream intersection was adopted as the basic unit of the time window. There was some evidence of bimodality in the actual travel time distributions; therefore, the parameters of a bimodal travel time distribution were fitted using the EM algorithm. The weighted average of the two component means was then taken as the travel time estimate, and the Modified Buffer Time Index (MBIT) expressed the travel time variability. Based on the characteristics of travel time change and the MBIT over different time windows, the time window was optimized dynamically to minimize the MBIT, subject to the requirements that the travel time change stay below a threshold value and that traffic incidents be detectable in real time. Finally, travel times on Shandong Road in Qingdao were estimated using 10 s, 120 s, and 480 s windows and the optimal time windows; the comparisons demonstrated that travel time estimation with optimal time windows reflects real-time traffic accurately and stably, verifying the effectiveness of the optimization method.
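The estimation step described in this abstract, fitting a two-component (bimodal) distribution by EM and taking the weighted average of the two means as the travel-time estimate, can be sketched generically. This is a standard two-Gaussian EM on synthetic data, not the authors' code or data:

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """Fit a two-component Gaussian mixture to 1-D data by EM."""
    # crude initialization from the data spread
    mu = np.array([x.min(), x.max()], dtype=float)
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        d = (x[:, None] - mu) / sd
        pdf = w * np.exp(-0.5 * d**2) / (sd * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: reweighted means, variances and mixing proportions
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        w = nk / len(x)
    return w, mu, sd

# Synthetic bimodal "travel times" (minutes); true modes at 10 and 20.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(10, 1, 600), rng.normal(20, 2, 400)])
w, mu, sd = em_two_gaussians(x)
estimate = float(w @ mu)   # weighted average of the two means, as in the paper
print(np.sort(mu), estimate)
```

The mixture weights w play the role of the proportions of the two traffic regimes, so the weighted mean tracks the overall expected travel time even when the distribution is bimodal.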
On Implementing a Homogeneous Interior-Point Algorithm for Nonsymmetric Conic Optimization
DEFF Research Database (Denmark)
Skajaa, Anders; Jørgensen, John Bagterp; Hansen, Per Christian
Based on earlier work by Nesterov, an implementation of a homogeneous infeasible-start interior-point algorithm for solving nonsymmetric conic optimization problems is presented. Starting each iteration from (the vicinity of) the central path, the method computes (nearly) primal-dual symmetric approximate tangent directions followed by a purely primal centering procedure to locate the next central primal-dual point. Features of the algorithm include that it makes use only of the primal barrier function, that it is able to detect infeasibilities in the problem and that no phase-I method is needed...
Novel Verification Method for Timing Optimization Based on DPSO
Directory of Open Access Journals (Sweden)
Chuandong Chen
2018-01-01
Timing optimization for logic circuits is one of the key steps in logic synthesis. Existing results are mainly based on various intelligent algorithms. Hence, they are neither comparable with the timing optimization data produced by the mainstream electronic design automation (EDA) tools nor able to verify the superiority of intelligent algorithms over the EDA tools in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of mixed polarity Reed-Muller (MPRM) logic circuits. Second, the Design Compiler (DC) tool was used to optimize the timing of the same MPRM logic circuits through special settings and constraints. Finally, the timing optimization results of the two approaches were compared based on MCNC benchmark circuits. Compared with DC, DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths for a number of MCNC benchmark circuits. The proposed verification method directly ascertains whether an intelligent algorithm has a better timing optimization ability than DC.
Solar sail time-optimal interplanetary transfer trajectory design
International Nuclear Information System (INIS)
Gong Shengpin; Gao Yunfeng; Li Junfeng
2011-01-01
The fuel consumption associated with some interplanetary transfer trajectories using chemical propulsion is not affordable. A solar sail is a method of propulsion that does not consume fuel. Transfer time is one of the most pressing problems of solar sail transfer trajectory design. This paper investigates the time-optimal interplanetary transfer trajectories to a circular orbit of given inclination and radius. The optimal control law is derived from the maximum principle. An indirect method is used to solve the optimal control problem by selecting values for the initial adjoint variables, which are normalized within a unit sphere. The conditions for the existence of the time-optimal transfer depend on the lightness number of the sail and the inclination and radius of the target orbit. A numerical method is used to obtain the boundary values for the time-optimal transfer trajectories. For the cases where no time-optimal transfer trajectories exist, first-order necessary conditions of the optimal control are proposed to obtain feasible solutions. The results show that the transfer time decreases as the minimum distance from the Sun during the transfer decreases. For a solar sail with a small lightness number, the transfer time may be evaluated analytically for a three-phase transfer trajectory. The analytical results are compared with previous results and the associated numerical results. The transfer time of the numerical result here is smaller than the transfer time from previous results and is larger than the analytical result.
Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav
2016-01-01
The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds.
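The space-filling-curve trick mentioned in this abstract (imposing a 1-D linear order on multidimensional discrete points so that a population-based search can converge) can be illustrated with the Morton (Z-order) curve, one common choice of space-filling curve; the abstract does not specify which curve the MDDE uses, so this is a representative sketch:

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of non-negative integer coordinates (x, y) into a
    Z-order (Morton) key, giving a locality-preserving 1-D ordering of 2-D
    discrete points."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # x bits at even positions
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bits at odd positions
    return key

# Sorting a point cloud by Morton keys keeps spatially close points close
# in the 1-D ordering, which is what gives the mutation operator a
# "convenient course" through the discrete data.
points = [(2, 3), (0, 0), (1, 1), (3, 3), (0, 2)]
ordered = sorted(points, key=lambda p: morton_key(*p))
print(ordered)   # [(0, 0), (1, 1), (0, 2), (2, 3), (3, 3)]
```

With such an ordering in place, a discrete mutation like "move k positions along the curve" stays spatially local, which is hard to achieve with unordered vertex indices.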
Benchmarking of radiological departments. Starting point for successful process optimization
International Nuclear Information System (INIS)
Busch, Hans-Peter
2010-01-01
Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)
Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization
Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.
2009-01-01
We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step,
Optimal scheduling using priced timed automata
DEFF Research Database (Denmark)
Behrmann, Gerd; Larsen, Kim Guldstrand; Rasmussen, Jacob Illum
2005-01-01
This contribution reports on the considerable effort made recently towards extending and applying well-established timed automata technology to optimal scheduling and planning problems. The effort of the authors in this direction has to a large extent been carried out as part of the European projects VHS [20] and AMETIST [16] and is available in the recently released UPPAAL CORA [12], a variant of the real-time verification tool UPPAAL [18, 5] specialized for cost-optimal reachability for the extended model of so-called priced timed automata...
Directory of Open Access Journals (Sweden)
Yutong Liu
2012-01-01
Purpose. To develop a technique to automate landmark selection for point-based interpolating transformations for nonlinear medical image registration. Materials and Methods. Interpolating transformations were calculated from homologous point landmarks on the source (image to be transformed) and target (reference) images. Point landmarks are placed at regular intervals on contours of anatomical features, and their positions are optimized along the contour surface by a function composed of curvature similarity and displacements of the homologous landmarks. The method was evaluated in two cases (n = 5 each). In one, MRI was registered to histological sections; in the second, geometric distortions in EPI MRI were corrected. Normalized mutual information and target registration error were calculated to compare the registration accuracy of the automatically and manually generated landmarks. Results. Statistical analyses demonstrated significant improvement (p < 0.05) in registration accuracy by landmark optimization in most data sets and trends towards improvement (p < 0.1) in others as compared to manual landmark selection.
Time Optimal Reachability Analysis Using Swarm Verification
DEFF Research Database (Denmark)
Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand
2016-01-01
Time optimal reachability analysis employs model-checking to compute goal states that can be reached from an initial state with a minimal accumulated time duration. The model-checker may produce a corresponding diagnostic trace which can be interpreted as a feasible schedule for many scheduling and planning problems, response time optimization etc. We propose swarm verification to accelerate time optimal reachability using the real-time model-checker Uppaal. In swarm verification, a large number of model checker instances execute in parallel on a computer cluster using different, typically randomized search strategies. We develop four swarm algorithms and evaluate them with four models in terms of scalability, and time- and memory consumption. Three of these cooperate by exchanging costs of intermediate solutions to prune the search using a branch-and-bound approach. Our results show that swarm...
Multiobjective Optimization for Electronic Circuit Design in Time and Frequency Domains
Directory of Open Access Journals (Sweden)
J. Dobes
2013-04-01
The multiobjective optimization provides an extraordinary opportunity for the finest design of electronic circuits because it allows one to mathematically balance contradictory requirements together with possible constraints. In this paper, an original and substantial improvement of an existing method for multiobjective optimization known as GAM (Goal Attainment Method) is suggested. In our proposal, the GAM algorithm itself is combined with a procedure that automatically provides a set of parameters -- weights and coordinates of the reference point -- for which the method generates noninferior solutions uniformly spread over an appropriately selected part of the Pareto front. Moreover, the resulting set of obtained solutions is then presented in a suitable graphic form so that the solution representing the most satisfactory tradeoff can be easily chosen by the designer. Our system generates various types of plots that conveniently characterize results of up to four-dimensional problems. Technically, the procedures of the multiobjective optimization were created as a software add-on to the CIA (Circuit Interactive Analyzer) program. This enabled us to utilize many powerful features of this program, including the sensitivity analyses in time and frequency domains. As a result, the system is also able to perform the multiobjective optimization in the time domain, and even highly nonlinear circuits can be significantly improved by our program. As a demonstration of this feature, a multiobjective optimization of a C-class power amplifier in the time domain is thoroughly described in the paper. Further, a four-dimensional optimization of a video amplifier is demonstrated with an original graphic representation of the Pareto front, and a comparison with the weighting method is made. As an example of improving noise properties, a multiobjective optimization of a low-noise amplifier is performed, and the results in the frequency domain are shown.
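The goal-attainment scalarization at the core of GAM can be sketched on a toy problem: minimize the attainment factor gamma such that each objective f_i stays within w_i * gamma of its goal g_i. The two objectives, goals and weights below are invented for illustration (the paper applies GAM to circuit performance measures inside a circuit analyzer):

```python
import numpy as np

def gam_value(x, objectives, goals, weights):
    """Attainment factor gamma(x) = max_i (f_i(x) - g_i) / w_i.
    Minimizing gamma over x pushes every objective toward (or past)
    its goal, with the weights trading the objectives off."""
    f = np.array([obj(x) for obj in objectives])
    return float(np.max((f - goals) / weights))

# Two competing objectives of a single design variable x in [0, 1].
objectives = [lambda x: x, lambda x: 1.0 - x]
goals = np.array([0.0, 0.0])
weights = np.array([1.0, 1.0])

# Coarse search over the design variable; each weight vector picks out a
# different noninferior (Pareto) point, which is how GAM sweeps the front.
xs = np.linspace(0.0, 1.0, 1001)
vals = [gam_value(x, objectives, goals, weights) for x in xs]
best = float(xs[int(np.argmin(vals))])
print(best, min(vals))   # balanced tradeoff at x = 0.5, gamma = 0.5
```

Re-running the search with different weight vectors yields different Pareto-optimal designs; the paper's contribution is choosing those weights and reference points automatically so the resulting solutions spread uniformly over the front.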
The Optimal Timing of Strategic Action – A Real Options Approach
Directory of Open Access Journals (Sweden)
Gordon G. Sollars
2012-01-01
The possibility of a first-mover advantage arises in a variety of strategic choices, including product introductions, business start-ups, and mergers and acquisitions. The strategic management literature reflects ambiguity regarding the likelihood that a first mover can or will capture additional value. This paper uses a real options approach to address the optimal timing of strategic moves. Previous studies have modeled real options using either a perpetual or a European financial option. With these models, a strategic choice could only be made either without respect to a time frame (perpetual) or at a fixed point in time (European option). Neither case is realistic. Companies typically have strategic options with only a limited time frame due to market factors, but they may choose to act at any time within that constraint. To reflect this reality, we adapt a method for valuing an American financial option on a dividend-paying stock to the real options context. The method presented in this paper proposes the optimal project value that should trigger a strategic choice, and highlights the value lost by not acting optimally. We use simulation results to show that the time frame available for making a strategic choice has an important effect both on the project value at which action should be taken and on the value of waiting to invest at the optimal time. The results presented in this paper help to clarify the ambiguity found in the strategic management literature regarding the possibility of obtaining a first-mover advantage. Indeed, a first mover that acts sub-optimally could incur losses or at least not gain any advantage. A first mover that waits to invest at the right time, based on the superior information supplied by models based on real options, could be better positioned to obtain the benefits that might come from the first move.
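The American-option machinery the paper adapts can be sketched with a standard Cox-Ross-Rubinstein binomial tree, where backward induction compares the continuation value with immediate exercise at every node, the discrete analogue of the "act now or wait" decision. This generic pricing sketch uses an American put with illustrative parameters, not the paper's model or numbers:

```python
import math

def american_put_crr(s0, k, r, sigma, t, n=500):
    """Price an American put on a Cox-Ross-Rubinstein binomial tree.
    At each node the holder takes the maximum of the continuation value
    and the immediate-exercise payoff, so the early-exercise boundary
    (the optimal 'time to act') emerges from the recursion."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    # option values at maturity
    values = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for step in range(n - 1, -1, -1):
        for j in range(step + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = k - s0 * u**j * d**(step - j)
            values[j] = max(cont, exercise)     # act early if it pays
    return values[0]

price = american_put_crr(s0=100, k=100, r=0.05, sigma=0.2, t=1.0)
print(price)   # exceeds the European value because of early exercise
```

The gap between this price and the corresponding European value is exactly the "value of the right to choose when to act" that the paper transfers to strategic decisions.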
Nemati, Mahdieh; Santos, Abel
2018-01-01
Herein, we present an innovative strategy for optimizing hierarchical structures of nanoporous anodic alumina (NAA) to advance their optical sensing performance toward multi-analyte biosensing. This approach is based on the fabrication of multilayered NAA and the formation of differential effective medium of their structure by controlling three fabrication parameters (i.e., anodization steps, anodization time, and pore widening time). The rationale of the proposed concept is that interferometric bilayered NAA (BL-NAA), which features two layers of different pore diameters, can provide distinct reflectometric interference spectroscopy (RIfS) signatures for each layer within the NAA structure and can therefore potentially be used for multi-point biosensing. This paper presents the structural fabrication of layered NAA structures, and the optimization and evaluation of their RIfS optical sensing performance through changes in the effective optical thickness (EOT) using quercetin as a model molecule. The bilayered or funnel-like NAA structures were designed with the aim of characterizing the sensitivity of both layers of quercetin molecules using RIfS and exploring the potential of these photonic structures, featuring different pore diameters, for simultaneous size-exclusion and multi-analyte optical biosensing. The sensing performance of the prepared NAA platforms was examined by real-time screening of binding reactions between human serum albumin (HSA)-modified NAA (i.e., sensing element) and quercetin (i.e., analyte). BL-NAAs display a complex optical interference spectrum, which can be resolved by fast Fourier transform (FFT) to monitor the EOT changes, where three distinctive peaks were revealed corresponding to the top, bottom, and total layer within the BL-NAA structures. The spectral shifts of these three characteristic peaks were used as sensing signals to monitor the binding events in each NAA pore in real-time upon exposure to different concentrations of
Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors
Tun, Min Thaw; Sakaguchi, Daisaku
2016-06-01
A high pressure ratio and a wide operating range are strongly required for a turbocharger in diesel engines. A recirculation flow type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves on a suction pipe and a shroud casing wall are connected by means of an annular passage, and a stable recirculation flow is formed at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized by using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization results give the optimized casing design, improving adiabatic efficiency over a wide operating flow rate range. A sensitivity analysis of the design parameters as a function of efficiency has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, in which the increment of entropy rise is minimized at the grooves and passages of the rotating impeller.
Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.
2011-02-01
Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
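As a toy illustration of the optimal-sampling idea (not the paper's nonlinear mixed-effects model), the following sketch simulates a patient population with variable radioiodine clearance and grid-searches the single measurement time that minimizes the RMSE of a one-point estimate of the time-integrated activity. The kinetic model, parameter values, and noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def activity(t, a, b):
    """Bateman-type uptake/clearance curve (toy model, not the paper's
    nonlinear mixed-effects model)."""
    return np.exp(-a * t) - np.exp(-b * t)

# Simulated population: clearance rate a varies between patients, uptake rate b fixed.
a_pop = rng.lognormal(mean=np.log(0.12), sigma=0.3, size=500)  # 1/day
b = 2.0                                                        # 1/day

def tia(a):
    """Time-integrated activity coefficient of the toy curve (integral 0..inf)."""
    return 1.0 / a - 1.0 / b

a_ref = 0.12                             # population-median template parameter
candidates = np.arange(1.0, 15.0, 0.5)   # candidate sampling days
rmse = []
for t in candidates:
    # Noisy one-point measurement, scaled by the template ratio to estimate TIA.
    meas = activity(t, a_pop, b) * (1 + 0.05 * rng.standard_normal(a_pop.size))
    est = meas * tia(a_ref) / activity(t, a_ref, b)
    rmse.append(np.sqrt(np.mean((est - tia(a_pop)) ** 2)))

t_opt = candidates[int(np.argmin(rmse))]
print(f"optimal 1-point sampling day: {t_opt}")
```

The grid search mirrors the paper's criterion (minimum RMSE over the population) in miniature; with a real mixed-effects model the estimator and error distribution would, of course, differ.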
Energy Technology Data Exchange (ETDEWEB)
Zhu, Liang; Xue, Hua-dan; Liu, Wei; Wang, Xuan; Sun, Hao; Li, Ping; Jin, Zheng-yu [Peking Union Medical College Hospital, Department of Radiology, Beijing (China); Wu, Wen-ming; Zhao, Yu-pei [Peking Union Medical College Hospital, Department of General Surgery, Beijing (China)
2017-08-15
To assess enhancement patterns of sporadic insulinomas on volume perfusion CT (VPCT), and to identify timing of optimal tumour-parenchyma contrast. Consecutive patients who underwent VPCT for clinically suspected insulinomas were retrospectively identified. Patients with insulinomas confirmed by surgery were included, and patients with familial syndromes were excluded. Two radiologists evaluated VPCT images in consensus. Tumour-parenchyma contrast at each time point was measured, and timing of optimal contrast was determined. Time duration of hyperenhancement (tumour-parenchyma contrast >20 Hounsfield units, HU) was recorded. Perfusion parameters were evaluated. Three dynamic enhancement patterns were observed in 63 tumours: persistent hyperenhancement (hyperenhancement time window ≥10 s) in 39 (61.9%), transient hyperenhancement (hyperenhancement <10 s) in 19 (30.2%) and non-hyperenhancement in 5 (7.9%). Timing of optimal contrast was 9 s after abdominal aorta threshold (AAT) of 200 HU, with tumour-parenchyma contrast of 77.6 ± 57.2 HU. At 9 s after AAT, 14 (22.2%) tumours were non-hyperenhancing, nine of which had missed transient hyperenhancement. Insulinomas with transient and persistent hyperenhancement patterns had significantly increased perfusion. Insulinomas have variable enhancement patterns. Tumour-parenchyma contrast is time-dependent. Optimal timing of enhancement is 9 s after AAT. VPCT enables tumour detection even if the hyperenhancement is transient. (orig.)
Optimization Algorithms for Calculation of the Joint Design Point in Parallel Systems
DEFF Research Database (Denmark)
Enevoldsen, I.; Sørensen, John Dalsgaard
1992-01-01
In large structures it is often necessary to estimate the reliability of the system by use of parallel systems. Optimality criteria-based algorithms for calculation of the joint design point in a parallel system are described and efficient active set strategies are developed. Three possible...
Energy Optimization Assessment at U.S. Army Installations: West Point Military Academy, NY
2008-09-01
chillers to work unnecessarily more than needed. Other buildings had setpoints at different areas above 55 °F. Many buildings are air-conditioned and... optimal. The cost of 12.5 cents/kWh makes it unlikely, especially where steam adsorption chillers exist. 11.8.2 Solution: Use the existing steam... ERDC/CERL TR-08-14, Energy Optimization Assessment at U.S. Army Installations: West Point Military Academy, NY. David M...
Zhang, Lifeng; Zhang, Hui; Wang, Lin; Liu, Yanyan; Sun, Xianyue; Li, Lingyan; Hou, Jing
2015-01-01
By using the orthogonal design method to optimize the prescription of pulsed electric field at Jiaji (EX-B 2) points for spinal cord injury (SCI). Fifty-six patients with SCI were selected, of whom 36 cases were assigned to the orthogonal design trial and 20 cases to clinical verification. For the 36 patients who received the orthogonal design trial, the Frankel grading scale was used as the observation index to screen the optimal prescription of pulsed electric field. Pulse frequency (factor A) included low frequency (A(I), 10² Hz), moderate frequency (A(II), 10⁴ Hz) and high frequency (A(III), 10³ Hz); pulse amplitude (factor B) included 0-30 V (B(I)), 0-60 V (B(II)) and 0-90 V (B(III)); pulse width (factor C) included 0.1 ms (C(I)), 0.6 ms (C(II)) and 0.9 ms (C(III)); acupuncture time (factor D) included one month (D(I)), three months (D(II)) and five months (D(III)). Twenty patients were used for clinical efficacy observation, and the effects of the screened optimal prescription of pulsed electric field at Jiaji (EX-B 2) points combined with regular rehabilitation training on spasm severity, scores of sensory and motor functions, Barthel index and Frankel score were observed. (1) As the result of the orthogonal design trial, the optimal prescription was A(III), B(III), C(I), D(III), i.e., high frequency (10³ Hz), 0-90 V pulse amplitude, 0.4 ms pulse width and 5 months of treatment time. (2) In the clinical verification of 20 patients, Ashworth score, tendon reflex and clonus were all significantly improved (P ...). The optimal prescription of pulsed electric field at Jiaji (EX-B 2) points for spinal cord injury is high frequency (10³ Hz), 0-90 V pulse amplitude, 0.4 ms pulse width and 5 months of treatment time. The optimal prescription of pulsed electric field at Jiaji (EX-B 2) points combined with regular rehabilitation could obviously improve spasm severity, enhance sensory and motor functions, and ameliorate activities of daily life and
Directory of Open Access Journals (Sweden)
Ahmed M. Ali
2018-02-01
Full Text Available In light of increasing alerts about limited energy sources and environmental degradation, it has become essential to search for alternatives to thermal-engine-based vehicles, which are a major source of air pollution and fossil fuel depletion. Hybrid electric vehicles (HEVs), encompassing multiple energy sources, are a short-term solution that meets the performance requirements and contributes to fuel saving and emission reduction aims. Power management methods, which regulate efficient energy flow to the vehicle propulsion system, are core technologies of HEVs. Intelligent power management methods, capable of acquiring optimal power handling, accommodating system inaccuracies, and suiting real-time applications, can significantly improve powertrain efficiency at different operating conditions. Rule-based methods are simply structured and easily implementable in real time; however, only limited optimality in power handling decisions can be achieved. Optimization-based methods are more capable of achieving this optimality at the price of an augmented computational load. In the last few years, these optimization-based methods have been under development to suit real-time application using more predictive, recognitive, and artificial intelligence tools. This paper presents a review-based discussion of these new trends in real-time optimal power management methods. More focus is given to the adaptation tools used to boost the optimality of these methods in real time. The contribution of this work can be identified in two points: first, to provide researchers and scholars with an overview of different power management methods; second, to point out the state-of-the-art trends in real-time optimal methods and to highlight promising approaches for future development.
Time-optimal control of infinite order distributed parabolic systems involving time lags
Directory of Open Access Journals (Sweden)
G.M. Bahaa
2014-06-01
Full Text Available A time-optimal control problem for linear infinite order distributed parabolic systems with constant time lags appearing both in the state equation and in the boundary condition is presented. Some particular properties of the optimal control are discussed.
Brown, Jonathan M.; Petersen, Jeremy D.
2014-01-01
NASA's WIND mission has been operating in a large amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or Δv, along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the Δv in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy, which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.
History and Point in Time in Enterprise Applications
Directory of Open Access Journals (Sweden)
Constantin Gelu APOSTOL
2006-01-01
Full Text Available The first part points out the main differences between temporal and non-temporal databases. In the second part, based on the identification of the three main categories of time involved in database applications (user-defined time, valid time and transaction time), some relevant solutions for their implementation are discussed, mainly from the point of view of database organization and the data access level of enterprise applications. The final part is dedicated to the influence of historical data on the business logic and presentation levels of enterprise applications and on application services such as security, workflow and reporting.
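The distinction between valid time and transaction time can be sketched with a minimal bitemporal record store. The data and the `as_of` helper below are invented for illustration; production systems would use the DBMS's temporal-table support instead:

```python
from datetime import date

# Minimal bitemporal store: each fact carries a valid-time interval (when it
# was true in the real world) and a transaction-time interval (when the
# database believed it). An "as-of" query needs both time points.
rows = [
    # (key, value, valid_from, valid_to, tx_from, tx_to)
    ("salary", 100, date(2005, 1, 1), date(2006, 1, 1),
     date(2005, 1, 5), date(2005, 6, 1)),              # superseded record
    ("salary", 110, date(2005, 1, 1), date(2006, 1, 1),
     date(2005, 6, 1), date.max),                      # correction entered 2005-06-01
]

def as_of(key, valid_at, known_at):
    """Value of `key` valid at `valid_at`, as the database knew it at `known_at`."""
    for k, v, vf, vt, tf, tt in rows:
        if k == key and vf <= valid_at < vt and tf <= known_at < tt:
            return v
    return None

print(as_of("salary", date(2005, 3, 1), date(2005, 2, 1)))  # belief before the correction
print(as_of("salary", date(2005, 3, 1), date(2006, 1, 1)))  # belief after the correction
```

The two queries ask about the same real-world moment but at different transaction times, which is exactly the history/point-in-time distinction the article discusses.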
Optimal Control Of Nonlinear Wave Energy Point Converters
DEFF Research Database (Denmark)
Nielsen, Søren R.K.; Zhou, Qiang; Kramer, Morten
2013-01-01
The idea behind the control strategy is to enforce the stationary velocity response of the absorber into phase with the wave excitation force at any time. The controller is optimal under monochromatic wave excitation. It is demonstrated that the devised causal controller, in plane irregular sea states...
Real-Time Optimization for Economic Model Predictive Control
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Edlund, Kristian; Frison, Gianluca
2012-01-01
In this paper, we develop an efficient homogeneous and self-dual interior-point method for the linear programs arising in economic model predictive control. To exploit structure in the optimization problems, the algorithm employs a highly specialized Riccati iteration procedure. Simulations show...
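A structure-agnostic toy version of the linear programs arising in economic MPC can be solved with SciPy's interior-point LP solver. Note that the paper's contribution is a Riccati-based interior-point method that exploits the MPC structure, which SciPy does not implement; the storage-dispatch numbers below are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Toy economic MPC step: choose inputs u_0..u_{N-1} to meet demand at minimum
# cost, with storage dynamics x_{k} = x_{k-1} + u_k - d_k and 0 <= x_k <= x_max.
N = 8
price = np.array([3, 3, 1, 1, 2, 4, 4, 2], dtype=float)  # cost per unit input
d = np.array([1, 2, 1, 1, 2, 2, 1, 1], dtype=float)      # demand per period
x0, x_max, u_max = 2.0, 6.0, 3.0

# x_k = x0 + sum_{j<=k} (u_j - d_j); enforce 0 <= x_k <= x_max for all k.
L = np.tril(np.ones((N, N)))                 # cumulative-sum matrix
cum_d = np.cumsum(d)
A_ub = np.vstack([L, -L])                    # upper bounds, then lower bounds
b_ub = np.concatenate([x_max - x0 + cum_d, x0 - cum_d])

res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, u_max)] * N,
              method="highs-ipm")            # interior-point variant of HiGHS
u = res.x
print("optimal inputs:", np.round(u, 2), "cost:", round(res.fun, 2))
```

The dense LP solve is O(N³) per iteration; the Riccati-based procedure described in the abstract reduces this by exploiting the block-banded structure of the dynamics.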
Homeless Point-In-Time (2007-2016)
City and County of Durham, North Carolina — These raw data sets contain Point-in-Time (PIT) estimates and national PIT estimates of homelessness as well as national estimates of homelessness by state and...
Optimal, real-time control--colliders
International Nuclear Information System (INIS)
Spencer, J.E.
1991-05-01
With reasonable definitions, optimal control is possible for both classical and quantal systems with new approaches called PISC (Parallel) and NISC (Neural), by analogy with RISC (Reduced Instruction Set Computing). If control equals interaction, observation and comparison to some figure of merit, with interaction via external fields, then optimization comes from varying these fields to achieve design or operating goals. Structural stability can then give us tolerance and design constraints. But simulations use simplified models, are not in real time and assume fixed or stationary conditions, so optimal control goes far beyond the convergence rates of algorithms. It is inseparable from design, and this has many implications for colliders. 12 refs., 3 figs
Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A
2015-08-01
This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal using the built-in camera of smartphones or a photoplethysmograph. An optimization process for the signal preprocessing stage has also been carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and the photoplethysmograph, and examine whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
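The two winning fiducial points (peak of the first derivative, minimum of the second derivative) are easy to locate numerically. The sketch below uses a synthetic Gaussian-shaped pulse rather than real camera data, so the exact sample indices are illustrative only:

```python
import numpy as np

# Locate the two fiducial points the study found most reliable on a
# synthetic single pulse: the peak of the first derivative (steepest
# upslope) and the minimum of the second derivative.
fs = 250.0                                   # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
pulse = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # Gaussian-like pulse

d1 = np.gradient(pulse, 1.0 / fs)            # first derivative
d2 = np.gradient(d1, 1.0 / fs)               # second derivative

i_d1max = int(np.argmax(d1))                 # peak of first derivative
i_d2min = int(np.argmin(d2))                 # minimum of second derivative

print("d1 peak at t = %.3f s; d2 min at t = %.3f s" % (t[i_d1max], t[i_d2min]))
```

For a Gaussian centred at 0.3 s with sigma = 0.05 s, calculus places the first-derivative peak one sigma before the centre and the second-derivative minimum at the centre, which is a convenient sanity check for the detector.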
Optimizing departure times in vehicle routes
Kok, A.L.; Hans, Elias W.; Schutten, Johannes M.J.
2008-01-01
Most solution methods for the vehicle routing problem with time windows (VRPTW) develop routes from the earliest feasible departure time. However, in practice, temporary traffic congestion makes such solutions suboptimal with respect to minimizing the total duty time. Furthermore, VRPTW
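The core observation, that the earliest feasible departure need not minimize duty time once travel times depend on the clock, can be shown with a one-leg toy model. The congestion profile, time window, and all numbers are invented:

```python
# Toy departure-time optimization: a piecewise travel-time profile with a
# congestion window. Duty time is counted from the actual departure, so
# waiting out the congestion can shorten the duty even though it delays
# arrival. All numbers are invented for illustration.

def travel_time(dep):
    """Minutes of driving if departing at minute `dep`."""
    if 40 <= dep < 120:        # congestion window
        return 90
    return 45

earliest_dep, deadline = 50, 300   # earliest feasible departure, arrival deadline

best = None
for dep in range(earliest_dep, deadline):
    arr = dep + travel_time(dep)
    if arr > deadline:
        continue                    # misses the customer time window
    duty = arr - dep                # duty time from departure to arrival
    if best is None or duty < best[2]:
        best = (dep, arr, duty)

print("depart", best[0], "arrive", best[1], "duty", best[2])
```

Departing at the earliest feasible minute (50) hits the congestion and costs a 90-minute duty; waiting until minute 120 halves it while still meeting the deadline, which is exactly the effect the abstract describes.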
Constructing an optimal decision tree for FAST corner point detection
Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail
2011-01-01
In this paper, we consider a problem that originates in computer vision: determining an optimal testing strategy for the corner point detection problem that is part of the FAST algorithm [11,12]. The problem can be formulated as building a decision tree with the minimum average depth for a decision table with all discrete attributes. We experimentally compare the performance of an exact algorithm based on dynamic programming and several greedy algorithms that differ in the attribute selection criterion. © 2011 Springer-Verlag.
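The exact dynamic-programming formulation can be sketched on a tiny decision table: the minimum total leaf depth of a subset of rows is the subset size (every row descends one level) plus the optimal totals of the parts induced by the best splitting attribute. The table below is invented and far smaller than a FAST pixel-test table:

```python
from functools import lru_cache

# Toy decision table: each row is a tuple of binary attribute values plus a
# decision label. We compute, by exhaustive memoized DP over row subsets,
# a tree with minimum average depth (the exact baseline the paper compares
# greedy attribute-selection heuristics against).
rows = [  # (attributes, label)
    ((0, 0, 0), "a"), ((0, 1, 0), "b"), ((1, 0, 1), "b"),
    ((1, 1, 0), "a"), ((1, 1, 1), "c"),
]

@lru_cache(maxsize=None)
def min_total_depth(subset):
    """Minimum sum of leaf depths over the rows in `subset` (frozenset of indices)."""
    labels = {rows[i][1] for i in subset}
    if len(labels) <= 1:
        return 0                              # one decision left: a leaf
    best = None
    for attr in range(3):
        parts = {}
        for i in subset:
            parts.setdefault(rows[i][0][attr], set()).add(i)
        if len(parts) < 2:
            continue                          # attribute doesn't split this subset
        # Every row in `subset` goes one level deeper, then recurse per branch.
        total = len(subset) + sum(min_total_depth(frozenset(p))
                                  for p in parts.values())
        best = total if best is None else min(best, total)
    return best  # rows with different labels always differ in some attribute here

avg_depth = min_total_depth(frozenset(range(len(rows)))) / len(rows)
print("minimum average depth:", avg_depth)
```

The memoization over subsets is what makes the exact algorithm tractable for small tables; greedy criteria replace the inner `min` with a single heuristic attribute choice.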
Optimizing Departure Times in Vehicle Routes
Kok, A.L.; Hans, Elias W.; Schutten, Johannes M.J.
2011-01-01
Most solution methods for the vehicle routing problem with time windows (VRPTW) develop routes from the earliest feasible departure time. In practice, however, temporary traffic congestion makes such solutions non-optimal with respect to minimizing the total duty time. Furthermore, the VRPTW does not
Directory of Open Access Journals (Sweden)
Mahdieh Nemati
2018-02-01
Full Text Available Herein, we present an innovative strategy for optimizing hierarchical structures of nanoporous anodic alumina (NAA) to advance their optical sensing performance toward multi-analyte biosensing. This approach is based on the fabrication of multilayered NAA and the formation of differential effective medium of their structure by controlling three fabrication parameters (i.e., anodization steps, anodization time, and pore widening time). The rationale of the proposed concept is that interferometric bilayered NAA (BL-NAA), which features two layers of different pore diameters, can provide distinct reflectometric interference spectroscopy (RIfS) signatures for each layer within the NAA structure and can therefore potentially be used for multi-point biosensing. This paper presents the structural fabrication of layered NAA structures, and the optimization and evaluation of their RIfS optical sensing performance through changes in the effective optical thickness (EOT) using quercetin as a model molecule. The bilayered or funnel-like NAA structures were designed with the aim of characterizing the sensitivity of both layers of quercetin molecules using RIfS and exploring the potential of these photonic structures, featuring different pore diameters, for simultaneous size-exclusion and multi-analyte optical biosensing. The sensing performance of the prepared NAA platforms was examined by real-time screening of binding reactions between human serum albumin (HSA)-modified NAA (i.e., sensing element) and quercetin (i.e., analyte). BL-NAAs display a complex optical interference spectrum, which can be resolved by fast Fourier transform (FFT) to monitor the EOT changes, where three distinctive peaks were revealed corresponding to the top, bottom, and total layer within the BL-NAA structures. The spectral shifts of these three characteristic peaks were used as sensing signals to monitor the binding events in each NAA pore in real-time upon exposure to different
Energy Technology Data Exchange (ETDEWEB)
Kim, Y [University Of Iowa, College of Medicine, Iowa City, IA (United States)]
2016-06-15
Purpose: To test the impact of the use of apex optimization points for new vaginal cylinder (VC) applicators. Methods: New "ClickFit" single-channel VC applicators (Varian) that have different top thicknesses but the same diameters as the old VC applicators (2.3 cm, 2.6 cm, 3.0 cm, and 3.5 cm diameter) were compared using phantom studies. Old VC applicator plans without apex optimization points were also compared to the plans with the optimization points. The apex doses were monitored at 5 mm depth doses (8 points) where a prescription dose (Rx) of 6 Gy was prescribed. VC surface doses (8 points) were also analyzed. Results: The new VC applicator plans without apex optimization points presented significantly lower 5 mm depth doses than Rx (on average −31 ± 7%, p < 0.00001) due to their thicker VC tops (3.4 ± 1.1 mm thicker, with a range of 1.2 to 4.4 mm) than the old VC applicators. Old VC applicator plans also showed a statistically significant reduction (p < 0.00001) due to the Ir-192 source anisotropy effect at the apex region, but the % reduction over Rx was only −7 ± 9%. However, by adding apex optimization points to the new VC applicator plans, the plans improved 5 mm depth doses (−7 ± 9% over Rx) that were not statistically different from old VC plans (p = 0.923), along with apex VC surface doses (−22 ± 10% over old VC versus −46 ± 7% without using apex optimization points). Conclusion: The use of apex optimization points is important in order to avoid significant additional cold doses (−24 ± 2%) at the prescription depth (5 mm) of the apex, especially for the new VC applicators that have thicker tops.
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
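The information-criterion idea can be sketched in 1-D: fit least-squares cubic splines with an increasing number of interior knots (hence control points) and pick the knot count minimizing AIC, replacing the trial-and-error choice the paper criticizes. The test curve, noise level, and knot grid are invented; the paper's VC-dimension method is not shown here:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Choose the number of B-spline control points by AIC instead of trial and
# error (toy 1-D analogue of the curve case discussed in the paper).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

n, k = x.size, 3                             # sample size, spline degree
scores = {}
for m in range(1, 15):                       # m uniform interior knots
    knots = np.linspace(0.0, 1.0, m + 2)[1:-1]
    spl = LSQUnivariateSpline(x, y, knots, k=k)
    rss = float(np.sum((spl(x) - y) ** 2))
    p = m + k + 1                            # number of control points
    scores[m] = n * np.log(rss / n) + 2 * p  # AIC with Gaussian residuals

m_best = min(scores, key=scores.get)
print("AIC-optimal interior knots:", m_best,
      "-> control points:", m_best + k + 1)
```

Swapping the penalty `2 * p` for `np.log(n) * p` gives the BIC variant; both make the model choice reproducible, which is the point of the abstract.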
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Beheshti, Mohsen; Paymani, Zeinab; Brilhante, Joana; Geinitz, Hans; Gehring, Daniela; Leopoldseder, Thomas; Wouters, Ludovic; Pirich, Christian; Loidl, Wolfgang; Langsteger, Werner
2018-07-01
In this prospective study, we evaluated the optimal time-point for 68Ga-PSMA-11 PET/CT acquisition in the assessment of prostate cancer. We also examined, for the first time, the feasibility of tracer production using a PSMA-11 sterile cold kit in the clinical workflow of PET/CT centres. Fifty prostate cancer patients (25 staging, 25 biochemical recurrence) were enrolled in this study. All patients received an intravenous dose of 2.0 MBq/kg body weight of 68Ga-PSMA-11 prepared using a sterile cold kit (ANMI SA, Liege, Belgium), followed by an early (20 min after injection) semi-whole-body PET/CT scan and a standard-delay (100 min after injection) abdominopelvic PET/CT scan. The detection rates with 68Ga-PSMA-11 were compared between the two acquisitions. The pattern of physiological background activity and the tumour-to-background ratio were also analysed. The total preparation time was reduced to 5 min using the PSMA-11 sterile cold kit, which improved the final radionuclide activity by about 30% per single 68Ge/68Ga generator elution. Overall, 158 pathological lesions were analysed in 45 patients (90%) suggestive of malignancy on both (early and standard-delay) 68Ga-PSMA PET/CT images. There was a significant (p ...) ... PET/CT imaging seems to provide a detection rate comparable with that of standard-delay imaging. Furthermore, the shorter preparation time using the 68Ga-PSMA-11 sterile cold kit and the promising value of early PET/CT scanning could allow tailoring of imaging protocols, which may reduce costs and improve time efficiency in PET/CT centres.
Discrete-time optimal control and games on large intervals
Zaslavski, Alexander J
2017-01-01
Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions in an interval, independent of its length, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems. Next, the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...
On the Optimization of Point Absorber Buoys
Directory of Open Access Journals (Sweden)
Linnea Sjökvist
2014-05-01
Full Text Available A point absorbing wave energy converter (WEC) is a complicated dynamical system. A semi-submerged buoy drives a power take-off device (PTO), which acts as a linear or non-linear damper of the WEC system. The buoy motion depends on the buoy geometry and dimensions, the mass of the moving parts of the system and the damping force from the generator. The electromagnetic damping in the generator depends on the generator specifications, the connected load and the buoy velocity. In this paper a velocity ratio has been used to study how the geometric parameters buoy draft and radius, assuming a constant generator damping coefficient, affect the motion and the energy absorption of a WEC. It has been concluded that an optimal buoy geometry can be identified for a specific generator damping. The simulated WEC performance has been compared with experimental values from two WECs with similar generators but different buoys. Conclusions have been drawn about their behaviour.
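The interplay between generator damping and absorbed power can be illustrated with the simplest linear point-absorber model: a mass-spring-damper buoy under monochromatic wave forcing, with the PTO as a linear damper. All parameter values below are invented and much cruder than the paper's hydrodynamic model:

```python
import numpy as np

# Toy point absorber: m*x'' + (c_hyd + c_gen)*x' + k*x = F0*cos(w*t).
# Average absorbed power is 0.5 * c_gen * v_amp^2; sweeping the generator
# damping c_gen exposes the classic optimum.
m, k = 1000.0, 5000.0        # mass incl. added mass (kg), hydrostatic stiffness (N/m)
c_hyd = 200.0                # hydrodynamic (radiation) damping (Ns/m)
w = np.sqrt(k / m) * 1.3     # wave frequency, deliberately off resonance (rad/s)
F0 = 2000.0                  # excitation force amplitude (N)

def avg_power(c_gen):
    c = c_hyd + c_gen
    # Steady-state velocity amplitude of the driven linear oscillator.
    v_amp = F0 * w / np.sqrt((k - m * w ** 2) ** 2 + (c * w) ** 2)
    return 0.5 * c_gen * v_amp ** 2

c_grid = np.linspace(1.0, 5000.0, 2000)
p = np.array([avg_power(c) for c in c_grid])
c_opt = c_grid[int(np.argmax(p))]
print("power-optimal generator damping ~", round(c_opt, 1), "Ns/m")
```

For this linear model the optimum is known in closed form, c_gen* = sqrt(c_hyd² + ((k − m·w²)/w)²), so the grid search doubles as a check; the paper's point is that the buoy geometry (which sets m, k and the hydrodynamic coefficients) can likewise be matched to a given generator damping.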
Attention flexibly trades off across points in time.
Denison, Rachel N; Heeger, David J; Carrasco, Marisa
2017-08-01
Sensory signals continuously enter the brain, raising the question of how perceptual systems handle this constant flow of input. Attention to an anticipated point in time can prioritize visual information at that time. However, how we voluntarily attend across time when there are successive task-relevant stimuli has barely been investigated. We developed a novel experimental protocol that allowed us to assess, for the first time, both the benefits and costs of voluntary temporal attention when perceiving a short sequence of two or three visual targets with predictable timing. We found that when humans directed attention to a cued point in time, their ability to perceive orientation was better at that time but also worse earlier and later. These perceptual tradeoffs across time are analogous to those found across space for spatial attention. We concluded that voluntary attention is limited, and selective, across time.
Directory of Open Access Journals (Sweden)
Sangjun Park
2014-01-01
Full Text Available We consider a two-stage supply chain with one supplier and one retailer. The retailer sells a product to customers and the supplier provides the product in a make-to-order mode. In this case, the supplier's decisions on service time and service level and the retailer's decision on retail price affect customer demand. We develop optimization models to determine the optimal retail price, the optimal guaranteed service time, the optimal service level, and the optimal capacity to maximize the expected profit of the whole supply chain. The results of numerical experiments show that it is more profitable to determine the optimal price, the optimal guaranteed service time, and the optimal service level simultaneously, and that the proposed model is more profitable in a service-level-sensitive market.
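The joint decision can be sketched with a toy model: demand falls with price and guaranteed lead time and rises with the promised service level, while an M/M/1 make-to-order supplier needs capacity mu = lambda + (−ln(1−s))/T to guarantee P(sojourn ≤ T) ≥ s. All coefficients are invented, and a brute-force grid search stands in for the paper's analytical optimization:

```python
import numpy as np

# Toy joint pricing / lead-time / service-level model for a two-stage chain.
a, bp, bt, bs = 100.0, 2.0, 5.0, 20.0   # demand: lam = a - bp*p - bt*T + bs*s
c_unit, c_cap = 10.0, 1.5               # unit production cost, capacity cost

best = (-np.inf, None)
for p in np.arange(15.0, 40.0, 0.5):          # retail price
    for T in np.arange(0.2, 3.0, 0.1):        # guaranteed service time
        for s in np.arange(0.80, 0.99, 0.01): # service level
            lam = a - bp * p - bt * T + bs * s
            if lam <= 0:
                continue
            # M/M/1: P(sojourn <= T) = 1 - exp(-(mu - lam) * T) >= s
            mu = lam - np.log(1.0 - s) / T    # minimum required capacity
            profit = (p - c_unit) * lam - c_cap * mu
            if profit > best[0]:
                best = (profit, (round(p, 2), round(T, 2), round(s, 2)))

print("best profit %.1f at (price, time, level) = %s" % best)
```

Optimizing the three decisions jointly, as the grid does, dominates fixing any of them in advance, which mirrors the abstract's conclusion that simultaneous determination is more profitable.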
Optimal design and use of retry in fault tolerant real-time computer systems
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations to minimize the total task completion time, was derived. The combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, was carried out. Two solution approaches were developed, one based on point estimation and the other on the Bayes sequential decision. Maximum likelihood estimators are used in the first approach, and backward induction for testing hypotheses in the second. Numerical examples in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull and gamma distributions, are presented. These are standard distributions commonly used for fault modeling and analysis.
Directory of Open Access Journals (Sweden)
A. Afghan-Toloee
2013-09-01
Full Text Available The problem of specifying the minimum number of sensors to deploy in a certain area to cover multiple targets has been widely studied in the literature. In this paper, we address the multi-sensor deployment problem (MDP). The multi-sensor placement problem can be formulated as minimizing the cost required to cover the multiple target points in the area. We propose a more feasible method for the multi-sensor placement problem. Our method provides the high coverage of grid-based placements while minimizing the cost, as found in perimeter placement techniques. The NICA algorithm, an improved ICA (Imperialist Competitive Algorithm), is used to decrease the computation time needed to find a satisfactory solution compared to other meta-heuristic schemes such as GA, PSO and ICA. A three-dimensional area is used to represent the multiple target and placement points, taking x, y, and z coordinates into account in the observation algorithm. A model for the multi-sensor placement problem is proposed: the problem is constructed as an optimization problem with the objective of minimizing the cost while covering all multiple target points for a given probability of observation tolerance.
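The underlying covering formulation can be illustrated with a greedy weighted set-cover baseline on invented 3-D data; the paper itself uses the NICA meta-heuristic rather than this greedy rule, and ignores the probabilistic observation tolerance here:

```python
import math

# Greedy baseline for the multi-sensor placement problem: repeatedly pick
# the candidate position with the best cost per newly covered target until
# all target points are covered (toy 3-D instance, invented coordinates).
targets = [(0, 0, 0), (4, 0, 1), (0, 5, 0), (5, 5, 2), (2, 2, 1)]
candidates = {  # candidate position -> deployment cost
    (1, 1, 0): 3.0, (4, 4, 1): 4.0, (5, 0, 1): 2.5, (0, 4, 0): 2.0,
}
radius = 3.0

def covered(sensor, target):
    return math.dist(sensor, target) <= radius

chosen, uncovered = [], set(targets)
while uncovered:
    def score(s):  # cost-effectiveness: cost per newly covered target
        new = sum(1 for t in uncovered if covered(s, t))
        return candidates[s] / new if new else math.inf
    s = min(candidates, key=score)
    if score(s) == math.inf:
        break                      # some targets are unreachable
    chosen.append(s)
    uncovered -= {t for t in uncovered if covered(s, t)}

print("chosen sensors:", chosen, "uncovered:", uncovered)
```

Greedy set cover has a known logarithmic approximation guarantee but no global optimality, which is why meta-heuristics such as NICA, GA, or PSO are attractive for larger instances.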
Real-Time Optimization and Control of Next-Generation Distribution
Grid Modernization | NREL. Developing a system-theoretic distribution network management framework that unifies real-time voltage and …
Optimization of cutting parameters for machining time in turning process
Mavliutov, A. R.; Zlotnikov, E. G.
2018-03-01
This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the Linear Programming Method with the dual-simplex algorithm, the Interior Point method, and the Augmented Lagrangian Genetic Algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in a turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from the application of these methods show that the optimal values of the linearized objective and the original function are the same. ALGA gives sufficiently accurate values; however, when the algorithm uses the hybrid function with the Interior Point algorithm, the resulting values have the maximal accuracy.
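A minimal sketch of such a constrained formulation, assuming the standard turning-time model T = πDL/(1000·v·f) and a hypothetical cutting-power limit; here it is solved with a crude exterior-penalty grid search rather than the MATLAB solvers used in the paper, and all numeric values are illustrative:

```python
import math

D, L = 50.0, 120.0     # workpiece diameter and length in mm (assumed)
K_P = 0.02             # power-model coefficient, kW per (m/min * mm/rev) (assumed)
P_MAX = 2.0            # machine power limit in kW (assumed)

def machining_time(v, f):
    # main machining time in turning: T = pi*D*L / (1000*v*f)  [min]
    return math.pi * D * L / (1000.0 * v * f)

def penalized(params, mu=1e3):
    # exterior quadratic penalty enforcing the power constraint K_P*v*f <= P_MAX
    v, f = params
    violation = max(0.0, K_P * v * f - P_MAX)
    return machining_time(v, f) + mu * violation ** 2

# crude grid search over cutting speed v [m/min] and feed f [mm/rev]
grid = [(50.0 + 2.0 * i, 0.05 + 0.01 * j) for i in range(101) for j in range(46)]
v_opt, f_opt = min(grid, key=penalized)
```

Because time decreases in both v and f, the optimum lands on the power-constraint boundary, which is exactly the behavior interior-point and augmented-Lagrangian solvers handle more efficiently than a grid.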
Optimality with feedback control in relativistic dynamics of a mass point. Part 1
International Nuclear Information System (INIS)
Blaquiere, A.; Pauchard, M.; Tahri-Yousfi, N.; Wickers, D.
1984-01-01
This article is an account of part of a research task currently in progress; it deals with the relativistic dynamics of a mass point from the point of view of the theory of optimal feedback control. In the first part, the theoretical framework is presented, with an application to the case of special relativity. This application shows that the approach followed in this article is a natural one for approaching wave mechanics, and that it closely parallels the way in which Louis de Broglie introduced wave mechanics.
Cooperative scattering of scalar waves by optimized configurations of point scatterers
Schäfer, Frank; Eckert, Felix; Wellens, Thomas
2017-12-01
We investigate multiple scattering of scalar waves by an ensemble of N resonant point scatterers in three dimensions. For up to N = 21 scatterers, we numerically optimize the positions of the individual scatterers, to maximize the total scattering cross section for an incoming plane wave, on the one hand, and to minimize the decay rate associated to a long-lived scattering resonance, on the other. In both cases, the optimum is achieved by configurations where all scatterers are placed on a line parallel to the direction of the incoming plane wave. The associated maximal scattering cross section increases quadratically with the number of scatterers for large N, whereas the minimal decay rate—which is realized by configurations that are not the same as those that maximize the scattering cross section—decreases exponentially as a function of N. Finally, we also analyze the stability of our optimized configurations with respect to small random displacements of the scatterers. These results demonstrate that optimized configurations of scatterers bear a considerable potential for applications such as quantum memories or mirrors consisting of only a few atoms.
Directory of Open Access Journals (Sweden)
R.S. Khakimov
2014-02-01
Full Text Available Historical studies are based on the assumption that there is a reference starting point of space-time: the zero point of the coordinate system. Due to bifurcation at the zero point, the course of social processes changes sharply and probabilistic causality replaces deterministic causality. For this reason, changes occur in the structure of social relations and in the form of statehood, as well as in the course of ethnic processes; in such a way a new discourse of national behavior emerges. With regard to the history of the Tatars and Tatarstan, such bifurcation points occurred in the periods of formation: (1) of the Turkic Khaganate, which existed from the 6th century onward and became a qualitatively new state system that reformatted old elements in a new matrix, introducing a new discourse of behavior; (2) of Volga-Kama Bulgaria, where the rivers (Kama, Volga, Vyatka) became the most important trade routes determining the singularity of this state; here the nomadic culture was connected with the settled one, and Islam became the official religion in 922; (3) and of the Golden Horde, a powerful state with a remarkable system of communication, migration of huge human resources over thousands of kilometers, and extensive trade, which caused severe “mutations” in ethnic terms and a huge mixing of ethnic groups. Given the dwelling space of the Tatar population and its evolution within Russia, it can be argued that the zero point of Tatar history, which has conveyed its cultural invariants until today, begins in the Golden Horde: neither in the Turkic Khaganate nor in the Bulgar state, but namely in the Golden Horde. Despite radical changes, the Russian Empire failed to transform the Tatars into Russians. Therefore, contemporary Tatars have preserved the Golden Horde tradition as a cultural invariant.
An efficient global energy optimization approach for robust 3D plane segmentation of point clouds
Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian
2018-03-01
Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate an initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performance of the proposed method is evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method achieves good performance both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)
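The primitive at the heart of such region growing can be illustrated by a least-squares plane fit z = ax + by + c, solved from the 3×3 normal equations; this is only a sketch of the planarity test, with the paper's supervoxel and energy-optimization machinery omitted:

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c from the 3x3 normal equations."""
    n = len(points)
    sx = sum(x for x, _, _ in points); sy = sum(y for _, y, _ in points)
    sz = sum(z for _, _, z in points)
    sxx = sum(x * x for x, _, _ in points); sxy = sum(x * y for x, y, _ in points)
    syy = sum(y * y for _, y, _ in points)
    sxz = sum(x * z for x, _, z in points); syz = sum(y * z for _, y, z in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    sol = []
    for j in range(3):            # Cramer's rule, one column at a time
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = rhs[i]
        sol.append(det3(m) / d)
    return sol                    # [a, b, c]

pts = [(0, 0, 1), (1, 0, 3), (0, 1, 2), (1, 1, 4)]   # all on z = 2x + y + 1
a, b, c = fit_plane(pts)
```

In a full pipeline the residual of each candidate point against the fitted plane decides whether the region keeps growing; near-vertical planes need a normal-based (eigenvector) fit instead of this explicit z-form.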
International Nuclear Information System (INIS)
Zhang, Li; Wang, Yinzhong; Lei, Junqiang; Tian, Jinhui; Zhai, Yanan
2013-01-01
Background: Lung cancer is one of the most common cancer types in the world. An accurate diagnosis of lung cancer is crucial for early treatment and management. Purpose: To perform a comprehensive meta-analysis to evaluate the diagnostic performance of dual time point 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) and single time point 18FDG-PET/CT in the diagnosis of pulmonary nodules. Material and Methods: PubMed (1966-2011.11), EMBASE (1974-2011.11), Web of Science (1972-2011.11), Cochrane Library (-2011.11), and four Chinese databases; CBM (1978-2011.11), CNKI (1994-2011.11), VIP (1989-2011.11), and Wanfang Database (1994-2011.11) were searched. Summary sensitivity, summary specificity, summary diagnostic odds ratios (DOR), and summary positive likelihood ratios (LR+) and negative likelihood ratios (LR-) were obtained using Meta-Disc software. Summary receiver-operating characteristic (SROC) curves were used to evaluate the diagnostic performance of dual time point 18FDG-PET/CT and single time point 18FDG-PET/CT. Results: The inclusion criteria were fulfilled by eight articles, with a total of 415 patients and 430 pulmonary nodules. Compared with the gold standard (pathology or clinical follow-up), the summary sensitivity of dual time point 18FDG-PET/CT was 79% (95%CI, 74.0-84.0%), and its summary specificity was 73% (95%CI, 65.0-79.0%); the summary LR+ was 2.61 (95%CI, 1.96-3.47), and the summary LR- was 0.29 (95%CI, 0.21-0.41); the summary DOR was 10.25 (95%CI, 5.79-18.14), and the area under the SROC curve (AUC) was 0.8244. The summary sensitivity for single time point 18FDG-PET/CT was 77% (95%CI, 71.9-82.3%), and its summary specificity was 59% (95%CI, 50.6-66.2%); the summary LR+ was 1.97 (95%CI, 1.32-2.93), and the summary LR- was 0.37 (95%CI, 0.29-0.49); the summary DOR was 6.39 (95%CI, 3.39-12.05), and the AUC was 0.8220. Conclusion: The results indicate that dual time point 18FDG-PET/CT and single
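For reference, the reported statistics relate to sensitivity and specificity as LR+ = sens/(1−spec), LR− = (1−sens)/spec, and DOR = LR+/LR−. A minimal sketch follows; note that pooled meta-analytic LRs are computed per study and then summarized, so they need not equal the ratios of the summary sensitivity and specificity quoted above:

```python
def diagnostic_summary(sens, spec):
    """Likelihood ratios and diagnostic odds ratio from sensitivity/specificity."""
    lr_pos = sens / (1.0 - spec)       # LR+ = sens / (1 - spec)
    lr_neg = (1.0 - sens) / spec       # LR- = (1 - sens) / spec
    return lr_pos, lr_neg, lr_pos / lr_neg

# dual-time-point summary estimates from the abstract (sens 79%, spec 73%)
lr_pos, lr_neg, dor = diagnostic_summary(0.79, 0.73)
```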
Optimal transformations for categorical autoregressive time series
Buuren, S. van
1996-01-01
This paper describes a method for finding optimal transformations for analyzing time series by autoregressive models. 'Optimal' implies that the agreement between the autoregressive model and the transformed data is maximal. Such transformations help 1) to increase the model fit, and 2) to analyze
International Nuclear Information System (INIS)
Holmberg, J.
1997-04-01
The thesis models risk management as an optimal control problem for a stochastic process. The approach classes the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant
Optimal Frequency Ranges for Sub-Microsecond Precision Pulsar Timing
Lam, Michael Timothy; McLaughlin, Maura; Cordes, James; Chatterjee, Shami; Lazio, Joseph
2018-01-01
Precision pulsar timing requires optimization against measurement errors and astrophysical variance from the neutron stars themselves and the interstellar medium. We investigate optimization of arrival time precision as a function of radio frequency and bandwidth. We find that increases in bandwidth that reduce the contribution from receiver noise are countered by the strong chromatic dependence of interstellar effects and intrinsic pulse-profile evolution. The resulting optimal frequency range is therefore telescope and pulsar dependent. We demonstrate the results for five pulsars included in current pulsar timing arrays and determine that they are not optimally observed at current center frequencies. We also find that arrival-time precision can be improved by increases in total bandwidth. Wideband receivers centered at high frequencies can reduce required overall integration times and provide significant improvements in arrival time uncertainty by a factor of approximately √2 in most cases, assuming a fixed integration time. We also discuss how timing programs can be extended to pulsars with larger dispersion measures through the use of higher-frequency observations.
The Optimal Time of Renovating a Mall
K.C. Wong; George Norman
1994-01-01
This paper presents a maximization model determining the optimal time at which a mall should be renovated. The analysis is constructed on the assumption of a decreasing rental income over time as a mall ages. It is then shown that the optimal renovation period achieves a balance between the marginal cost and benefits of delaying renovation. We show how this balance is affected by changes in the discount rate, net rental incomes, and renovation costs. Numerical simulations are used to demonstr...
Estimation of time-varying reactivity by the H∞ optimal linear filter
International Nuclear Information System (INIS)
Suzuki, Katsuo; Shimazaki, Junya; Watanabe, Koiti
1995-01-01
The problem of estimating the time-varying net reactivity from flux measurements is solved for a point reactor kinetics model using a linear filtering technique in an H∞ setting. In order to use this technique, an appropriate dynamical model of the reactivity is constructed that can be embedded into the reactor model as one of its variables. A filter, which minimizes the H∞ norm of the estimation error power spectrum, operates on neutron density measurements corrupted by noise and provides an estimate of the dynamic net reactivity. Computer simulations are performed to reveal the basic characteristics of the H∞ optimal filter. The results of the simulation indicate that the filter can be used to determine the time-varying reactivity from neutron density measurements that have been corrupted by noise.
Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization
Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.
2009-01-01
We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step, which targets the μ+-center of the next pair of perturbed problems. As for the centering steps, we apply a sharper quadratic convergence result, which leads to a slightly wider neighborhood for th...
Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders
International Nuclear Information System (INIS)
Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.; Shwetha, B.; Arunkumar, T.; Sathiyan, S.; Ganesh, K.M.; Ravikumar, M.
2009-01-01
Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experience with LDR techniques can be used to guide the optimal use of HDR techniques. The aim of this study was to optimize dose distribution for Gammamed plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on optimized dose distributions. Two different dose optimization point models were used in this study, namely non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature, including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length; thus, 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameters and were chosen for each cylinder so that four points were distributed evenly in the curvature portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. An iterative optimization routine was utilized for all optimizations. The effects of various optimization routines (iterative, geometric, equal times) were studied for the 3.0-cm diameter vaginal cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points. For both non-apex and apex models of vaginal cylinders, doses for apex point and three dome
Real-time trajectory optimization on parallel processors
Psiaki, Mark L.
1993-01-01
A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm that is suitable to do real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on 3 example problems, the Goddard problem, the acceleration-limited, planar minimum-time to the origin problem, and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32-nodes instead of 1-node to solve a 64-stage Goddard problem.
International Nuclear Information System (INIS)
He Dakuo; Dong Gang; Wang Fuli; Mao Zhizhong
2011-01-01
A chaotic sequence based differential evolution (DE) approach for solving the dynamic economic dispatch problem (DEDP) with valve-point effects is presented in this paper. The proposed method combines the DE algorithm with a local search technique to improve its performance. DE is the main optimizer, while an approximated model for local search is applied to fine-tune the solution of the DE run. To accelerate convergence of DE, a series of constraint-handling rules is adopted. An initial population obtained using a chaotic sequence improves the performance of the proposed algorithm. The combined algorithm is validated on two test systems consisting of 10 and 13 thermal units whose incremental fuel cost functions take into account the valve-point loading effects. The proposed combined method outperforms other algorithms reported in the literature for the DEDP considering valve-point effects.
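The chaotic initialization can be sketched with a logistic map, the sequence most commonly used for this purpose; the population size, unit count, and generation limits below are illustrative, not taken from the paper:

```python
def chaotic_population(pop_size, bounds, x0=0.7):
    """Initial DE population driven by the logistic map x <- 4*x*(1 - x)."""
    x, population = x0, []
    for _ in range(pop_size):
        individual = []
        for lo, hi in bounds:
            x = 4.0 * x * (1.0 - x)          # chaotic iterate in (0, 1)
            individual.append(lo + (hi - lo) * x)
        population.append(individual)
    return population

# e.g. 20 candidate dispatch vectors for a 10-unit system, 50-300 MW per unit (assumed)
population = chaotic_population(20, [(50.0, 300.0)] * 10)
```

The ergodicity of the chaotic iterates spreads the initial candidates over the feasible box more systematically than pseudo-random draws, which is the claimed benefit for DE convergence.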
Dew Point modelling using GEP based multi objective optimization
Shroff, Siddharth; Dabhi, Vipul
2013-01-01
Different techniques are used to model the relationship between temperature, dew point, and relative humidity. Gene expression programming (GEP) is capable of modelling complex realities with great accuracy, allowing, at the same time, the extraction of knowledge from the evolved models, compared to other learning algorithms. We aim to use Gene Expression Programming for modelling the dew point. Generally, accuracy of the model is the only objective used by the selection mechanism of GEP. This will evolve...
International Nuclear Information System (INIS)
Li, Xuewei; Kong, Li; Cheng, Jingjing; Wu, Lei
2015-01-01
The multi-exponential inversion of a NMR relaxation signal plays a key role in core analysis and logging interpretation of porous media formations. To find an efficient method of rapidly inverting high-resolution relaxation time spectra, this paper studies the effect of inversion based on the discretization of the original echo train in the time domain, using a simulation model. The paper analyzes the ill-conditioning of the discrete equations on the basis of the NMR inversion model and method, determines the appropriate number of discrete echoes, and acquires the optimal distribution of discrete echo points by the Lloyd–Max optimal quantization method, considering inversion precision and computational complexity comprehensively. The result shows that this method can effectively improve the efficiency of relaxation time spectrum inversion while guaranteeing inversion accuracy. (paper)
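The Lloyd–Max step can be sketched as the classic alternation between midpoint-boundary and centroid updates in one dimension; here it is applied to echo times drawn from an assumed exponential decay envelope (the time constant and sample counts are illustrative):

```python
import math

def lloyd_max(samples, levels, iters=100):
    """1-D Lloyd-Max quantizer: alternate midpoint-boundary and centroid updates."""
    samples = sorted(samples)
    step = (len(samples) - 1) / (levels - 1)
    reps = [samples[int(i * step)] for i in range(levels)]
    for _ in range(iters):
        bounds = [(reps[i] + reps[i + 1]) / 2.0 for i in range(levels - 1)]
        cells = [[] for _ in range(levels)]
        for s in samples:
            cells[sum(s > b for b in bounds)].append(s)   # nearest representative
        reps = [sum(c) / len(c) if c else reps[i] for i, c in enumerate(cells)]
    return reps

# echo times drawn from an exponential decay envelope (assumed time constant of 2)
times = [-2.0 * math.log(1.0 - (k + 0.5) / 4000.0) for k in range(4000)]
points = lloyd_max(times, levels=8)
```

Because the assumed density is densest at early times, the optimized points cluster there, mirroring the non-uniform echo spacing the paper derives.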
DEFF Research Database (Denmark)
Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum
2015-01-01
This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminating neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing…
Sampled-data and discrete-time H2 optimal control
Trentelman, Harry L.; Stoorvogel, Anton A.
1993-01-01
This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This
Time Eigenstates for Potential Functions without Extremal Points
Directory of Open Access Journals (Sweden)
Gabino Torres-Vega
2013-09-01
Full Text Available In a previous paper, we introduced a way to generate a time coordinate system for classical and quantum systems when the potential function has extremal points. In this paper, we deal with the case in which the potential function has no extremal points at all, and we illustrate the method with the harmonic and linear potentials.
How to decide the optimal scheme and the optimal time for construction
International Nuclear Information System (INIS)
Gjermundsen, T.; Dalsnes, B.; Jensen, T.
1991-01-01
Since hydropower development in Norway began some 105 years ago, the mean annual generation has reached approximately 110 TWh. This means that there is a large potential for uprating and refurbishing (U/R). A project undertaken by the Norwegian Water Resources and Energy Administration (NVE) has identified energy resources by means of U/R amounting to about 10 TWh of annual generation. One problem in harnessing the potential owned by small and medium-sized electricity boards is the lack of simple tools to support the right decisions. The paper describes a simple model to find the best scheme and the optimal time to start. The principle of present value is used. The main inputs are: production, price, annual maintenance costs, the remaining lifetime, and the social rate of return. The model calculates the present value of U/R/N for different starting times. In addition, the present value of the existing plant is calculated. Several alternatives can be considered; the best one is the one which gives the highest present value relative to the value of the existing plant. The internal rate of return is also calculated. To illustrate the sensitivity, a star diagram is shown. The model also gives the opportunity to include environmental charges and the value of effect (peak power). (Author)
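The present-value comparison can be sketched as follows; all figures are illustrative, and the model simply discounts the net annual income stream for each candidate start year and picks the largest present value:

```python
def present_value(start_year, annual_net, invest, lifetime=40, rate=0.07):
    """PV of an upgrade started at start_year: pay invest, then collect annual_net."""
    pv = -invest / (1.0 + rate) ** start_year
    for t in range(start_year + 1, start_year + lifetime + 1):
        pv += annual_net / (1.0 + rate) ** t
    return pv

# annual_net = production * price - maintenance costs (illustrative figures)
candidates = {year: present_value(year, annual_net=12.0, invest=100.0)
              for year in range(0, 21, 5)}
best_start = max(candidates, key=candidates.get)
```

With a constant positive net value, the earliest start wins here; a declining value of the existing plant or rising prices can shift the optimum later, which is the trade-off the model in the paper evaluates.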
Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N
2000-05-01
We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or the variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than does D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
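The stability claim, that D(mean) and the variance are more reliable summaries than D(min) or D(max) under random sampling, can be illustrated with a toy Monte Carlo experiment; the inverse-square dose model and all geometry below are assumptions, not the clinical implants of the study:

```python
import random

random.seed(3)

def dose(x, y, z):
    # toy dose model: inverse-square falloff from a point source at the origin
    return 1.0 / (x * x + y * y + z * z)

def sample_stats(n_points):
    """D_mean and D_max from n uniformly distributed sampling points in a box."""
    doses = [dose(random.uniform(0.2, 1.0), random.uniform(0.2, 1.0),
                  random.uniform(0.2, 1.0)) for _ in range(n_points)]
    return sum(doses) / len(doses), max(doses)

# repeat the estimation to compare run-to-run spread of D_mean versus D_max
means, maxima = zip(*[sample_stats(200) for _ in range(50)])
spread = lambda values: max(values) - min(values)
```

The extreme statistic D_max fluctuates far more between sampling runs than the mean does, which is the reason the authors advise against D(min)/D(max) for specification and optimization.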
Discrete-time inverse optimal control for nonlinear systems
Sanchez, Edgar N
2013-01-01
Discrete-Time Inverse Optimal Control for Nonlinear Systems proposes a novel inverse optimal control scheme for stabilization and trajectory tracking of discrete-time nonlinear systems. This avoids the need to solve the associated Hamilton-Jacobi-Bellman equation and minimizes a cost functional, resulting in a more efficient controller. Design More Efficient Controllers for Stabilization and Trajectory Tracking of Discrete-Time Nonlinear Systems The book presents two approaches for controller synthesis: the first based on passivity theory and the second on a control Lyapunov function (CLF). Th
Time is on my side: optimism in intertemporal choice
Berndsen, M.; van der Pligt, J.
2001-01-01
The present research, using data from 163 undergraduates, examines the role of optimism in time preferences for both losses and gains. It is argued that optimism has asymmetric effects on time preferences for gains versus losses: one reason why decision makers prefer immediate gains is because they
Energy Technology Data Exchange (ETDEWEB)
Holmberg, J [VTT Automation, Espoo (Finland)
1997-04-01
The thesis models risk management as an optimal control problem for a stochastic process. The approach classes the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant. 62 refs. The thesis includes also five previous publications by author.
Optimized knock-in of point mutations in zebrafish using CRISPR/Cas9.
Prykhozhij, Sergey V; Fuller, Charlotte; Steele, Shelby L; Veinotte, Chansey J; Razaghi, Babak; Robitaille, Johane M; McMaster, Christopher R; Shlien, Adam; Malkin, David; Berman, Jason N
2018-06-14
We have optimized point mutation knock-ins into zebrafish genomic sites using clustered regularly interspaced palindromic repeats (CRISPR)/Cas9 reagents and single-stranded oligodeoxynucleotides. The efficiency of knock-ins was assessed by a novel application of allele-specific polymerase chain reaction and confirmed by high-throughput sequencing. Anti-sense asymmetric oligo design was found to be the most successful optimization strategy. However, cut site proximity to the mutation and phosphorothioate oligo modifications also greatly improved knock-in efficiency. A previously unrecognized risk of off-target trans knock-ins was identified that we obviated through the development of a workflow for correct knock-in detection. Together these strategies greatly facilitate the study of human genetic diseases in zebrafish, with additional applicability to enhance CRISPR-based approaches in other animal model systems.
Directory of Open Access Journals (Sweden)
Islam S.M. Khalil
2016-06-01
Full Text Available Targeted therapy using magnetic microparticles and nanoparticles has the potential to mitigate the negative side-effects associated with conventional medical treatment. Major technological challenges still need to be addressed in order to translate these particles into in vivo applications. For example, magnetic particles need to be navigated controllably in vessels against flowing streams of body fluid. This paper describes the motion control of paramagnetic microparticles in the flowing streams of fluidic channels with time-varying flow rates (maximum flow is 35 ml.hr−1). This control is designed using a magnetic-based proportional-derivative (PD) control system to compensate for the time-varying flow inside the channels (with width and depth of 2 mm and 1.5 mm, respectively). First, we achieve point-to-point motion control against and along flow rates of 4 ml.hr−1, 6 ml.hr−1, 17 ml.hr−1, and 35 ml.hr−1. The average speeds of a single microparticle (with an average diameter of 100 μm) against flow rates of 6 ml.hr−1 and 30 ml.hr−1 are calculated to be 45 μm.s−1 and 15 μm.s−1, respectively. Second, we implement PD control with disturbance estimation and compensation. This control decreases the steady-state error by 50%, 70%, 73%, and 78% at flow rates of 4 ml.hr−1, 6 ml.hr−1, 17 ml.hr−1, and 35 ml.hr−1, respectively. Finally, we consider the problem of finding the optimal path (minimal kinetic energy) between two points, against the mentioned flow rates, using the calculus of variations. Not only do we find that an optimal path between two collinear points along the direction of maximum flow (middle of the fluidic channel) decreases the rise time of the microparticles, but we also decrease the input current supplied to the electromagnetic coils by minimizing the kinetic energy of the microparticles, compared to a PD control with disturbance compensation.
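The PD law with disturbance compensation can be sketched on an assumed first-order (overdamped) model of the particle, where a low-pass estimate of the flow-induced drift is subtracted from the PD command; gains, flow value, and the idealized drift measurement are all illustrative assumptions:

```python
def simulate(compensate, kp=2.0, kd=0.2, flow=-0.3, dt=0.01, steps=3000):
    """1-D overdamped microparticle model: velocity = control + flow (assumed)."""
    x, target, prev_err, d_hat = 0.0, 1.0, 1.0, 0.0
    for _ in range(steps):
        err = target - x
        u = kp * err + kd * (err - prev_err) / dt      # PD command
        if compensate:
            u -= d_hat                                 # cancel the estimated drift
        v = u + flow                                   # actual velocity with drift
        x += v * dt
        # (v - u) stands in for measured-minus-commanded velocity; a low-pass
        # filter of this residual serves as the disturbance estimate
        d_hat += 0.5 * ((v - u) - d_hat) * dt
        prev_err = err
    return target - x                                  # steady-state error

err_plain = simulate(compensate=False)
err_comp = simulate(compensate=True)
```

Without compensation the constant drift leaves a proportional-gain-limited offset; with the drift estimate folded in, the steady-state error collapses, mirroring the reported error reductions.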
Discount-Optimal Infinite Runs in Priced Timed Automata
DEFF Research Database (Denmark)
Fahrenberg, Uli; Larsen, Kim Guldstrand
2009-01-01
We introduce a new discounting semantics for priced timed automata. Discounting provides a way to model optimal-cost problems for infinite traces and has applications in optimal scheduling and other areas. In the discounting semantics, prices decrease exponentially, so that the contribution...
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time-dependent sources, time-independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, or a multi-processor computer.
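A minimal sketch of several agents seeking a maximization point of a sensed field by numerical gradient climbing; the field, start points, and step size are assumed, and the cooperative-control coupling between agents described in the abstract is omitted:

```python
def field(x, y):
    # toy scalar field (e.g. a chemical concentration) peaking at (3, -2) (assumed)
    return -((x - 3.0) ** 2 + (y + 2.0) ** 2)

def climb(p, h=1e-3, step=0.1):
    """One gradient-ascent step using a finite-difference estimate of the field."""
    x, y = p
    gx = (field(x + h, y) - field(x - h, y)) / (2.0 * h)
    gy = (field(x, y + h) - field(x, y - h)) / (2.0 * h)
    return (x + step * gx, y + step * gy)

# several agents start from different points and independently climb the field
agents = [(0.0, 0.0), (6.0, 1.0), (-2.0, -5.0)]
for _ in range(200):
    agents = [climb(p) for p in agents]
best = max(agents, key=lambda p: field(*p))
```

A cooperative variant would add a coupling term pulling agents toward the best-sensing neighbor, which helps on multimodal fields where independent climbers stall at local optima.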
International Nuclear Information System (INIS)
Kainz, K; Prah, D; Ahunbay, E; Li, X
2014-01-01
Purpose: A novel modulated arc therapy technique, mARC, enables superposition of step-and-shoot IMRT segments upon a subset of the optimization points (OPs) of a continuous-arc delivery. We compare two approaches to mARC planning: one with the number of OPs fixed throughout optimization, and another where the planning system determines the number of OPs in the final plan, subject to an upper limit defined at the outset. Methods: Fixed-OP mARC planning was performed for representative cases using Panther v. 5.01 (Prowess, Inc.), while variable-OP mARC planning used Monaco v. 5.00 (Elekta, Inc.). All Monaco planning used an upper limit of 91 OPs; those OPs with minimal MU were removed during optimization. Plans were delivered, and delivery times recorded, on a Siemens Artiste accelerator using a flat 6MV beam with 300 MU/min rate. Dose distributions measured using ArcCheck (Sun Nuclear Corporation, Inc.) were compared with the plan calculation; the two were deemed consistent if they agreed to within 3.5% in absolute dose and 3.5 mm in distance-to-agreement among > 95% of the diodes within the direct beam. Results: Example cases included a prostate and a head-and-neck planned with a single arc and fraction doses of 1.8 and 2.0 Gy, respectively. Aside from slightly more uniform target dose for the variable-OP plans, the DVHs for the two techniques were similar. For the fixed-OP technique, the number of OPs was 38 and 39, and the delivery time was 228 and 259 seconds, respectively, for the prostate and head-and-neck cases. For the final variable-OP plans, there were 91 and 85 OPs, and the delivery time was 296 and 440 seconds, correspondingly longer than for fixed-OP. Conclusion: For mARC, both the fixed-OP and variable-OP approaches produced comparable-quality plans whose delivery was successfully verified. To keep delivery time per fraction short, a fixed-OP planning approach is preferred
WE-B-304-00: Point/Counterpoint: Biological Dose Optimization
International Nuclear Information System (INIS)
2015-01-01
The ultimate goal of radiotherapy treatment planning is to find a treatment that will yield a high tumor control probability (TCP) with an acceptable normal tissue complication probability (NTCP). Yet most treatment planning today is based not upon optimization of TCPs and NTCPs, but upon meeting physical dose and volume constraints defined by the planner. It has been suggested that treatment planning evaluation and optimization would be more effective if they were biologically rather than dose/volume based, and this is the claim debated in this month's Point/Counterpoint. After a brief overview of biologically and DVH based treatment planning by the Moderator, Colin Orton, Joseph Deasy (for biological planning) and Charles Mayo (against biological planning) will begin the debate. Arguments in support of biological planning include: (1) it will result in more effective dose distributions for many patients; (2) DVH-based measures of plan quality are known to have little predictive value; (3) there is little evidence that either D95 or D98 of the PTV is a good predictor of tumor control; and (4) sufficient validated outcome prediction models are now becoming available and should be used to drive planning and optimization. Arguments against biological planning include: (1) several decades of experience with DVH-based planning should not be discarded; (2) we do not know enough about the reliability and errors associated with biological models; (3) the radiotherapy community in general has little direct experience with side-by-side comparisons of DVH versus biological metrics and outcomes; and (4) it is unlikely that a clinician would accept extremely cold regions in a CTV or hot regions in a PTV, despite their having acceptable TCP values. Learning Objectives: (1) to understand dose/volume based treatment planning and its potential limitations; (2) to understand biological metrics such as EUD, TCP, and NTCP; (3) to understand biologically based treatment planning and its potential limitations.
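Of the biological metrics named in the learning objectives, the generalized equivalent uniform dose (gEUD) is the simplest to compute. A minimal sketch follows; the dose bins, volume fractions, and exponent values are hypothetical illustration data, not clinical values from the debate:

```python
# gEUD = (sum_i v_i * d_i^a)^(1/a), computed from a differential DVH.
# All numbers below are hypothetical illustration data.

def geud(doses, volumes, a):
    """Generalized EUD; volumes are normalized to fractional volumes."""
    total = sum(volumes)
    return sum((v / total) * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

# Hypothetical differential DVH: three dose bins (Gy) with fractional volumes.
doses = [60.0, 70.0, 74.0]
vols = [0.1, 0.6, 0.3]

mean_dose = geud(doses, vols, 1)     # a = 1 reduces to the mean dose
tumor_like = geud(doses, vols, -10)  # a < 0: cold spots dominate (tumor-like)
serial_like = geud(doses, vols, 10)  # a >> 1: hot spots dominate (serial organ)
print(mean_dose, tumor_like, serial_like)
```

The exponent choice illustrates why cold spots in a CTV matter even when TCP looks acceptable: a strongly negative `a` pulls the gEUD toward the coldest bin.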
Directory of Open Access Journals (Sweden)
Yue You
2017-01-01
Full Text Available A time and covariance threshold triggered optimal maneuver planning method is proposed for orbital rendezvous using angles-only navigation (AON). In the context of the Yamanaka-Ankersen orbital relative motion equations, a square-root unscented Kalman filter (SRUKF) AON algorithm is developed to compute relative state estimates from the observations of a low-volume/mass, power-saving, and low-cost optical/infrared camera. A multi-impulsive Hill guidance law is employed in a closed-loop linear covariance analysis model, based on which quantitative relative position and relative velocity robustness indices are defined. By balancing fuel consumption, relative position robustness, and relative velocity robustness, we developed a time and covariance threshold triggered two-level optimal maneuver planning method, showing how these results correlate to past methods and missions and how they could potentially influence future ones. Numerical simulation showed that it is feasible to control the spacecraft from a two-line-element (TLE) level uncertain initial relative state (34.6% of range) to a 100 m v-bar relative station-keeping point, where the trajectory dispersion is reduced to 3.5% of range, under a 30% data gap per revolution on account of eclipse. Compared with the traditional time-triggered maneuver planning method, the final relative position accuracy is improved by an order of magnitude, and the relative trajectory robustness and collision probability are markedly improved and reduced, respectively.
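The SRUKF in the abstract is built on the unscented transform: propagate a small set of sigma points through the nonlinearity instead of linearizing. A scalar sketch of that underlying idea (not the paper's square-root filter, and with arbitrary example numbers):

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a scalar Gaussian (mean, var) through f via Julier's
    sigma points for state dimension n = 1."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigma = [mean, mean + spread, mean - spread]
    w = [kappa / (n + kappa), 1.0 / (2 * (n + kappa)), 1.0 / (2 * (n + kappa))]
    y = [f(x) for x in sigma]
    y_mean = sum(wi * yi for wi, yi in zip(w, y))
    y_var = sum(wi * (yi - y_mean) ** 2 for wi, yi in zip(w, y))
    return y_mean, y_var

# For a linear map the transform recovers mean and variance exactly.
m, v = unscented_transform(1.0, 0.25, lambda x: 3.0 * x + 1.0)
print(m, v)
```

For linear dynamics the sigma-point estimate matches the exact moments; the payoff of the transform is that the same machinery gives second-order-accurate moments for nonlinear measurement models such as bearing angles.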
DEFF Research Database (Denmark)
Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede
2013-01-01
This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwell...
DEFF Research Database (Denmark)
Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars
2013-01-01
The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the ...
Optimal Real-time Dispatch for Integrated Energy Systems
Energy Technology Data Exchange (ETDEWEB)
Firestone, Ryan Michael [Univ. of California, Berkeley, CA (United States)
2007-05-31
This report describes the development and application of a dispatch optimization algorithm for integrated energy systems (IES) composed of on-site cogeneration of heat and electricity, energy storage devices, and demand response opportunities. This work is intended to aid commercial and industrial sites in making use of modern computing power and optimization algorithms to make informed, near-optimal decisions under significant uncertainty and complex objective functions. The optimization algorithm uses a finite set of randomly generated future scenarios to approximate the true, stochastic future; constraints are included that prevent solutions to this approximate problem from deviating from solutions to the actual problem. The algorithm is then expressed as a mixed integer linear program, to which a powerful commercial solver is applied. A case study of United States Postal Service Processing and Distribution Centers (P&DC) in four cities and under three different electricity tariff structures is conducted to (1) determine the added value of optimal control to a cogeneration system over current, heuristic control strategies; (2) determine the value of limited electric load curtailment opportunities, with and without cogeneration; and (3) determine the trade-off between least-cost and least-carbon operations of a cogeneration system. Key results for the P&DC sites studied include (1) in locations where the average electricity and natural gas prices suggest a marginally profitable cogeneration system, optimal control can add up to 67% to the value of the cogeneration system; optimal control adds less value in locations where cogeneration is more clearly profitable; (2) optimal control under real-time pricing is (a) more complicated than under typical time-of-use tariffs and (b) at times necessary to make cogeneration economic at all; (3) limited electric load curtailment opportunities can be more valuable as a complement to the cogeneration system than alone; and
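The scenario-based approximation the report describes is sample average approximation: commit to a decision before the uncertainty resolves, scoring it by its average cost over randomly generated scenarios. A toy single-period sketch (demand, costs, and the price distribution are invented for illustration, not from the P&DC study):

```python
import random

random.seed(0)

# Hypothetical single-period problem: meet 100 kWh of demand from on-site
# generation (0.08 $/kWh, 60 kWh capacity) plus grid purchases at an
# uncertain price, approximated by a finite set of random price scenarios.
DEMAND, GEN_COST, CAPACITY = 100.0, 0.08, 60.0
scenarios = [random.uniform(0.05, 0.20) for _ in range(500)]

def expected_cost(g):
    """Average cost of committing g kWh of generation before prices are known."""
    return sum(GEN_COST * g + p * (DEMAND - g) for p in scenarios) / len(scenarios)

best_g = min(range(0, int(CAPACITY) + 1), key=expected_cost)
print(best_g, round(expected_cost(best_g), 2))
```

Because the sampled mean grid price exceeds the generation cost, the sample-average-optimal decision is to run the generator at full capacity; the real algorithm layers storage, curtailment, and timing constraints on top of this idea inside a MILP.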
Point splitting in a curved space-time background
International Nuclear Information System (INIS)
Liggatt, P.A.J.; Macfarlane, A.J.
1979-01-01
A prescription is given for point splitting in a curved space-time background which is a natural generalization of that familiar in quantum electrodynamics and Yang-Mills theory. It is applied (to establish its validity) to the verification of the gravitational anomaly in the divergence of a fermion axial current. Notable features of the prescription are that it defines a point-split current that can be differentiated straightforwardly, and that it involves a natural way of averaging (four-dimensionally) over the directions of point splitting. The method extends directly from the spin-1/2 fermion case treated here to other cases, e.g., spin-3/2 Rarita-Schwinger fermions. (author)
Anastopoulos, George; Chissas, Dionisios; Dourountakis, Joseph; Ntagiopoulos, Panagiotis G; Magnisalis, Evaggelos; Asimakopoulos, Antonios; Xenakis, Theodore A
2010-03-01
Optimal entry point for antegrade femoral intramedullary nailing (IMN) remains controversial in the current medical literature. The definition of an ideal entry point for femoral IMN would implicate a tensionless introduction of the implant into the canal with anatomical alignment of the bone fragments. This study was undertaken in order to investigate possible relationships between the true 3D geometric parameters of the femur and the location of the optimum entry point. A sample population of 22 cadaveric femurs was used (mean age=51.09+/-14.82 years). Computed tomography sections every 0.5 mm for the entire length of the femurs were produced. These sections were subsequently reconstructed to generate solid computer models of the external anatomy and medullary canal of each femur. Solid models of all femurs were subjected to a series of geometrical manipulations and computations using standard computer-aided design tools. In the sagittal plane, the optimum entry point always lay a few millimeters behind the femoral neck axis (mean=3.5+/-1.5mm). In the coronal plane, the optimum entry point lay at a location dependent on the femoral neck-shaft angle. Linear regression on the data showed that the optimal entry point is clearly correlated to the true 3D femoral neck-shaft angle (R(2)=0.7310) and the projected femoral neck-shaft angle (R(2)=0.6289). Anatomical parameters of the proximal femur, such as varus-valgus angulation, are key factors in the determination of the optimal entry point for nailing. The clinical relevance of the results is that in varus hips, and in hips with a neck-shaft angle between 120 degrees and 130 degrees, the optimal entry point lies just medially to the trochanter tip (at the piriformis fossa) and the use of stiff implants is safe. In hips with a neck-shaft angle over 130 degrees the anatomical axis of the canal is medial to the base of the neck, in a "restricted area". In these cases the entry point should be located at the insertion of the
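The study's key quantitative claim is a linear regression with its R-squared. The same computation can be sketched in a few lines; the angle/offset pairs below are synthetic illustration data, not the cadaveric measurements:

```python
# Ordinary least squares of entry-point offset against neck-shaft angle,
# mirroring the kind of regression (and R^2) reported in the abstract.
# The measurements below are synthetic, not the study's sample.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot  # slope, intercept, R^2

angles = [118, 122, 126, 130, 134, 138]    # neck-shaft angle (degrees)
offsets = [6.1, 4.8, 3.9, 2.7, 1.6, 0.9]   # medial offset of entry point (mm)
slope, intercept, r2 = linear_fit(angles, offsets)
print(slope, r2)
```

A negative slope here matches the abstract's pattern: as the neck-shaft angle increases toward valgus, the optimal entry point moves from the piriformis fossa toward (and past) the trochanter tip.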
DEFF Research Database (Denmark)
Structure from Motion (SFM) systems are composed of cameras and structure in the form of 3D points and other features. Most often, the structure components outnumber the cameras by a great margin; it is not uncommon to have a configuration with 3 cameras observing more than 500 3D points...... an overview of existing triangulation methods with emphasis on performance versus optimality, and will suggest a fast triangulation algorithm based on linear constraints. The structure and camera motion estimation in an SFM system is based on the minimization of some norm of the reprojection error between...
DEFF Research Database (Denmark)
Dollerup, Niels; Jepsen, Michael S.; Frier, Christian
2014-01-01
A robust and effective finite element based implementation of lower bound limit state analysis applying an interior point formulation is presented in this paper. The lower bound formulation results in a convex optimization problem consisting of a number of linear constraints from the equilibrium...
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
System-level power optimization for real-time distributed embedded systems
Luo, Jiong
Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path-driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient-driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. As the system bandwidth continues to increase, interconnection networks become power/energy limited as
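The core intuition behind dynamic voltage scaling is that per-cycle dynamic energy scales roughly with the square of the clock frequency (since supply voltage tracks frequency), so stretching tasks to just meet the deadline saves energy quadratically. A back-of-the-envelope sketch with hypothetical numbers:

```python
# DVS intuition: per-cycle dynamic energy ~ f^2 (V scales with f), so the
# energy-minimal schedule runs at the slowest frequency that still meets
# the deadline. Task sizes and deadline are hypothetical.

F_MAX = 1.0e9                       # Hz
task_cycles = [2_000_000, 3_000_000]
deadline = 0.01                     # seconds

total = sum(task_cycles)
f_opt = min(total / deadline, F_MAX)  # slowest deadline-meeting frequency

def energy(freq, cycles):
    """Relative dynamic energy: per-cycle energy proportional to freq^2."""
    return (freq / F_MAX) ** 2 * cycles

saving = 1.0 - energy(f_opt, total) / energy(F_MAX, total)
print(f_opt, saving)
```

Here halving the frequency (5e8 Hz instead of 1e9 Hz) cuts dynamic energy by 75% while still finishing exactly at the deadline; the thesis's scheduling algorithms decide how to distribute such slack across tasks with non-uniform switching activity.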
A travel time forecasting model based on change-point detection method
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model is proposed for urban road traffic sensor data based on a change-point detection method. A first-order differential operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the sequence of a large number of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for adaptive change-point search in the travel time series, which is divided into several sections of similar state. Then a linear weight function is used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
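A stripped-down version of that pipeline, with the ARIMA stage replaced by a simple per-segment mean and synthetic travel times (this is an illustration of the differencing-plus-threshold idea, not the paper's calibrated model):

```python
# First-order differencing + threshold scan for change points, then a
# forecast from the current regime. Travel times (minutes) are synthetic.

def change_points(series, threshold):
    """Indices where the first-order difference exceeds the threshold."""
    return [i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > threshold]

travel_times = [30, 31, 30, 29, 30, 45, 46, 44, 45, 46]  # regime shift at t=5
cps = change_points(travel_times, threshold=5)

# Forecast the next value from the mean of the most recent regime only.
start = cps[-1] if cps else 0
forecast = sum(travel_times[start:]) / len(travel_times[start:])
print(cps, forecast)
```

Restricting the forecast to the post-change segment is the point of the method: a global mean would blend the congested and uncongested regimes and bias the prediction.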
Research on the time optimization model algorithm of Customer Collaborative Product Innovation
Directory of Open Access Journals (Sweden)
Guodong Yu
2014-01-01
Full Text Available Purpose: To improve the efficiency of information sharing among the innovation agents of customer collaborative product innovation and to shorten the product design cycle, an improved genetic annealing algorithm for time optimization is presented. Design/methodology/approach: Based on an analysis of the objective relationships between the design tasks, the paper treats the problem as a job-shop scheduling model and proposes an improved genetic algorithm, based on niche technology, to solve it; a better collaborative product innovation design schedule is thus obtained, improving efficiency. Finally, through the collaborative innovation design of a certain type of mobile phone, the proposed model and method were verified to be correct and effective. Findings and Originality/value: An algorithm with obvious advantages in searching capability and optimization efficiency for customer collaborative product innovation was proposed. To address the defects of the traditional genetic annealing algorithm, a niche genetic annealing algorithm was presented. Firstly, it avoids effective gene deletions at the early search stage and guarantees the diversity of solutions. Secondly, adaptive two-point crossover and swap mutation strategies are introduced to overcome the long solving process and easy convergence to a local minimum caused by fixed crossover and mutation probabilities. Thirdly, an elitist reservation strategy is adopted so that loss of the optimal solution is effectively avoided and evolution is accelerated. Originality/value: The improved genetic simulated annealing algorithm overcomes defects such as the easy loss of effective genes in the early search. It helps to shorten the calculation process and improve the accuracy of the convergence value. Moreover, it speeds up evolution and ensures the reliability of the optimal solution. Meanwhile, it has obvious advantages in efficiency of
Optimal Piecewise Linear Basis Functions in Two Dimensions
Energy Technology Data Exchange (ETDEWEB)
Brooks III, E D; Szoke, A
2009-01-26
We use a variational approach to optimize the center point coefficients associated with the piecewise linear basis functions introduced by Stone and Adams [1], for polygonal zones in two Cartesian dimensions. Our strategy provides optimal center point coefficients, as a function of the location of the center point, by minimizing the error induced when the basis function interpolation is used for the solution of the time independent diffusion equation within the polygonal zone. By using optimal center point coefficients, one expects to minimize the errors that occur when these basis functions are used to discretize diffusion equations, or transport equations in optically thick zones (where they approach the solution of the diffusion equation). Our optimal center point coefficients satisfy the requirements placed upon the basis functions for any location of the center point. We also find that the location of the center point can be optimized, but this requires numerical calculations. Curiously, the optimum center point location is independent of the values of the dependent variable on the corners only for quadrilaterals.
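A one-dimensional analogue makes the variational idea concrete: interpolate f on [0, 1] by the endpoint line plus a hat-shaped bump at the center, and choose the bump coefficient that minimizes the L2 error instead of simply interpolating f(1/2). This sketches only the least-squares principle, not the authors' two-dimensional polygonal construction:

```python
# 1-D analogue of the optimal center-point coefficient: least-squares
# projection c* = <f - g, hat> / <hat, hat>, where g is the endpoint line.
# Quadrature is a plain midpoint Riemann sum.

def hat(x):                       # piecewise linear bump, peak 1 at x = 0.5
    return 2 * x if x <= 0.5 else 2 * (1 - x)

def l2_optimal_c(f, n=200_000):
    h = 1.0 / n
    xs = [(i + 0.5) * h for i in range(n)]
    g = lambda x: f(0) + (f(1) - f(0)) * x          # endpoint line
    num = sum((f(x) - g(x)) * hat(x) for x in xs) * h
    den = sum(hat(x) ** 2 for x in xs) * h
    return num / den

f = lambda x: x * x
c_opt = l2_optimal_c(f)
print(c_opt)   # near -5/16; plain interpolation of f(1/2) would give -1/4
```

For f(x) = x^2 the L2-optimal center coefficient is -5/16 rather than the interpolatory -1/4, which is exactly the flavor of result in the abstract: the error-minimizing center value is not the pointwise-interpolating one.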
Optimal control for parabolic-hyperbolic system with time delay
International Nuclear Information System (INIS)
Kowalewski, A.
1985-07-01
In this paper we consider an optimal control problem for a system described by a linear partial differential equation of parabolic-hyperbolic type with a time delay in the state. The right-hand side of this equation and the initial conditions are usually not continuous functions, but measurable functions belonging to L² or L∞ spaces. Therefore, the solution of this equation is given in a certain Sobolev space. The time delay in the state is constant, but it can also be a function of time. The control time T is fixed in our problem. Making use of the Milutin-Dubovicki theorem, necessary and sufficient conditions of optimality with a quadratic performance functional and constrained control are derived for the Dirichlet problem. A flow chart of the algorithm, which can be used in the numerical solution of certain optimization problems for distributed systems, is also presented. (author)
Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects
Li, Ming; Wu, Guangdong
2014-01-01
Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoff in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified time and cost with interval uncertainty. Based on multiobjective robust optimization method, a robust optimization model for time-cost tradeoff problem is developed. In order to illus...
Nonlinear triple-point problems on time scales
Directory of Open Access Journals (Sweden)
Douglas R. Anderson
2004-04-01
Full Text Available We establish the existence of multiple positive solutions to the nonlinear second-order triple-point boundary-value problem on time scales, $$u^{\Delta\nabla}(t)+h(t)f(t,u(t))=0, \quad u(a)=\alpha u(b)+\delta u^{\Delta}(a), \quad \beta u(c)+\gamma u^{\Delta}(c)=0$$ for $t\in[a,c]\subset\mathbb{T}$, where $\mathbb{T}$ is a time scale, $\beta, \gamma, \delta\ge 0$ with $\beta+\gamma>0$, $0
Optimal and sub-optimal post-detection timing estimators for PET
International Nuclear Information System (INIS)
Hero, A.O.; Antoniadis, N.; Clinthorne, N.; Rogers, W.L.; Hutchins, G.D.
1990-01-01
In this paper the authors derive linear and non-linear approximations to the post-detection likelihood function for scintillator interaction time in nuclear particle detection systems. The likelihood function is the optimal statistic for performing detection and estimation of scintillator events and event times. The authors derive the likelihood function approximations from a statistical model for the post-detection waveform which is common in the optical communications literature and takes account of finite detector bandwidth, random gains, and thermal noise. They then present preliminary simulation results for the associated approximate maximum likelihood timing estimators which indicate that significant MSE improvements may be achieved for low post-detection signal-to-noise ratio
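A standard sub-optimal timing estimator of the kind the approximations reduce to is the matched filter: correlate the observed waveform against the known pulse shape and take the lag with the largest correlation. A sketch with an illustrative pulse shape and sampling grid (not the paper's detector model):

```python
# Matched-filter timing: the lag maximizing the correlation with the known
# pulse template estimates the event time. Waveforms are illustrative.

def matched_filter_delay(signal, template):
    """Return the lag (in samples) maximizing correlation with template."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(signal) - len(template) + 1):
        score = sum(signal[lag + k] * template[k] for k in range(len(template)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

template = [0.2, 1.0, 0.6, 0.1]              # known scintillator pulse shape
signal = [0.0] * 7 + template + [0.0] * 5    # pulse arriving at sample 7
delay = matched_filter_delay(signal, template)
print(delay)
```

In a noiseless waveform the correlation peak sits exactly at the true arrival sample; the paper's contribution is the low-SNR regime, where likelihood-based estimators outperform this simple peak search.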
Optimal replacement time estimation for machines and equipment based on cost function
Directory of Open Access Journals (Sweden)
J. Šebo
2013-01-01
Full Text Available The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The considered categories of machines, for which the optimization method is usable, are those of metallurgical and engineering production. Different models of the cost function are considered (with one and with two variables). Parameters of the models were calculated by the least squares method. Model testing shows that all are good enough, so for estimation of the optimal replacement time it is sufficient to use the simpler models. In addition to testing the models, we developed a method (tested on a selected simple model) which enables us, in real time and with a limited data set, to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.
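A minimal one-variable cost model in the spirit of the article: amortized acquisition cost plus an operating-cost rate that grows linearly with age, so the average cost per unit time has a unique minimum. The parameter values are hypothetical:

```python
import math

# Average cost per unit time g(t) = (A + b*t^2/2) / t, i.e. acquisition
# cost A amortized over t plus accumulated operating cost with rate b*t.
# Analytically the minimum is at t* = sqrt(2A/b). Numbers are hypothetical.

A, b = 10_000.0, 500.0   # acquisition cost; growth rate of operating cost

def avg_cost(t):
    return (A + b * t * t / 2.0) / t

# Grid search over candidate replacement times (0.01 .. 20 years).
ts = [0.01 * k for k in range(1, 2001)]
t_star = min(ts, key=avg_cost)
print(t_star, math.sqrt(2 * A / b))   # both near 6.32 years
```

The grid-search optimum agrees with the closed-form t* to within the grid spacing, which is the sense in which a simple model with fitted parameters is "close enough" for a real-time replacement indication.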
Directory of Open Access Journals (Sweden)
Hong-Yun Zhang
2012-09-01
Full Text Available Quantum-behaved particle swarm optimization (QPSO) is an efficient and powerful population-based optimization technique, which is inspired by the conventional particle swarm optimization (PSO) and quantum mechanics theories. In this paper, an improved QPSO named SQPSO is proposed, which combines QPSO with a selective probability operator to solve the economic dispatch (ED) problems with valve-point effects and multiple fuel options. To show the performance of the proposed SQPSO, it is tested on five standard benchmark functions and two ED benchmark problems, including a 40-unit ED problem with valve-point effects and a 10-unit ED problem with multiple fuel options. The results are compared with differential evolution (DE), particle swarm optimization (PSO), and basic QPSO, as well as a number of other methods reported in the literature, in terms of solution quality, convergence speed, and robustness. The simulation results confirm that the proposed SQPSO is effective and reliable for both function optimization and ED problems.
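A bare-bones QPSO can be written in a few dozen lines. The sketch below minimizes the sphere function only; the selective probability operator of SQPSO and the economic dispatch constraints are omitted, and all run parameters are illustrative:

```python
import math
import random

# Minimal quantum-behaved PSO: particles are sampled around a local
# attractor with a spread tied to the distance from the mean-best position.
random.seed(1)
DIM, SWARM, ITERS = 2, 30, 300

def sphere(x):
    return sum(v * v for v in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)[:]

for it in range(ITERS):
    beta = 1.0 - 0.5 * it / ITERS                   # contraction-expansion coeff.
    mbest = [sum(p[d] for p in pbest) / SWARM for d in range(DIM)]
    for i in range(SWARM):
        for d in range(DIM):
            phi, u = random.random(), random.random()
            attractor = phi * pbest[i][d] + (1 - phi) * gbest[d]
            delta = beta * abs(mbest[d] - pos[i][d]) * math.log(1.0 / u)
            pos[i][d] = attractor + delta if random.random() < 0.5 else attractor - delta
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]

print(sphere(gbest))   # close to 0
```

Unlike standard PSO there are no velocities: each coordinate is drawn from a distribution centered on the attractor, with the linearly decreasing `beta` shrinking the search over time.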
Optimization and control of a continuous polymerization reactor
Directory of Open Access Journals (Sweden)
L. A. Alvarez
2012-12-01
Full Text Available This work studies the optimization and control of a styrene polymerization reactor. The proposed strategy deals with the case where, because of market conditions and equipment deterioration, the optimal operating point of the continuous reactor is modified significantly along the operation time and the control system has to search for this optimum point, besides keeping the reactor system stable at any possible point. The approach considered here consists of three layers: Real Time Optimization (RTO), Model Predictive Control (MPC), and a Target Calculation (TC) layer that coordinates the communication between the two other layers and guarantees the stability of the whole structure. The proposed algorithm is simulated with the phenomenological model of a styrene polymerization reactor, which has been widely used as a benchmark for process control. The complete optimization structure for the styrene process, including disturbance rejection, is developed. The simulation results show the robustness of the proposed strategy and its capability to deal with disturbances while the economic objective is optimized.
Optimizing the switching time for 400 kV SF6 circuit breakers
Ciulica, D.
2018-01-01
This paper presents real-time voltage and current analysis for optimizing the point-on-wave switching of SF6 circuit breakers. The circuit breaker plays an important role in power systems: it protects equipment in substations embedded in transport networks. The SF6 circuit breaker is very important equipment in power systems and is used at up to 400 kV due to its excellent performance. Controlled switching is used to eliminate transient modes and electrodynamic and dielectric stresses in the network during manual switching of capacitors, shunt reactors, and power transformers. These effects reduce the reliability and lifetime of the equipment installed on the network, or may lead to erroneous protection operation.
Optimal Robust Fault Detection for Linear Discrete Time Systems
Directory of Open Access Journals (Sweden)
Nike Liu
2008-01-01
Full Text Available This paper considers robust fault-detection problems for linear discrete-time systems. It is shown that the optimal robust detection filters for several well-recognized robust fault-detection problems, such as the ℋ−/ℋ∞, ℋ2/ℋ∞, and ℋ∞/ℋ∞ problems, are the same and can be obtained by solving a standard algebraic Riccati equation. Optimal filters are also derived for many other optimization criteria, and it is shown that some well-studied and seemingly sensible optimization criteria for fault-detection filter design can lead to (optimal) but useless fault-detection filters.
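The "standard algebraic Riccati equation" at the heart of the result can be illustrated in the scalar discrete-time case, where it reduces to a fixed-point iteration. The system numbers below are made up for illustration:

```python
# Scalar discrete-time algebraic Riccati equation solved by fixed-point
# iteration of the Riccati difference equation:
#   p = a^2 p - (a p b)^2 / (r + b^2 p) + q
# For these values the exact solution satisfies p^2 - 0.81 p - 1 = 0.

a, b, q, r = 0.9, 1.0, 1.0, 1.0

p = q
for _ in range(200):
    p = a * a * p - (a * p * b) ** 2 / (r + b * b * p) + q

residual = abs(p - (a * a * p - (a * p * b) ** 2 / (r + b * b * p) + q))
print(p, residual)
```

The iteration converges to the unique positive (stabilizing) solution, from which the steady-state filter gain follows; production code would of course use a dedicated DARE solver rather than naive iteration.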
Optimization of recurrent neural networks for time series modeling
DEFF Research Database (Denmark)
Pedersen, Morten With
1997-01-01
The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to prediction of discrete time...... series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical...... of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal...
On the application of Discrete Time Optimal Control Concepts to ...
African Journals Online (AJOL)
On the application of Discrete Time Optimal Control Concepts to Economic Problems. ... Journal of the Nigerian Association of Mathematical Physics ... Abstract. An extension of the use of the maximum principle to solve Discrete-time Optimal Control Problems (DTOCP), in which the state equations are in the form of general ...
Optimal Sunshade Configurations for Space-Based Geoengineering near the Sun-Earth L1 Point.
Directory of Open Access Journals (Sweden)
Sánchez, Joan-Pau; McInnes, Colin R
2015-01-01
Full Text Available Within the context of anthropogenic climate change, but also considering the Earth's natural climate variability, this paper explores the speculative possibility of large-scale active control of the Earth's radiative forcing. In particular, the paper revisits the concept of deploying a large sunshade or occulting disk at a static position near the Sun-Earth L1 Lagrange equilibrium point. Among the solar radiation management methods that have been proposed thus far, space-based concepts are generally seen as the least timely, albeit also as one of the most efficient. Large occulting structures could potentially offset all of the global mean temperature increase due to greenhouse gas emissions. This paper investigates optimal configurations of orbiting occulting disks that not only offset a global temperature increase, but also mitigate regional differences such as latitudinal and seasonal difference of monthly mean temperature. A globally resolved energy balance model is used to provide insights into the coupling between the motion of the occulting disks and the Earth's climate. This allows us to revise previous studies, but also, for the first time, to search for families of orbits that improve the efficiency of occulting disks at offsetting climate change on both global and regional scales. Although natural orbits exist near the L1 equilibrium point, their period does not match that required for geoengineering purposes, thus forced orbits were designed that require small changes to the disk attitude in order to control its motion. Finally, configurations of two occulting disks are presented which provide the same shading area as previously published studies, but achieve reductions of residual latitudinal and seasonal temperature changes.
Optimal Sunshade Configurations for Space-Based Geoengineering near the Sun-Earth L1 Point.
Sánchez, Joan-Pau; McInnes, Colin R
2015-01-01
Within the context of anthropogenic climate change, but also considering the Earth's natural climate variability, this paper explores the speculative possibility of large-scale active control of the Earth's radiative forcing. In particular, the paper revisits the concept of deploying a large sunshade or occulting disk at a static position near the Sun-Earth L1 Lagrange equilibrium point. Among the solar radiation management methods that have been proposed thus far, space-based concepts are generally seen as the least timely, albeit also as one of the most efficient. Large occulting structures could potentially offset all of the global mean temperature increase due to greenhouse gas emissions. This paper investigates optimal configurations of orbiting occulting disks that not only offset a global temperature increase, but also mitigate regional differences such as latitudinal and seasonal difference of monthly mean temperature. A globally resolved energy balance model is used to provide insights into the coupling between the motion of the occulting disks and the Earth's climate. This allows us to revise previous studies, but also, for the first time, to search for families of orbits that improve the efficiency of occulting disks at offsetting climate change on both global and regional scales. Although natural orbits exist near the L1 equilibrium point, their period does not match that required for geoengineering purposes, thus forced orbits were designed that require small changes to the disk attitude in order to control its motion. Finally, configurations of two occulting disks are presented which provide the same shading area as previously published studies, but achieve reductions of residual latitudinal and seasonal temperature changes.
Change detection in polarimetric SAR data over several time points
DEFF Research Database (Denmark)
Conradsen, Knut; Nielsen, Allan Aasbjerg; Skriver, Henning
2014-01-01
A test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution is introduced. The test statistic is applied successfully to detect change in C-band EMISAR polarimetric SAR data over four time points.
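In the scalar case, an equality-of-covariances test of this kind reduces to a Bartlett-type comparison of variances (a 1×1 "matrix" has its variance as determinant). The sketch below is an illustrative real-valued Python reduction, not the complex-Wishart statistic of the paper; all names and constants are mine:

```python
import math
import random

def bartlett_like(samples):
    """Likelihood-ratio style statistic for equality of several
    variances (scalar stand-in for the covariance-matrix test).
    Larger values indicate stronger evidence of change."""
    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / len(x)
    ns = [len(x) for x in samples]
    n = sum(ns)
    pooled = sum(ni * var(x) for ni, x in zip(ns, samples)) / n
    # n*ln(det pooled) - sum ni*ln(det S_i); >= 0 by concavity of ln
    return n * math.log(pooled) - sum(
        ni * math.log(var(x)) for ni, x in zip(ns, samples))

random.seed(1)
same = [[random.gauss(0, 1) for _ in range(300)] for _ in range(4)]
changed = same[:3] + [[random.gauss(0, 3) for _ in range(300)]]
print(bartlett_like(same) < bartlett_like(changed))  # → True
```

A change at one time point inflates one of the variances, which the pooled-versus-individual determinant comparison picks up.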
OPTIMIZATION OF RESULTS AND TREATMENT TIMING OF DEEP DERMAL BURNS IN CHILDREN
Directory of Open Access Journals (Sweden)
Konstantin Aleksandrovich Afonichev
2014-06-01
Untreated deep dermal burns in children are the cause of long-term treatment and severe cicatricial deformities, resulting in poor cosmetic results and greatly impairing functional outcome. The problem of optimizing the results and timing of treatment of deep burns in children has become particularly urgent in recent years. We observed 1853 children with III-A degree burns. Some of the children's burns healed spontaneously, which led to the development of scar deformities during the first six months after injury. Risk factors for their development, depending on the patient's age and the location of the lesion, are pointed out. The other children underwent early tangential excision of eschar. The analysis of the treatment results showed that the use of early surgery in children with deep dermal burns can reduce treatment time and significantly improve the cosmetic and functional outcomes of trauma.
Space-time topology optimization for one-dimensional wave propagation
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard
2009-01-01
[…] one-dimensional transient wave propagation in an elastic rod with time-dependent Young's modulus. By two simulation examples it is demonstrated how dynamic structures can display rich dynamic behavior such as wavenumber/frequency shifts and lack of energy conservation. The optimization method's potential for creating structures with novel dynamic behavior is illustrated by a simple example; it is shown that an elastic rod in which the optimized stiffness distribution is allowed to vary in time can be much more efficient in prohibiting wave propagation compared to a static bandgap structure. Optimized designs in form of spatio-temporal laminates and checkerboards are generated and discussed. The example lays the foundation for creating designs with more advanced functionalities in future work.
Real-time motion-adaptive-optimization (MAO) in TomoTherapy
Energy Technology Data Exchange (ETDEWEB)
Lu Weiguo; Chen Mingli; Ruchala, Kenneth J; Chen Quan; Olivera, Gustavo H [TomoTherapy Inc., 1240 Deming Way, Madison, WI (United States); Langen, Katja M; Kupelian, Patrick A [MD Anderson Cancer Center-Orlando, Orlando, FL (United States)], E-mail: wlu@tomotherapy.com
2009-07-21
IMRT delivery follows a planned leaf sequence, which is optimized before treatment delivery. However, it is hard to model real-time variations, such as respiration, in the planning procedure. In this paper, we propose a negative feedback system of IMRT delivery that incorporates real-time optimization to account for intra-fraction motion. Specifically, we developed a feasible workflow of real-time motion-adaptive-optimization (MAO) for TomoTherapy delivery. TomoTherapy delivery is characterized by thousands of projections with a fast projection rate and ultra-fast binary leaf motion. The technique of MAO-guided delivery calculates (i) the motion-encoded dose that has been delivered up to any given projection during the delivery and (ii) the future dose that will be delivered based on the estimated motion probability and future fluence map. These two pieces of information are then used to optimize the leaf open time of the upcoming projection right before its delivery. It consists of several real-time procedures, including 'motion detection and prediction', 'delivered dose accumulation', 'future dose estimation' and 'projection optimization'. Real-time MAO requires that all procedures are executed in time less than the duration of a projection. We implemented and tested this technique using a TomoTherapy (registered) research system. The MAO calculation took about 100 ms per projection. We calculated and compared MAO-guided delivery with two other types of delivery, motion-without-compensation delivery (MD) and static delivery (SD), using simulated 1D cases, real TomoTherapy plans and the motion traces from clinical lung and prostate patients. The results showed that the proposed technique effectively compensated for motion errors of all test cases. Dose distributions and DVHs of MAO-guided delivery approached those of SD, for regular and irregular respiration with a peak-to-peak amplitude of 3 cm, and for medium and large
Real-time motion-adaptive-optimization (MAO) in TomoTherapy
International Nuclear Information System (INIS)
Lu Weiguo; Chen Mingli; Ruchala, Kenneth J; Chen Quan; Olivera, Gustavo H; Langen, Katja M; Kupelian, Patrick A
2009-01-01
IMRT delivery follows a planned leaf sequence, which is optimized before treatment delivery. However, it is hard to model real-time variations, such as respiration, in the planning procedure. In this paper, we propose a negative feedback system of IMRT delivery that incorporates real-time optimization to account for intra-fraction motion. Specifically, we developed a feasible workflow of real-time motion-adaptive-optimization (MAO) for TomoTherapy delivery. TomoTherapy delivery is characterized by thousands of projections with a fast projection rate and ultra-fast binary leaf motion. The technique of MAO-guided delivery calculates (i) the motion-encoded dose that has been delivered up to any given projection during the delivery and (ii) the future dose that will be delivered based on the estimated motion probability and future fluence map. These two pieces of information are then used to optimize the leaf open time of the upcoming projection right before its delivery. It consists of several real-time procedures, including 'motion detection and prediction', 'delivered dose accumulation', 'future dose estimation' and 'projection optimization'. Real-time MAO requires that all procedures are executed in time less than the duration of a projection. We implemented and tested this technique using a TomoTherapy (registered) research system. The MAO calculation took about 100 ms per projection. We calculated and compared MAO-guided delivery with two other types of delivery, motion-without-compensation delivery (MD) and static delivery (SD), using simulated 1D cases, real TomoTherapy plans and the motion traces from clinical lung and prostate patients. The results showed that the proposed technique effectively compensated for motion errors of all test cases. Dose distributions and DVHs of MAO-guided delivery approached those of SD, for regular and irregular respiration with a peak-to-peak amplitude of 3 cm, and for medium and large prostate motions. The results conceptually
Pointing Device Performance in Steering Tasks.
Senanayake, Ransalu; Goonetilleke, Ravindra S
2016-06-01
Use of touch-screen-based interactions is growing rapidly. Hence, knowing the maneuvering efficacy of touch screens relative to other pointing devices is of great importance in the context of graphical user interfaces. Movement time, accuracy, and user preferences of four pointing device settings were evaluated on a computer with 14 participants aged 20.1 ± 3.13 years. It was found that, depending on the difficulty of the task, the optimal settings differ for ballistic and visual control tasks. With a touch screen, resting the arm increased movement time for steering tasks. When both performance and comfort are considered, whether to use a mouse or a touch screen for human-computer interaction depends on the steering difficulty. Hence, an input device should be chosen based on the application and should be optimized to match the graphical user interface. © The Author(s) 2016.
Directory of Open Access Journals (Sweden)
Haoxiang He
2016-01-01
In view of disadvantages such as higher yield stress and inadequate adjustability, a combined low yield point steel plate damper involving low yield point steel plates and common steel plates is proposed. Three types of combined plate dampers with new hollow shapes are proposed, with the specific forms including interior hollow, boundary hollow, and ellipse hollow. The "maximum stiffness" and "full stress state" criteria are used as the optimization objectives, and topology optimization of the different hollow forms by an alternating optimization method is performed to obtain the optimal shape. The various combined steel plate dampers are calculated by finite element simulation; the results indicate that the initial stiffness of the boundary-optimized damper and the interior-optimized damper is larger, the hysteresis curves are full, and there is no stress concentration. These two types of optimization models with different material ratios are studied by numerical simulation, and the adjustability of the yield stress of these combined dampers is verified. The nonlinear dynamic responses, seismic capacity, and damping effect of steel frame structures with different combined dampers are analyzed. The results show that the boundary-optimized damper has better energy-dissipation capacity and is suitable for engineering application.
Real-time traffic signal optimization model based on average delay time per person
Directory of Open Access Journals (Sweden)
Pengpeng Jiao
2015-10-01
Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated using an optimization approach, with objective functions minimizing vehicle delay time. To improve people's trip efficiency, this article aims to minimize delay time per person. Based on the time-varying traffic flow data at intersections, the article first fits curves of cumulative arriving and departing vehicles, as well as the corresponding functions. Moreover, this article transfers vehicle delay time to personal delay time using the average passenger load of cars and buses, employs such time as the objective function, and proposes a signal timing optimization model for intersections to achieve real-time signal parameters, including cycle length and green time. This research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
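The core idea of weighting delay by passenger load can be sketched with a simplified uniform-delay term standing in for the fitted cumulative arrival/departure curves. Everything below (the Webster-style delay term, the two-phase layout, the lost time and occupancy figures) is an illustrative assumption, not the authors' formulation:

```python
LOST = 8.0  # lost time per cycle, seconds (assumed)

def person_delay(cycle, green1, q, s, occ):
    """Average delay per PERSON at a two-phase intersection.
    q: arrival rates (veh/s), s: saturation flows (veh/s),
    occ: average persons per vehicle for each phase."""
    greens = (green1, cycle - green1 - LOST)
    total_d = total_p = 0.0
    for g, (flow, sat, persons) in zip(greens, zip(q, s, occ)):
        lam = g / cycle                 # effective green ratio
        x = flow / (lam * sat)          # degree of saturation
        if x >= 1.0:
            return float("inf")         # oversaturated split: reject
        d = cycle * (1 - lam) ** 2 / (2 * (1 - lam * x))  # uniform delay
        total_d += d * flow * persons
        total_p += flow * persons
    return total_d / total_p

q = (0.20, 0.10)    # veh/s per phase (assumed)
s = (0.50, 0.50)    # saturation flows, veh/s (assumed)
occ = (1.5, 20.0)   # persons/veh: car phase vs. a bus-heavy phase

# brute-force search over cycle length and first green split
best = min(((c, g1) for c in range(40, 121, 5)
            for g1 in range(10, c - 17)),
           key=lambda cg: person_delay(cg[0], cg[1], q, s, occ))
print(best, round(person_delay(best[0], best[1], q, s, occ), 2))
```

Because the bus-heavy phase carries far more persons per vehicle, the optimum shifts green time toward it, which is exactly the behavioral difference between per-vehicle and per-person objectives.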
OPTIMIZING THE DISTRIBUTION OF TIE POINTS FOR THE BUNDLE ADJUSTMENT OF HRSC IMAGE MOSAICS
Directory of Open Access Journals (Sweden)
J. Bostelmann
2017-07-01
For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004 the High Resolution Stereo Camera (HRSC) regularly acquires long image strips. By now more than 4,000 strips covering nearly the whole planet are available. Due to the nine channels, each with a different viewing direction, and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined to build DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 (MC-30) is used to define the extent of these mosaics. In order to avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, combining about 90 strips each. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because the size, position, resolution and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points also vary. In the absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of these tie points is preferable. Besides, their total number should be limited for computational reasons. In this paper, we present an algorithm which optimizes the distribution of tie points under these constraints. A large number of tie points used as input is reduced without affecting the geometric stability of the block by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points which are redundant for the block adjustment. The set of tie points, filtered by the …
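The grid-based thinning step described above can be sketched in a few lines. This is an illustrative reduction only (point format, cell size and the keep-one-per-cell rule are my assumptions; the real algorithm additionally verifies that every strip pair stays connected):

```python
from collections import defaultdict

def thin_tie_points(points, cell=1.0, per_cell=1):
    """Grid-based tie-point reduction: keep at most `per_cell` points
    per object-space grid cell, preferring points observed in many
    strips so that inter-strip connections survive.
    Each point is (x, y, {ids of strips observing it})."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // cell), int(p[1] // cell))].append(p)
    kept = []
    for cell_points in cells.values():
        cell_points.sort(key=lambda p: -len(p[2]))  # multi-strip first
        kept.extend(cell_points[:per_cell])
    return kept

pts = [(0.1, 0.1, {1}), (0.2, 0.3, {1, 2}), (1.5, 0.2, {2}),
       (1.7, 0.9, {2, 3}), (1.6, 0.4, {2})]
print(thin_tie_points(pts))
```

Redundant single-strip points inside a cell are dropped while the points that tie neighboring strips together survive, which is the stability property the abstract emphasizes.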
Introduction to optimal control theory
International Nuclear Information System (INIS)
Agrachev, A.A.
2002-01-01
These are lecture notes of an introductory course in Optimal Control theory treated from the geometric point of view. The Optimal Control Problem is reduced to the study of controls (and corresponding trajectories) leading to the boundary of attainable sets. We discuss the Pontryagin Maximum Principle and basic existence results, and apply these tools to concrete simple optimal control problems. Special sections are devoted to the general theory of linear time-optimal problems and linear-quadratic problems. (author)
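The canonical worked example for linear time-optimal problems is the double integrator, where the Pontryagin Maximum Principle yields a bang-bang control with one switch. A minimal sketch (not from the lecture notes; the rest-to-rest formula is the textbook result):

```python
import math

def min_time_double_integrator(x0):
    """Minimum time to steer x'' = u, |u| <= 1, from rest at x0 to
    rest at the origin. The Maximum Principle gives a bang-bang
    control switching once at t*/2, with t* = 2*sqrt(|x0|)."""
    return 2.0 * math.sqrt(abs(x0))

def simulate(x0, dt=1e-4):
    """Forward-Euler check of the bang-bang law (written for x0 > 0):
    accelerate toward the origin for the first half, brake thereafter."""
    t_total = min_time_double_integrator(x0)
    x, v, t = x0, 0.0, 0.0
    while t < t_total - 1e-12:
        u = -1.0 if t < t_total / 2 else 1.0
        v += u * dt
        x += v * dt
        t += dt
    return x, v

print(min_time_double_integrator(1.0))  # → 2.0
xf, vf = simulate(1.0)
print(abs(xf) < 1e-2, abs(vf) < 1e-2)  # → True True
```

The simulation confirms that the single-switch extremal actually reaches the origin at rest in the predicted time, up to discretization error.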
Simulation-based robust optimization for signal timing and setting.
2009-12-30
The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...
Application of Minimum-time Optimal Control System in Buck-Boost Bi-linear Converters
Directory of Open Access Journals (Sweden)
S. M. M. Shariatmadar
2017-08-01
In this study, the theory of minimum-time optimal control for buck-boost bi-linear converters is described, so that output voltage regulation is carried out within minimum time. For this purpose, Pontryagin's Minimum Principle is applied to find the optimal switching level by applying minimum-time optimal control rules. The results revealed that by utilizing an optimal switching level instead of classical switching patterns, output voltage regulation is carried out within minimum time. Moreover, the transient energy index of overvoltage is significantly reduced when minimum-time optimal control is attained at reduced output load. Laboratory results were used to verify the numerical simulations.
DEFF Research Database (Denmark)
Demenikov, Mads
2011-01-01
I propose a novel, yet simple, no-reference, objective image quality measure based on the kurtosis of the restored point spread function. Using this measure, I optimize several phase masks for extended-depth-of-field in hybrid imaging systems and obtain results that are identical to optimization results based on full-reference image measures of restored images. In comparison with full-reference measures, the kurtosis measure is fast to compute and requires no images, noise distributions, or alignment of restored images, but only the signal-to-noise ratio. © 2011 Optical Society of America.
Directory of Open Access Journals (Sweden)
Luigi Piegari
2015-04-01
The power extracted from PV arrays is usually maximized using maximum power point tracking algorithms. One of the most widely used techniques is the perturb & observe algorithm, which periodically perturbs the operating point of the PV array, sometimes with an adaptive perturbation step, and compares the PV power before and after the perturbation. This paper analyses the most suitable perturbation step to optimize maximum power point tracking performance and suggests a design criterion to select the parameters of the controller. Using the proposed adaptive step, the MPPT perturb & observe algorithm achieves an excellent dynamic response by adapting the perturbation step to the actual operating conditions of the PV array. The proposed algorithm has been validated and tested in a laboratory using a dual input inductor push-pull converter. This particular converter topology is an efficient interface to boost the low voltage of PV arrays and effectively control the power flow when input or output voltages are variable. The experimental results have proved the superiority of the proposed algorithm in comparison with traditional perturb & observe and incremental conductance techniques.
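An adaptive-step perturb & observe loop of the kind discussed above can be sketched as follows. The PV curve, the step gains and the clamp limits are all illustrative assumptions, not the paper's design criterion:

```python
def pv_power(v):
    """Toy PV curve (illustrative, not a real module model); its
    maximum power point sits near v = 16 V."""
    i = 5.0 * (1.0 - (v / 21.0) ** 8)  # crude I-V characteristic
    return v * max(i, 0.0)

def perturb_and_observe(v0=10.0, step0=1.0, iters=80):
    """P&O with an adaptive perturbation step: the step scales with
    the last observed power change, clamped to [0.1, 2.0] V."""
    v, step, direction = v0, step0, 1.0
    p_prev = pv_power(v0)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction          # wrong way: reverse
        step = max(0.1, min(2.0, 0.5 * abs(p - p_prev)))
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))  # hovers near the MPP around 16 V
```

Far from the maximum the power changes are large, so the step grows and tracking is fast; near the maximum the changes shrink, so the step collapses to its floor and the residual oscillation stays small — the trade-off the abstract analyses.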
Optimization of extended propulsion time nuclear-electric propulsion trajectories
Sauer, C. G., Jr.
1981-01-01
This paper presents the methodology used in optimizing extended propulsion time NEP missions considering realistic thruster lifetime constraints. These missions consist of a powered spiral escape from a 700-km circular orbit at the earth, followed by a powered heliocentric transfer with an optimized coast phase, and terminating in a spiral capture phase at the target planet. This analysis is most applicable to those missions with very high energy requirements such as outer planet orbiter missions or sample return missions where the total propulsion time could greatly exceed the expected lifetime of an individual thruster. This methodology has been applied to the investigation of NEP missions to the outer planets where examples are presented of both constrained and optimized trajectories.
Optimal Conditional Reachability for Multi-Priced Timed Automata
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Rasmussen, Jacob Illum
2005-01-01
In this paper, we prove decidability of the optimal conditional reachability problem for multi-priced timed automata, an extension of timed automata with multiple cost variables evolving according to given rates for each location. More precisely, we consider the problem of determining the minimal...
Accuracy Constraint Determination in Fixed-Point System Design
Directory of Open Access Journals (Sweden)
Serizel R
2008-01-01
Most digital signal processing applications are specified and designed with floating-point arithmetic but are finally implemented using fixed-point architectures. Thus, the design flow requires a floating-point to fixed-point conversion stage which optimizes the implementation cost under execution time and accuracy constraints. This accuracy constraint is linked to the application performance, and the determination of this constraint is one of the key issues of the conversion process. In this paper, a method is proposed to determine the accuracy constraint from the application performance. The fixed-point system is modeled with an infinite precision version of the system and a single noise source located at the system output. Then, an iterative approach for optimizing the fixed-point specification under the application performance constraint is defined and detailed. Finally the efficiency of our approach is demonstrated by experiments on an MP3 encoder.
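The link between word length and accuracy can be illustrated with a toy search for the smallest fractional word length whose quantization noise still meets an output SNR constraint. This is a simplified stand-in for the paper's iterative optimization, with all thresholds and signals assumed:

```python
import math
import random

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def min_frac_bits(signal, snr_db_required):
    """Smallest number of fractional bits whose rounding noise still
    meets the required output SNR (toy accuracy-constraint search)."""
    p_sig = sum(v * v for v in signal) / len(signal)
    for bits in range(1, 32):
        err = [v - quantize(v, bits) for v in signal]
        p_err = sum(e * e for e in err) / len(err)
        if p_err == 0 or 10 * math.log10(p_sig / p_err) >= snr_db_required:
            return bits
    return 32

random.seed(0)
sig = [random.uniform(-1, 1) for _ in range(1000)]
print(min_frac_bits(sig, 40.0))
```

With the usual uniform-noise model (noise power ≈ Δ²/12), each extra fractional bit buys about 6 dB of SNR, so a 40 dB constraint lands around 6 bits for a full-scale signal.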
Physical optimization of afterloading techniques
International Nuclear Information System (INIS)
Anderson, L.L.
1985-01-01
Physical optimization in brachytherapy refers to the process of determining the radioactive-source configuration which yields a desired dose distribution. In manually afterloaded intracavitary therapy for cervix cancer, discrete source strengths are selected iteratively to minimize the sum of squares of differences between trial and target doses. For remote afterloading with a stepping-source device, optimized (continuously variable) dwell times are obtained, either iteratively or analytically, to give least squares approximations to dose at an arbitrary number of points; in vaginal irradiation for endometrial cancer, the objective has included dose uniformity at applicator surface points in addition to a tapered contour of target dose at depth. For template-guided interstitial implants, seed placement at rectangular-grid mesh points may be least squares optimized within target volumes defined by computerized tomography; effective optimization is possible only for (uniform) seed strength high enough that the desired average peripheral dose is achieved with a significant fraction of empty seed locations. (orig.)
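The dwell-time least-squares problem has a natural nonnegativity constraint (dwell times cannot be negative). A minimal projected-gradient sketch, used here as an illustrative stand-in for the iterative/analytic solvers mentioned above (matrix and target values are invented):

```python
def dwell_times_nnls(contrib, target, iters=5000, lr=0.01):
    """Least-squares dwell-time optimization with t >= 0 via projected
    gradient descent. contrib[i][j] is the dose at point i per unit
    dwell time at source position j; target[i] is the desired dose."""
    m, n = len(contrib), len(contrib[0])
    t = [0.0] * n
    for _ in range(iters):
        # residual r = A t - d
        r = [sum(contrib[i][j] * t[j] for j in range(n)) - target[i]
             for i in range(m)]
        for j in range(n):
            g = sum(contrib[i][j] * r[i] for i in range(m))  # grad = A^T r
            t[j] = max(0.0, t[j] - lr * g)                   # project to t >= 0
    return t

A = [[1.0, 0.2], [0.2, 1.0], [0.5, 0.5]]   # 3 dose points, 2 dwell positions
d = [10.0, 10.0, 8.0]                      # target doses
t = dwell_times_nnls(A, d)
print([round(v, 2) for v in t])
```

For this symmetric toy geometry the two dwell times converge to the same value (about 8.25), the unconstrained least-squares solution, since it happens to be nonnegative already.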
Bai, Y.Q.; Lesaja, G.; Roos, C.; Wang, G.Q.; El Ghami, M.
2008-01-01
In this paper we present a class of polynomial primal-dual interior-point algorithms for linear optimization based on a new class of kernel functions. This class is fairly general and includes the classical logarithmic function, the prototype self-regular function, and non-self-regular kernel
DEFF Research Database (Denmark)
Mahmood, Faisal; Johannesen, Helle H; Geertsen, Poul
2017-01-01
An imaging biomarker for early prediction of treatment response potentially provides a non-invasive tool for better prognostics and individualized management of the disease. Radiotherapy (RT) response is generally related to changes in gross tumor volume manifesting months later. In this prospective study we investigated the apparent diffusion coefficient (ADC), perfusion fraction and pseudo diffusion coefficient derived from diffusion weighted MRI as potential early biomarkers for radiotherapy response of brain metastases. It was a particular aim to assess the optimal time point …
Real-time estimation of FLE for point-based registration
Wiles, Andrew D.; Peters, Terry M.
2009-02-01
In image-guided surgery, optimizing the accuracy of localizing the surgical tools within the virtual reality environment or 3D image is vitally important, and significant effort has been spent reducing the measurement errors at the point of interest or target. This target registration error (TRE) is often defined by a root-mean-square statistic which reduces the vector data to a single term that can be minimized. However, lost in the data reduction is the directionality of the error, which can be modelled using a 3D covariance matrix. Recently, we developed a set of expressions that model the TRE statistics for point-based registrations as a function of the fiducial marker geometry, target location and the fiducial localizer error (FLE). Unfortunately, these expressions are only as good as the definition of the FLE. In order to close the gap, we have subsequently developed a closed form expression that estimates the FLE as a function of the estimated fiducial registration error (FRE, the error between the measured fiducials and the best fit locations of those fiducials). The FRE covariance matrix is estimated using a sliding window technique and used as input into the closed form expression to estimate the FLE. The estimated FLE can then be used to estimate the TRE, which can be given to the surgeon to permit the procedure to be designed such that the errors associated with the point-based registrations are minimized.
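A scalar version of the FRE-to-FLE pipeline can be sketched with the classical relation E[FRE²] = (1 − 2/N)·E[FLE²] for N fiducials, combined with a sliding window over recent FRE observations. This 1-D sketch is not the paper's full covariance expression, and the class and parameter names are mine:

```python
import random
from collections import deque

class SlidingFLEEstimator:
    """Estimate the squared fiducial localizer error (FLE) from squared
    fiducial registration errors (FRE) observed over a sliding window,
    using the scalar relation E[FRE^2] = (1 - 2/N) * E[FLE^2]."""
    def __init__(self, n_fiducials, window=100):
        self.n = n_fiducials
        self.win = deque(maxlen=window)   # keeps only recent samples

    def update(self, fre_sq):
        self.win.append(fre_sq)
        mean_fre_sq = sum(self.win) / len(self.win)
        return mean_fre_sq / (1.0 - 2.0 / self.n)

random.seed(3)
est = SlidingFLEEstimator(n_fiducials=6)
true_fle_sq = 0.25
fle_sq = 0.0
for _ in range(500):
    # synthetic FRE^2 stream consistent with the scalar relation
    observed_fre_sq = (1 - 2 / 6) * random.gauss(true_fle_sq, 0.02)
    fle_sq = est.update(observed_fre_sq)
print(round(fle_sq, 2))  # recovers ~0.25
```

The sliding window (here a `deque` with `maxlen`) is what makes the estimate track slow drifts in localizer performance during a procedure, which is the point of doing this in real time.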
Analysis and Optimization of Heterogeneous Real-Time Embedded Systems
DEFF Research Database (Denmark)
Pop, Paul; Eles, Petru; Peng, Zebo
2005-01-01
[…] The success of such new design methods depends on the availability of analysis and optimization techniques. In this paper, we present analysis and optimization techniques for heterogeneous real-time embedded systems. We address in more detail a particular class of such systems called multi-clusters […] to frames. Optimization heuristics for frame packing aiming at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach.
Optimal critic learning for robot control in time-varying environments.
Wang, Chen; Li, Yanan; Ge, Shuzhi Sam; Lee, Tong Heng
2015-10-01
In this paper, optimal critic learning is developed for robot control in a time-varying environment. The unknown environment is described as a linear system with time-varying parameters, and impedance control is employed for the interaction control. Desired impedance parameters are obtained in the sense of an optimal realization of the composite of trajectory tracking and force regulation. Q-function-based critic learning is developed to determine the optimal impedance parameters without the knowledge of the system dynamics. The simulation results are presented and compared with existing methods, and the efficacy of the proposed method is verified.
FEM for time-fractional diffusion equations, novel optimal error analyses
Mustapha, Kassem
2016-01-01
A semidiscrete Galerkin finite element method applied to time-fractional diffusion equations with time-space dependent diffusivity on bounded convex spatial domains will be studied. The main focus is on achieving optimal error results with respect to both the convergence order of the approximate solution and the regularity of the initial data. By using novel energy arguments, for each fixed time $t$, optimal error bounds in the spatial $L^2$- and $H^1$-norms are derived for both cases: smooth...
The timing of control signals underlying fast point-to-point arm movements.
Ghafouri, M; Feldman, A G
2001-04-01
It is known that proprioceptive feedback induces muscle activation when the facilitation of appropriate motoneurons exceeds their threshold. In the suprathreshold range, the muscle-reflex system produces torques depending on the position and velocity of the joint segment(s) that the muscle spans. The static component of the torque-position relationship is referred to as the invariant characteristic (IC). According to the equilibrium-point (EP) hypothesis, control systems produce movements by changing the activation thresholds and thus shifting the IC of the appropriate muscles in joint space. This control process upsets the balance between muscle and external torques at the initial limb configuration and, to regain the balance, the limb is forced to establish a new configuration or, if the movement is prevented, a new level of static torques. Taken together, the joint angles and the muscle torques generated at an equilibrium configuration define a single variable called the EP. Thus by shifting the IC, control systems reset the EP. Muscle activation and movement emerge following the EP resetting because of the natural physical tendency of the system to reach equilibrium. Empirical and simulation studies support the notion that the control IC shifts and the resulting EP shifts underlying fast point-to-point arm movements are gradual rather than step-like. However, controversies exist about the duration of these shifts. Some studies suggest that the IC shifts cease with the movement offset. Other studies propose that the IC shifts end early in comparison to the movement duration (approximately, at peak velocity). The purpose of this study was to evaluate the duration of the IC shifts underlying fast point-to-point arm movements. Subjects made fast (hand peak velocity about 1.3 m/s) planar arm movements toward different targets while grasping a handle. Hand forces applied to the handle and shoulder/elbow torques were, respectively, measured from a force sensor placed
Bofill, Josep Maria; Ribas-Ariño, Jordi; García, Sergio Pablo; Quapp, Wolfgang
2017-10-21
The reaction path of a mechanically induced chemical transformation changes under stress. It is well established that the force-induced structural changes of minima and saddle points, i.e., the movement of the stationary points on the original or stress-free potential energy surface, can be described by a Newton Trajectory (NT). Given a reactive molecular system, a well-fitted pulling direction, and a sufficiently large value of the force, the minimum configuration of the reactant and the saddle point configuration of a transition state collapse at a point on the corresponding NT trajectory. This point is called barrier breakdown point or bond breaking point (BBP). The Hessian matrix at the BBP has a zero eigenvector which coincides with the gradient. It indicates which force (both in magnitude and direction) should be applied to the system to induce the reaction in a barrierless process. Within the manifold of BBPs, there exist optimal BBPs which indicate what is the optimal pulling direction and what is the minimal magnitude of the force to be applied for a given mechanochemical transformation. Since these special points are very important in the context of mechanochemistry and catalysis, it is crucial to develop efficient algorithms for their location. Here, we propose a Gauss-Newton algorithm that is based on the minimization of a positive definite function (the so-called σ-function). The behavior and efficiency of the new algorithm are shown for 2D test functions and for a real chemical example.
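A generic 2-D Gauss-Newton iteration of the kind used above can be sketched as follows. This is not the paper's σ-function or BBP search; the residual here is simply the gradient of an assumed double-well test potential, so the iteration locates one of its stationary points:

```python
def gauss_newton(residual, jacobian, x0, iters=20):
    """Plain Gauss-Newton for min ||r(x)||^2 in two dimensions.
    Each step solves J dx = -r exactly (2x2 Cramer's rule)."""
    x = list(x0)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-r[0] * J[1][1] + r[1] * J[0][1]) / det
        dx1 = (-J[0][0] * r[1] + J[1][0] * r[0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

# residuals = gradient of V(x, y) = x^4/4 - x^2/2 + y^2/2, so the
# zeros of r are the stationary points of V (x = -1, 0, +1; y = 0)
r = lambda x: [x[0] ** 3 - x[0], x[1]]
J = lambda x: [[3 * x[0] ** 2 - 1.0, 0.0], [0.0, 1.0]]
print([round(v, 6) for v in gauss_newton(r, J, [1.5, 0.5])])  # → [1.0, 0.0]
```

When the residual is a gradient and the Jacobian its Hessian, Gauss-Newton reduces to Newton's method for stationary points — fast near the solution, which is why a well-designed objective such as the σ-function matters.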
Optimal Infinite Runs in One-Clock Priced Timed Automata
DEFF Research Database (Denmark)
David, Alexandre; Ejsing-Duun, Daniel; Fontani, Lisa
We address the problem of finding an infinite run with the optimal cost-time ratio in a one-clock priced timed automaton and provide an algorithmic solution. Through refinements of the quotient graph obtained by strong time-abstracting bisimulation partitioning, we construct a graph with time …
Energy Technology Data Exchange (ETDEWEB)
Pei, Ji; Wang, Wen Jie; Yuan, Shouqi [National Research Center of Pumps, Jiangsu University, Zhenjiang (China)
2016-11-15
A wide operating band is important for a pump to perform safely at maximum efficiency while saving energy. To widen the operating range, this research proposes a multi-point optimization process based on numerical simulations to improve the impeller performance of a centrifugal pump used in nuclear plant applications. The Reynolds-averaged Navier-Stokes equations are utilized to perform the calculations. The meridional shape of the impeller was optimized with four design variables: shroud arc radius, hub arc radius, shroud angle, and hub angle. Efficiencies calculated at 0.6Qd, 1.0Qd and 1.62Qd were selected as the three optimization objectives. A design-of-experiments method was applied to generate various impellers, with 35 impellers generated by Latin hypercube sampling. A response surface function based on a second-order function was applied to construct a mathematical relationship between the objectives and design variables. A multi-objective genetic algorithm was utilized to solve the response surface function to obtain the best optimized objectives as well as the best combination of design parameters. The results indicated that the pump performance predicted by numerical simulation was in agreement with the experimental performance. The optimized efficiencies at the three operating conditions were increased by 3.9%, 6.1% and 2.6%, respectively. In addition, the velocity distribution, pressure distribution, streamlines and turbulence kinetic energy distribution of the optimized and reference impellers were compared and analyzed to illustrate the performance improvement.
International Nuclear Information System (INIS)
Pei, Ji; Wang, Wen Jie; Yuan, Shouqi
2016-01-01
A wide operating band is important for a pump to perform safely at maximum efficiency while saving energy. To widen the operating range, this research proposes a multi-point optimization process based on numerical simulations to improve the impeller performance of a centrifugal pump used in nuclear plant applications. The Reynolds-averaged Navier-Stokes equations are utilized to perform the calculations. The meridional shape of the impeller was optimized with four design variables: shroud arc radius, hub arc radius, shroud angle, and hub angle. Efficiencies calculated at 0.6Qd, 1.0Qd and 1.62Qd were selected as the three optimization objectives. A design-of-experiments method was applied to generate various impellers, with 35 impellers generated by Latin hypercube sampling. A response surface function based on a second-order function was applied to construct a mathematical relationship between the objectives and design variables. A multi-objective genetic algorithm was utilized to solve the response surface function to obtain the best optimized objectives as well as the best combination of design parameters. The results indicated that the pump performance predicted by numerical simulation was in agreement with the experimental performance. The optimized efficiencies at the three operating conditions were increased by 3.9%, 6.1% and 2.6%, respectively. In addition, the velocity distribution, pressure distribution, streamlines and turbulence kinetic energy distribution of the optimized and reference impellers were compared and analyzed to illustrate the performance improvement.
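The surrogate step in the pump-impeller workflow above (sample designs, fit a second-order response surface, then optimize the cheap surrogate instead of the expensive CFD model) can be sketched in miniature. A hypothetical two-variable, single-objective version; the paper itself uses four design variables, CFD evaluations, 35 Latin-hypercube samples and a multi-objective GA:

```python
import numpy as np

rng = np.random.default_rng(0)

def efficiency(x1, x2):
    # Stand-in for the expensive CFD evaluation (hypothetical function
    # with a known optimum at (0.3, 0.6) for checking the fit).
    return 80 - 40 * (x1 - 0.3)**2 - 25 * (x2 - 0.6)**2

# "Design of experiments": random samples standing in for Latin hypercube.
X = rng.uniform(0, 1, size=(35, 2))
y = efficiency(X[:, 0], X[:, 1])

def basis(X):
    # Second-order polynomial basis: 1, x1, x2, x1^2, x2^2, x1*x2.
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Least-squares fit of the response surface coefficients.
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Optimize the cheap surrogate on a dense grid instead of the CFD model.
g = np.linspace(0, 1, 201)
G = np.array([(a, b) for a in g for b in g])
best = G[np.argmax(basis(G) @ coef)]
print(best)
```

Because the toy objective is itself quadratic, the fitted surface recovers it exactly and the surrogate optimum lands on the true optimum; with real CFD data the surface is only an approximation, which is why the paper re-verifies optimized designs by simulation.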
Title XVI / Supplemental Security Record Point In Time (SSRPT)
Social Security Administration — This is the point-in-time database to house temporary Supplemental Security Record (SSR) images produced during the course of the operating day before they can be...
Optimization of Partitioned Architectures to Support Soft Real-Time Applications
DEFF Research Database (Denmark)
Tamas-Selicean, Domitian; Pop, Paul
2014-01-01
In this paper we propose a new Tabu Search-based design optimization strategy for mixed-criticality systems implementing hard and soft real-time applications on the same platform. Our proposed strategy determines an implementation such that all hard real-time applications are schedulable and the quality of service of the soft real-time tasks is maximized. We have evaluated our strategy using an aerospace case study.
On-Board Real-Time Optimization Control for Turbo-Fan Engine Life Extending
Zheng, Qiangang; Zhang, Haibo; Miao, Lizhen; Sun, Fengyong
2017-11-01
A real-time optimization control method is proposed to extend turbo-fan engine service life. The real-time optimization control is based on an on-board engine model devised by MRR-LSSVR (a multi-input multi-output recursive reduced least squares support vector regression method). To solve the optimization problem, an FSQP (feasible sequential quadratic programming) algorithm is utilized. Thermal mechanical fatigue is taken into account during the optimization process. Furthermore, to describe engine life decay, a thermal mechanical fatigue model of the engine acceleration process is established. The optimization objective function contains not only a sub-item that yields a fast engine response, but also a sub-item for the total mechanical strain range, which has a positive relationship to engine fatigue life. Finally, simulations of both the conventional optimization control, which considers only engine acceleration performance, and the proposed optimization method have been conducted. The simulations demonstrate that the times taken by the two control methods from idle to 99.5% of maximum power are equal. However, the engine life using the proposed optimization method is increased by 36.17% compared with that using conventional optimization control.
Minimum Time Trajectory Optimization of CNC Machining with Tracking Error Constraints
Directory of Open Access Journals (Sweden)
Qiang Zhang
2014-01-01
An off-line optimization approach for high-precision minimum-time feedrate in CNC machining is proposed. Besides the ordinarily considered velocity, acceleration, and jerk constraints, a dynamic performance constraint for each servo drive is also included in this optimization problem to improve tracking precision along the optimized feedrate trajectory. Tracking error is used to indicate the servo dynamic performance of each axis. By using variable substitution, the tracking-error-constrained minimum-time trajectory planning problem is formulated as a nonlinear path-constrained optimal control problem. The bang-bang structure of the optimal trajectory is proved in this paper; a novel constraint handling method is then proposed to realize a convex-optimization-based solution of the nonlinear constrained optimal control problem. A simple ellipse feedrate planning test is presented to demonstrate the effectiveness of the approach. The practicability and robustness of the trajectory generated by the proposed approach are then demonstrated by a butterfly contour machining example.
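The bang-bang structure proved in the abstract above has a familiar one-dimensional analogue: a minimum-time move of distance L under a speed limit vmax and an acceleration limit amax is accelerate-at-limit / cruise / decelerate-at-limit (trapezoidal), or purely accelerate/decelerate (triangular) if the speed limit is never reached. A sketch of that closed form, without the tracking-error and jerk constraints of the paper's full formulation:

```python
import math

def min_time(L, vmax, amax):
    # Minimum traversal time for distance L with |v| <= vmax, |a| <= amax,
    # starting and ending at rest; the optimal acceleration is bang-bang.
    t_acc = vmax / amax            # time to reach the speed limit
    d_acc = 0.5 * amax * t_acc**2  # distance covered while accelerating
    if 2 * d_acc >= L:
        # Triangular profile: the speed limit is never reached.
        return 2 * math.sqrt(L / amax)
    # Trapezoidal profile: accelerate, cruise, decelerate.
    return 2 * t_acc + (L - 2 * d_acc) / vmax

print(min_time(10.0, 2.0, 1.0))  # trapezoidal case
print(min_time(1.0, 10.0, 1.0))  # triangular case
```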
Atomic Stretch: Optimally bounded real-time stretching and beyond
DEFF Research Database (Denmark)
Jensen, Rasmus Ramsbøl; Nielsen, Jannik Boll
2016-01-01
Atomic Stretch is a plugin for your preferred Adobe video editing tool, allowing real-time, smooth and optimally bounded retargeting from and to any aspect ratio. The plugin allows preserving high-interest pixels through a protected region, attention redirection through color modification, co...
Optimal Design of Modern Transformerless PV Inverter Topologies
DEFF Research Database (Denmark)
Saridakis, Stefanos; Koutroulis, Eftichios; Blaabjerg, Frede
2013-01-01
The design optimization of H5, H6, neutral point clamped, active-neutral point clamped, and conergy-NPC transformerless photovoltaic (PV) inverters is presented in this paper. The components reliability in terms of the corresponding malfunctions, affecting the PV inverter maintenance cost during the operational lifetime period of the PV installation, is also considered in the optimization process. According to the results of the proposed design method, different optimal values of the PV inverter design variables are derived for each PV inverter topology and installation site. The H5, H6, neutral point clamped, active-neutral point clamped and conergy-NPC PV inverters designed using the proposed optimization process feature lower levelized cost of generated electricity and lifetime cost, longer mean time between failures, and inject more PV-generated energy into the electric grid than their nonoptimized...
Visualizing Robustness of Critical Points for 2D Time-Varying Vector Fields
Wang, B.
2013-06-01
Analyzing critical points and their temporal evolutions plays a crucial role in understanding the behavior of vector fields. A key challenge is to quantify the stability of critical points: more stable points may represent more important phenomena or vice versa. The topological notion of robustness is a tool which allows us to quantify rigorously the stability of each critical point. Intuitively, the robustness of a critical point is the minimum amount of perturbation necessary to cancel it within a local neighborhood, measured under an appropriate metric. In this paper, we introduce a new analysis and visualization framework which enables interactive exploration of robustness of critical points for both stationary and time-varying 2D vector fields. This framework allows the end-users, for the first time, to investigate how the stability of a critical point evolves over time. We show that this depends heavily on the global properties of the vector field and that structural changes can correspond to interesting behavior. We demonstrate the practicality of our theories and techniques on several datasets involving combustion and oceanic eddy simulations and obtain some key insights regarding their stable and unstable features. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.
Visualizing Robustness of Critical Points for 2D Time-Varying Vector Fields
Wang, B.; Rosen, P.; Skraba, P.; Bhatia, H.; Pascucci, V.
2013-01-01
Analyzing critical points and their temporal evolutions plays a crucial role in understanding the behavior of vector fields. A key challenge is to quantify the stability of critical points: more stable points may represent more important phenomena or vice versa. The topological notion of robustness is a tool which allows us to quantify rigorously the stability of each critical point. Intuitively, the robustness of a critical point is the minimum amount of perturbation necessary to cancel it within a local neighborhood, measured under an appropriate metric. In this paper, we introduce a new analysis and visualization framework which enables interactive exploration of robustness of critical points for both stationary and time-varying 2D vector fields. This framework allows the end-users, for the first time, to investigate how the stability of a critical point evolves over time. We show that this depends heavily on the global properties of the vector field and that structural changes can correspond to interesting behavior. We demonstrate the practicality of our theories and techniques on several datasets involving combustion and oceanic eddy simulations and obtain some key insights regarding their stable and unstable features. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.
Multiobjective optimization in Gene Expression Programming for Dew Point
Shroff, Siddharth; Dabhi, Vipul
2013-01-01
The processes occurring in climatic change evolution and their variations play a major role in environmental engineering. Different techniques are used to model the relationship between temperatures, dew point and relative humidity. Gene expression programming is capable of modelling complex realities with great accuracy, allowing, at the same time, the extraction of knowledge from the evolved models compared to other learning algorithms. This research aims to use Gene Expression Programming ...
An optimal cut-off point for the calving interval may be used as an indicator of bovine abortions.
Bronner, Anne; Morignat, Eric; Gay, Emilie; Calavas, Didier
2015-10-01
The bovine abortion surveillance system in France aims to detect as early as possible any resurgence of bovine brucellosis, a disease of which the country has been declared free since 2005. It relies on the mandatory notification and testing of each aborting cow, but under-reporting is high. This research uses a new and simple approach which considers the calving interval (CI) as a "diagnostic test" to determine optimal cut-off point c and estimate diagnostic performance of the CI to identify aborting cows, and herds with multiple abortions (i.e. three or more aborting cows per calving season). The period between two artificial inseminations (AI) was considered as a "gold standard". During the 2006-2010 calving seasons, the mean optimal CI cut-off point for identifying aborting cows was 691 days for dairy cows and 703 days for beef cows. Depending on the calving season, production type and scale at which c was computed (individual or herd), the average sensitivity of the CI varied from 42.6% to 64.4%; its average specificity from 96.7% to 99.7%; its average positive predictive value from 27.6% to 65.4%; and its average negative predictive value from 98.7% to 99.8%. When applied to the French bovine population as a whole, this indicator identified 2-3% of cows suspected to have aborted, and 10-15% of herds suspected of multiple abortions. The optimal cut-off point and CI performance were consistent over calving seasons. By applying an optimal CI cut-off point to the cattle demographics database, it becomes possible to identify herds with multiple abortions, carry out retrospective investigations to find the cause of these abortions and monitor a posteriori compliance of farmers with their obligation to report abortions for brucellosis surveillance needs. Therefore, the CI could be used as an indicator of abortions to help improve the current mandatory notification surveillance system. Copyright © 2015 Elsevier B.V. All rights reserved.
Time dependent optimal switching controls in online selling models
Energy Technology Data Exchange (ETDEWEB)
Bradonjic, Milan [Los Alamos National Laboratory]; Cohen, Albert [Michigan State University]
2010-01-01
We present a method to incorporate dishonesty in online selling via a stochastic optimal control problem. In our framework, the seller wishes to maximize her average wealth level W at a fixed time T of her choosing. The corresponding Hamilton-Jacobi-Bellman (HJB) equation is analyzed for a basic case. For more general models, the admissible control set is restricted to a jump process that switches between extreme values. We propose a new approach, where the optimal control problem is reduced to a multivariable optimization problem.
Resource-Optimal Scheduling Using Priced Timed Automata
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Rasmussen, Jacob Illum; Subramani, K.
2004-01-01
In this paper, we show how the simple structure of the linear programs encountered during symbolic minimum-cost reachability analysis of priced timed automata can be exploited in order to substantially improve the performance of the current algorithm. The idea is rooted in duality of linear...-80 percent performance gain. As a main application area, we show how to solve energy-optimal task graph scheduling problems using the framework of priced timed automata.
Minimum Time Path Planning for Robotic Manipulator in Drilling/ Spot Welding Tasks
Directory of Open Access Journals (Sweden)
Qiang Zhang
2016-04-01
In this paper, a minimum-time path planning strategy is proposed for multi-point manufacturing problems in drilling/spot welding tasks. By optimizing the travelling schedule of the set of points and the detailed transfer path between points, the minimum-time manufacturing task is realized while fully utilizing the dynamic performance of the robotic manipulator. Owing to the start-stop movement in drilling/spot welding tasks, the path planning problem can be converted into a travelling salesman problem (TSP) and a series of point-to-point minimum-time transfer path planning problems. A cubic Hermite interpolation polynomial is used to parameterize the transfer path, and the path parameters are then optimized to obtain the minimum point-to-point transfer time. A new TSP with a minimum-time index is constructed by using the point-to-point transfer time as the TSP parameter. The classical genetic algorithm (GA) is applied to obtain the optimal travelling schedule. Several minimum-time drilling tasks of a 3-DOF robotic manipulator are used as examples to demonstrate the effectiveness of the proposed approach.
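The scheduling half of the decomposition above takes the point-to-point minimum transfer times as a TSP cost matrix and searches for the visiting order with minimum total time. A sketch with an arbitrary symmetric time matrix and brute-force search standing in for the genetic algorithm (which only pays off at larger point counts); the matrix entries are hypothetical, whereas the paper derives them from dynamics-constrained trajectory optimization:

```python
from itertools import permutations

T = [  # hypothetical point-to-point transfer times between 5 drill points (s)
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def best_tour(T):
    # Exhaustive TSP: fix the start at point 0, try every order of the rest.
    n = len(T)
    best_time, best_order = float("inf"), None
    for perm in permutations(range(1, n)):
        order = (0,) + perm
        t = sum(T[order[i]][order[i + 1]] for i in range(n - 1))
        t += T[order[-1]][0]  # return to the start point
        if t < best_time:
            best_time, best_order = t, order
    return best_time, best_order

total, tour = best_tour(T)
print(total, tour)
```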
Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method
Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.
2018-01-01
Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of the surfaces of workpieces. This, in turn, raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective measuring tools for solving such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modelling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by application examples for flatness, cylindricity and sphericity. Four options of uniform and non-uniform arrangement of control points are considered and compared. It is revealed that as the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases, and the probability of a measurement α-error increases. In general, it has been established that it is possible to reduce the number of control points severalfold while maintaining the required measurement accuracy.
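The Monte Carlo idea above can be sketched for the flatness case: probe a perfectly flat surface with Gaussian sensor noise at n control points, so the peak-to-valley of the readings is pure measurement error, and compare the spread of that error for few versus many points. Numbers are illustrative, not the article's CMM data:

```python
import numpy as np

rng = np.random.default_rng(42)

def flatness_stats(n_points, noise=0.002, trials=2000):
    # Simulated probing of a flat surface: each trial measures n_points
    # with Gaussian noise; flatness = peak-to-valley of the readings.
    z = rng.normal(0.0, noise, size=(trials, n_points))
    pv = z.max(axis=1) - z.min(axis=1)
    return pv.mean(), pv.std()

m5, s5 = flatness_stats(5)
m50, s50 = flatness_stats(50)
print(m5, s5, m50, s50)
```

The simulation reproduces the article's qualitative finding: with fewer control points the mean measured flatness drops (the extremes are under-sampled) while the spread of the measurement error grows.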
Investment under uncertainty : Timing and capacity optimization
Wen, Xingang
2017-01-01
This thesis consists of three chapters on analyzing the optimal investment timing and investment capacity for the firm(s) undertaking irreversible investment in an uncertain environment. Chapter 2 studies the investment decision of a monopoly firm when it can adjust output quantity in a market with
PROCESS TIME OPTIMIZATION IN DEPOSITOR AND FILLER
Directory of Open Access Journals (Sweden)
Jesús Iván Ruíz-Ibarra
2017-07-01
As in any industry, in soft drink manufacturing, demand, customer service and production are of great importance, which forces the plant to keep its equipment and production machines in optimal condition so that the product reaches the consumer without delays. It is therefore important to have established times for each process, from when the syrup is elaborated, packaged and distributed until it is purchased by the consumer. After a stopwatch analysis, the most common faults were detected in each process analyzed. In the filler machine the most frequent fault is the accumulation of bottles in the processes before and after filling; in general, the accumulation of bottles is caused by failures in the other equipment of the production line. In the unloading process the most common faults are boxes jammed in the bump and pusher (box pusher) and boxes fallen on the rollers and conveyor platforms. Based on observations of each machine, the actions to be taken to solve the problems that arise are presented. The methodology for obtaining results, analyzing data and making decisions is also described. First, an analysis of operations is done to understand each machine, supported by the machine manuals and the operators themselves; a stopwatch time study is then done to determine the standard time of each process and where the most common faults occur; observations are then made on the machines according to the determined sample size, obtaining the information necessary to take measurements and carry out the study of optimization of the production processes. An analysis of the predetermined process times is also performed using the MTM and MOST time analysis methods. The results for operators with MTM are: Fault Filler = 0.846 minutes, Faultless Filler = 0.61 minutes, Fault Breaker = 0.74 minutes and Fault Flasher = 0.45 minutes. The results for MOST operators are: Fault Filler = 2.58 minutes, Filler Fails
Intelligent flame analysis for an optimized combustion
Energy Technology Data Exchange (ETDEWEB)
Stephan Peper; Dirk Schmidt [ABB Utilities GmbH, Mannheim (Germany)]
2003-07-01
One of the primary challenges in the area of process control is to ensure that many competing optimization goals are accomplished at the same time and considered in a timely manner. This paper describes a successful approach that uses an advanced pattern recognition technology and an intelligent optimization tool to model combustion processes more precisely and optimize them based on a holistic view. 17 PowerPoint slides are also available in the proceedings. 5 figs., 1 tab.
Daripa, Prabir
2011-11-01
We numerically investigate the optimal viscous profile in the constant-time-injection policy of enhanced oil recovery. In particular, we investigate the effect of a combination of interfacial and layer instabilities in three-layer porous media flow on the overall growth of instabilities and thereby characterize the optimal viscous profile. Results based on monotonic and non-monotonic viscous profiles will be presented. Time permitting, we will also present results on multi-layer porous media flows for Newtonian and non-Newtonian fluids and compare the results. The support of the Qatar National Research Fund under a QNRF grant is acknowledged.
Directory of Open Access Journals (Sweden)
Dovrat Kohen
2017-06-01
When subjects are intentionally preparing a curved trajectory, they are engaged in a time-consuming trajectory planning process that is separate from target selection. To investigate the construction of such a plan, we examined the effect of artificially shortening preparation time on the performance of intentionally curved trajectories using the Timed Response task, which enforces premature initiation of movements. Fifteen subjects performed obstacle avoidance movements toward one of four targets that were presented 25 or 350 ms before the “go” signal, imposing short and long preparation time conditions with mean values of 170 ms and 493 ms, respectively. While trajectories with short preparation times showed target specificity at their onset, they were significantly more variable and showed larger angular deviations from the lines connecting their initial position and the target, compared to trajectories with long preparation times. Importantly, the trajectories of the short preparation time movements still reached their end-point targets accurately, with comparable movement durations. We hypothesize that success in the short preparation time condition is the result of an online control mechanism that allows further refinement of the plan during its execution, and we study this control mechanism with a novel trajectory analysis approach using minimum jerk optimization and geometrical modeling. Results show a later agreement of the short preparation time trajectories with the optimal minimum jerk trajectory, accompanied by a later initiation of a parabolic segment. Both observations are consistent with the existence of an online trajectory planning process. Our results suggest that when preparation time is not sufficiently long, subjects execute a more variable and less optimally prepared initial trajectory and exploit online control mechanisms to refine their actions on the fly.
Kohen, Dovrat; Karklinsky, Matan; Meirovitch, Yaron; Flash, Tamar; Shmuelof, Lior
2017-01-01
When subjects are intentionally preparing a curved trajectory, they are engaged in a time-consuming trajectory planning process that is separate from target selection. To investigate the construction of such a plan, we examined the effect of artificially shortening preparation time on the performance of intentionally curved trajectories using the Timed Response task, which enforces premature initiation of movements. Fifteen subjects performed obstacle avoidance movements toward one of four targets that were presented 25 or 350 ms before the “go” signal, imposing short and long preparation time conditions with mean values of 170 ms and 493 ms, respectively. While trajectories with short preparation times showed target specificity at their onset, they were significantly more variable and showed larger angular deviations from the lines connecting their initial position and the target, compared to trajectories with long preparation times. Importantly, the trajectories of the short preparation time movements still reached their end-point targets accurately, with comparable movement durations. We hypothesize that success in the short preparation time condition is the result of an online control mechanism that allows further refinement of the plan during its execution, and we study this control mechanism with a novel trajectory analysis approach using minimum jerk optimization and geometrical modeling. Results show a later agreement of the short preparation time trajectories with the optimal minimum jerk trajectory, accompanied by a later initiation of a parabolic segment. Both observations are consistent with the existence of an online trajectory planning process. Our results suggest that when preparation time is not sufficiently long, subjects execute a more variable and less optimally prepared initial trajectory and exploit online control mechanisms to refine their actions on the fly. PMID:28706478
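The minimum-jerk reference against which the measured trajectories are compared has a standard closed form: for a point-to-point move of duration T, position follows x(t) = x0 + (xf − x0)(10s³ − 15s⁴ + 6s⁵) with s = t/T, which starts and ends with zero velocity and acceleration. A sketch of that textbook profile (the paper's obstacle-avoidance trajectories are more elaborate, with parabolic segments):

```python
import numpy as np

def min_jerk(x0, xf, T, t):
    # Minimum-jerk point-to-point position profile (1D), the standard
    # fifth-order polynomial with zero boundary velocity/acceleration.
    s = np.asarray(t) / T
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

t = np.linspace(0.0, 1.0, 1001)
x = min_jerk(0.0, 0.2, 1.0, t)   # a 20 cm reach over 1 s
v = np.gradient(x, t)            # numerical velocity
print(x[0], x[-1], v[0], v[-1])
```

The bell-shaped velocity profile peaks at 15/8 times the average speed (here 0.375 m/s at mid-movement), one of the signatures used to compare human reaches against the optimal trajectory.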
Optimal task mapping in safety-critical real-time parallel systems
International Nuclear Information System (INIS)
Aussagues, Ch.
1998-01-01
This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command and control systems found in the nuclear domain and, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution centers on the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator, the synchronized product of state-machine task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimal task mapping on a parallel architecture such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on off-line analysis of the feasibility of the deadline-driven dynamic scheduling used to schedule tasks within one processor. This leads us to define the synchronized product; a system of linear constraints is automatically generated, which then allows calculating the maximum load of a group of tasks and verifying their timeliness constraints. The communications, the verification of their timeliness, and their incorporation into the mapping problem constitute the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author)
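The per-processor feasibility analysis mentioned above rests on a classical fact: for independent periodic tasks with deadlines equal to periods, deadline-driven (EDF) scheduling on one processor is feasible iff the utilization sum C_i/T_i is at most 1. A sketch of checking a candidate task mapping against that bound (tasks and mapping are hypothetical; the thesis's synchronized-product analysis handles far richer constraints, including communications):

```python
def edf_feasible(tasks):
    # tasks: list of (wcet C, period T) assigned to one processor.
    # Liu & Layland EDF bound: feasible iff sum(C/T) <= 1.
    return sum(c / t for c, t in tasks) <= 1.0

def mapping_feasible(mapping):
    # mapping: processor index -> list of (C, T) tasks placed there.
    return all(edf_feasible(tasks) for tasks in mapping.values())

mapping = {
    0: [(1, 4), (2, 8), (1, 2)],   # utilization 0.25 + 0.25 + 0.5 = 1.0
    1: [(3, 10), (4, 8)],          # utilization 0.3 + 0.5 = 0.8
}
print(mapping_feasible(mapping))
```

A mapping search then amounts to placing tasks so every processor passes this test, which also yields a sizing criterion: the minimum number of processors is bounded below by the total utilization.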
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng
2014-01-01
In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or ahead of schedule, but cannot be infinitely advanced or infinitely delayed. Obtaining the optimal comprehensive performance can be effective if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The LSI's relative degree of concern for cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC. PMID:24715818
Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng
2014-01-01
In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or ahead of schedule, but cannot be infinitely advanced or infinitely delayed. Obtaining the optimal comprehensive performance can be effective if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The LSI's relative degree of concern for cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC.
Motamed, Nima; Miresmail, Seyed Javad Haji; Rabiee, Behnam; Keyvani, Hossein; Farahani, Behzad; Maadi, Mansooreh; Zamani, Farhad
2016-03-01
The present study was carried out to determine the optimal cutoff points for the homeostatic model assessment of insulin resistance (HOMA-IR) and the quantitative insulin sensitivity check index (QUICKI) in the diagnosis of metabolic syndrome (MetS) and non-alcoholic fatty liver disease (NAFLD). The baseline data of 5511 subjects aged ≥18 years from a cohort study in northern Iran were analyzed. Receiver operating characteristic (ROC) analysis was conducted to determine the discriminatory capability of HOMA-IR and QUICKI in the diagnosis of MetS and NAFLD, and the Youden index was utilized to determine the optimal cutoff points. The optimal cutoff points for HOMA-IR in the diagnosis of MetS and NAFLD were 2.0 [sensitivity=64.4%, specificity=66.8%] and 1.79 [sensitivity=66.2%, specificity=62.2%] in men, and 2.5 [sensitivity=57.6%, specificity=67.9%] and 1.95 [sensitivity=65.1%, specificity=54.7%] in women, respectively. Furthermore, the optimal cutoff points for QUICKI in the diagnosis of MetS and NAFLD were 0.343 [sensitivity=63.7%, specificity=67.8%] and 0.347 [sensitivity=62.9%, specificity=65.0%] in men, and 0.331 [sensitivity=55.7%, specificity=70.7%] and 0.333 [sensitivity=53.2%, specificity=67.7%] in women, respectively. Not only were the optimal cutoff points of HOMA-IR and QUICKI different for MetS and NAFLD, but different cutoff points were also obtained for men and women for each of these two conditions. Copyright © 2016 Elsevier Inc. All rights reserved.
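The Youden-index cutoff selection described in this abstract can be sketched as follows. This is a minimal illustration, not the study's code: the function name, the exhaustive scan over observed scores as candidate thresholds, and the toy data are all assumptions.

```python
def youden_optimal_cutoff(scores, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1,
    scanning every observed score as a candidate cutoff (labels: 1 = diseased)."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# toy HOMA-IR-like scores: a perfect cutoff exists at 2.1
cut, j = youden_optimal_cutoff([1.2, 1.5, 1.9, 2.1, 2.6, 3.0], [0, 0, 0, 1, 1, 1])
```

In practice the candidate thresholds come from an ROC analysis over thousands of subjects, as in the study; the scan above is the same criterion on a toy scale.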
Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time
Daheng Peng; Fang Zhang
2017-01-01
In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.
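The Lagrange embedding behind this kind of result can be written schematically as follows. This is the textbook form of the constrained mean-variance problem, not the paper's own notation: here X_T is the terminal surplus, K the target mean, π the reinsurance-investment strategy, and λ the multiplier.

```latex
\min_{\pi}\ \operatorname{Var}[X_T]
\quad \text{s.t.} \quad \mathbb{E}[X_T] = K
\qquad\Longrightarrow\qquad
\mathcal{L}(\pi,\lambda)
  = \operatorname{Var}[X_T] + 2\lambda\bigl(\mathbb{E}[X_T] - K\bigr)
```

Solving the inner minimization over strategies π for fixed λ (in the abstract, via the backward stochastic differential equation for the multiplier), then choosing λ so the mean constraint binds, traces out the efficient frontier.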
Optimal Real-time Dispatch for Integrated Energy Systems
DEFF Research Database (Denmark)
Anvari-Moghaddam, Amjad; Guerrero, Josep M.; Rahimi-Kian, Ashkan
2016-01-01
With the emergence of small-scale integrated energy systems (IESs), there are significant potentials to increase the functionality of a typical demand-side management (DSM) strategy and typical implementation of building-level distributed energy resources (DERs). By integrating DSM and DERs into a cohesive, networked package that fully utilizes smart energy-efficient end-use devices, advanced building control/automation systems, and integrated communications architectures, it is possible to efficiently manage energy and comfort at the end-use location. In this paper, an ontology-driven multi-agent control system with intelligent optimizers is proposed for optimal real-time dispatch of an integrated building and microgrid system considering coordinated demand response (DR) and DERs management. The optimal dispatch problem is formulated as a mixed integer nonlinear programming problem (MINLP)...
Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects
Directory of Open Access Journals (Sweden)
Ming Li
2014-01-01
Full Text Available Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoffs in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified times and costs with interval uncertainty. Based on a multiobjective robust optimization method, a robust optimization model for the time-cost tradeoff problem is developed. In order to illustrate the robust model, the nondominated sorting genetic algorithm-II (NSGA-II) is modified to solve the project example. The results show that, by means of adjusting the time and cost robust coefficients, the robust Pareto sets for the time-cost tradeoff can be obtained according to different acceptable risk levels, from which the decision maker can choose the preferred construction alternative.
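The Pareto sets mentioned in this abstract rest on the standard dominance relation. A minimal sketch, under the assumption of two minimized objectives (duration, cost); the function names and the toy schedule data are illustrative, not from the paper.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Nondominated (duration, cost) alternatives -- the set NSGA-II approximates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# four candidate schedules as (duration, cost) pairs; (9, 9) is dominated by (8, 7)
front = pareto_front([(10, 5), (8, 7), (12, 4), (9, 9)])
```

NSGA-II itself adds nondominated sorting and crowding-distance selection on top of exactly this relation; the brute-force filter above is only viable for small candidate sets.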
Keren, Baruch; Pliskin, Joseph S
2011-12-01
The optimal timing for performing radical medical procedures such as joint (e.g., hip) replacement must be seriously considered. In this paper we show that under deterministic assumptions the optimal timing for joint replacement is the solution of a mathematical programming problem, and under stochastic assumptions the optimal timing can be formulated as a stochastic programming problem. We formulate deterministic and stochastic models that can serve as decision support tools. The results show that the benefit from joint replacement surgery is heavily dependent on timing. Moreover, for a special case where the patient's remaining life is normally distributed along with a normally distributed survival of the new joint, the expected benefit function from surgery is completely solved. This enables practitioners to draw the expected benefit graph, to find the optimal timing, to evaluate the benefit for each patient, to set priorities among patients, and to decide if and when joint replacement should be performed.
Optimal Retention Level for Infinite Time Horizons under MADM
Directory of Open Access Journals (Sweden)
Başak Bulut Karageyik
2016-12-01
Full Text Available In this paper, we approximate the aggregate claims process by using the translated gamma process under the classical risk model assumptions, and we investigate the ultimate ruin probability. We consider optimal reinsurance under the minimum ultimate ruin probability, as well as the maximum benefit criteria: released capital, expected profit and exponential-fractional-logarithmic utility from the insurer's point of view. Numerical examples are presented to explain how the optimal initial surplus and retention level change according to the individual claim amounts, loading factors and weights of the criteria. In the decision making process, we use the Analytical Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) as the Multi-Attribute Decision Making (MADM) methods and compare our results considering different combinations of loading factors for both exponential and Pareto individual claims.
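The TOPSIS ranking step used in this abstract can be sketched in its textbook form: vector-normalize each criterion, weight it, and score each alternative by relative closeness to the ideal point. The function names and toy decision matrix are assumptions, not the paper's data.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply the weights
    norm = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norm[j] for j in range(n)] for i in range(m)]
    col = lambda j: [v[i][j] for i in range(m)]
    ideal = [max(col(j)) if benefit[j] else min(col(j)) for j in range(n)]
    worst = [min(col(j)) if benefit[j] else max(col(j)) for j in range(n)]
    closeness = []
    for row in v:
        d_pos = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, worst)))
        closeness.append(d_neg / (d_pos + d_neg))  # 1 = ideal, 0 = anti-ideal
    return closeness
```

An alternative that is best on every weighted criterion scores exactly 1; one that is worst on every criterion scores 0, so ranking by the returned closeness recovers the TOPSIS ordering.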
WANG, Qingrong; ZHU, Changfeng; LI, Ying; ZHANG, Zhengkun
2017-06-01
Considering the time dependence of the emergency logistics network and the complexity of the environment in which the network exists, this paper combines time-dependent network optimization theory with robust discrete optimization theory and builds a robust emergency logistics dynamic network optimization model to maximize the timeliness of emergency logistics. On this basis, considering the complexity of the dynamic network and the time dependence of edge weights, an improved ant colony algorithm is proposed to couple the optimization algorithm with the network's time dependence and robustness. Finally, a case study is carried out to verify the validity of this robust optimization model and its algorithm, and the values of different regulation factors are analyzed, given the importance of the control factor in solving for the optimal path. The analysis results show that the model and algorithm have good timeliness and strong robustness.
Integrals of Motion for Discrete-Time Optimal Control Problems
Torres, Delfim F. M.
2003-01-01
We obtain a discrete time analog of E. Noether's theorem in Optimal Control, asserting that integrals of motion associated to the discrete time Pontryagin Maximum Principle can be computed from the quasi-invariance properties of the discrete time Lagrangian and discrete time control system. As corollaries, results for first-order and higher-order discrete problems of the calculus of variations are obtained.
Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time
Directory of Open Access Journals (Sweden)
Daheng Peng
2017-10-01
Full Text Available In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.
Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics
Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L.
2018-02-01
A time-dependent coupled-cluster method with time-varying orbital functions, called the time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, and is a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method to intense-laser-driven correlated electron dynamics in the Ar atom is reported.
Genetic algorithm for project time-cost optimization in fuzzy environment
Directory of Open Access Journals (Sweden)
Khan Md. Ariful Haque
2012-12-01
Full Text Available Purpose: The aim of this research is to develop a more realistic approach to solving the project time-cost optimization problem under uncertain conditions, with fuzzy time periods. Design/methodology/approach: Deterministic models for time-cost optimization are never efficient considering various uncertainty factors. To make such problems realistic, triangular fuzzy numbers and the concept of the α-cut method in fuzzy logic theory are employed to model the problem. Because of the NP-hard nature of the project scheduling problem, a Genetic Algorithm (GA) has been used as a searching tool. Finally, Dev-C++ 4.9.9.2 has been used to code this solver. Findings: The solution has been performed under different combinations of GA parameters, and after result analysis the optimum values of those parameters have been found for the best solution. Research limitations/implications: To demonstrate the application of the developed algorithm, a project on a new product (a pre-paid electric meter) launched under government finance has been chosen as a real case. The algorithm is developed under some assumptions. Practical implications: The proposed model leads decision makers to choose the desired solution under different risk levels. Originality/value: Reports reveal that project optimization problems have never been solved under multiple uncertainty conditions. Here, the function has been optimized using the Genetic Algorithm search technique, with varied levels of risk and fuzzy time periods.
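The α-cut of a triangular fuzzy number, which this abstract uses to model uncertain activity durations, has a simple closed form. A minimal sketch; the function name and the example numbers are assumptions.

```python
def alpha_cut_triangular(a, b, c, alpha):
    """Interval [lo, hi] where a triangular fuzzy number (a, b, c) has
    membership at least alpha (alpha=0 -> full support, alpha=1 -> the peak b)."""
    lo = a + alpha * (b - a)
    hi = c - alpha * (c - b)
    return lo, hi

# a fuzzy activity duration "about 4 days, between 2 and 8"
lo, hi = alpha_cut_triangular(2.0, 4.0, 8.0, 0.5)   # interval at confidence level 0.5
```

Raising α shrinks the interval toward the most plausible duration b, which is how the risk level chosen by the decision maker enters the fuzzy time-cost model.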
Dual time-point FDG PET/CT for differentiating benign from ...
African Journals Online (AJOL)
Maximum standardized uptake values (SUVmax) with the greatest uptake in the lesion were calculated for two time points (SUV1 and SUV2), and the percentage change over time per lesion was calculated (%ΔSUV). Routine histological findings served as the gold standard. Results. Histological examination showed that 14 ...
Bonnetain, Franck; Bonsing, Bert; Conroy, Thierry; Dousseau, Adelaide; Glimelius, Bengt; Haustermans, Karin; Lacaine, François; Van Laethem, Jean Luc; Aparicio, Thomas; Aust, Daniela; Bassi, Claudio; Berger, Virginie; Chamorey, Emmanuel; Chibaudel, Benoist; Dahan, Laeticia; De Gramont, Aimery; Delpero, Jean Robert; Dervenis, Christos; Ducreux, Michel; Gal, Jocelyn; Gerber, Erich; Ghaneh, Paula; Hammel, Pascal; Hendlisz, Alain; Jooste, Valérie; Labianca, Roberto; Latouche, Aurelien; Lutz, Manfred; Macarulla, Teresa; Malka, David; Mauer, Muriel; Mitry, Emmanuel; Neoptolemos, John; Pessaux, Patrick; Sauvanet, Alain; Tabernero, Josep; Taieb, Julien; van Tienhoven, Geertjan; Gourgou-Bourgade, Sophie; Bellera, Carine; Mathoulin-Pélissier, Simone; Collette, Laurence
2014-11-01
Using potential surrogate end-points for overall survival (OS), such as Disease-Free Survival (DFS) or Progression-Free Survival (PFS), is increasingly common in randomised controlled trials (RCTs). However, end-points are too often imprecisely defined, which largely contributes to a lack of homogeneity across trials, hampering comparison between them. The aim of the DATECAN (Definition for the Assessment of Time-to-event End-points in CANcer trials)-Pancreas project is to provide guidelines for the standardised definition of time-to-event end-points in RCTs for pancreatic cancer. Time-to-event end-points currently used were identified from a literature review of pancreatic RCTs (2006-2009). Academic research groups were contacted for participation in order to select clinicians and methodologists for the pilot and scoring groups (>30 experts). A consensus was built after 2 rounds of the modified Delphi formal consensus approach with the Rand scoring methodology (range: 1-9). For pancreatic cancer, 14 time-to-event end-points and 25 distinct event types applied to two settings (detectable disease and/or no detectable disease) were considered relevant and included in the questionnaire sent to 52 selected experts. Thirty experts answered both scoring rounds. A total of 204 events distributed over the 14 end-points were scored. After the first round, consensus was reached for 25 items; after the second, consensus was reached for 156 items; and after the face-to-face meeting, for 203 items. The formal consensus approach reached the elaboration of guidelines for standardised definitions of time-to-event end-points allowing cross-comparison of RCTs in pancreatic cancer. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optimal pricing of non-utility generated electric power
International Nuclear Information System (INIS)
Siddiqi, S.N.; Baughman, M.L.
1994-01-01
The importance of an optimal pricing policy for pricing non-utility generated power is pointed out in this paper. An optimal pricing policy leads to benefits for all concerned: the utility, industry, and the utility's other customers. In this paper, it is shown that reliability differentiated real-time pricing provides an optimal non-utility generated power pricing policy, from a societal welfare point of view. Firm capacity purchase, and hence an optimal price for purchasing firm capacity, are an integral part of this pricing policy. A case study shows that real-time pricing without firm capacity purchase results in improper investment decisions and higher costs for the system as a whole. Without explicit firm capacity purchase, the utility makes greater investment in capacity addition in order to meet its reliability criteria than is socially optimal. It is concluded that the non-utility generated power pricing policy presented in this paper and implied by reliability differentiated pricing policy results in social welfare-maximizing investment and operation decisions
Real Time Optimal Control of Supercapacitor Operation for Frequency Response
Energy Technology Data Exchange (ETDEWEB)
Luo, Yusheng; Panwar, Mayank; Mohanpurkar, Manish; Hovsapian, Rob
2016-07-01
Supercapacitors are gaining wider application in power systems due to their fast dynamic response. Utilizing supercapacitors by means of power electronics interfaces for power compensation is a proven effective technique. For applications such as frequency restoration, however, the cost of supercapacitor maintenance as well as the energy loss in the power electronics interfaces must be addressed. It is infeasible to use traditional optimization control methods to mitigate the impacts of frequent cycling. This paper proposes a Front End Controller (FEC) using Generalized Predictive Control featuring real-time receding-horizon optimization. The optimization constraints are based on cost and thermal management to enhance the utilization efficiency of supercapacitors. A rigorous mathematical derivation is conducted, and test results acquired from a Digital Real Time Simulator are provided to demonstrate effectiveness.
Spectroscopic determination of optimal hydration time of zircon surface
Energy Technology Data Exchange (ETDEWEB)
Ordonez R, E. [ININ, Departamento de Quimica, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Garcia R, G. [Instituto Tecnologico de Toluca, Division de Estudios del Posgrado, Av. Tecnologico s/n, Ex-Rancho La Virgen, 52140 Metepec, Estado de Mexico (Mexico); Garcia G, N., E-mail: eduardo.ordonez@inin.gob.m [Universidad Autonoma del Estado de Mexico, Facultad de Quimica, Av. Colon y Av. Tollocan, 50180 Toluca, Estado de Mexico (Mexico)
2010-07-01
When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous-solution contact time for complete surface hydration is mandatory for further studies of surface phenomena. This study deals with the optimal hydration time of the raw zircon (ZrSiO{sub 4}) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable, as it demands only one sample batch to determine the optimal time needed to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy{sup 3+}, Eu{sup 3+} and Er{sup 3+} in the bulk of the zircon. The Dy{sup 3+} is incorporated in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, Dy{sup 3+} has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method requires only 5 minutes with only one batch. Both methods showed that the zircon surface has an optimal hydration time of 16 h. (Author)
Spectroscopic determination of optimal hydration time of zircon surface
International Nuclear Information System (INIS)
Ordonez R, E.; Garcia R, G.; Garcia G, N.
2010-01-01
When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous-solution contact time for complete surface hydration is mandatory for further studies of surface phenomena. This study deals with the optimal hydration time of the raw zircon (ZrSiO 4 ) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable, as it demands only one sample batch to determine the optimal time needed to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy 3+ , Eu 3+ and Er 3+ in the bulk of the zircon. The Dy 3+ is incorporated in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, Dy 3+ has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method requires only 5 minutes with only one batch. Both methods showed that the zircon surface has an optimal hydration time of 16 h. (Author)
Spectrum optimization-based chaotification using time-delay feedback control
International Nuclear Information System (INIS)
Zhou Jiaxi; Xu Daolin; Zhang Jing; Liu Chunrong
2012-01-01
Highlights: ► A time-delay feedback controller is designed for chaotification. ► A spectrum optimization method is proposed to determine chaotification parameters. ► Numerical examples verify the spectrum optimization-based chaotification method. ► Engineering application in line spectrum reconfiguration is demonstrated. - Abstract: In this paper, a spectrum optimization method is developed for chaotification in conjunction with an application in line spectrum reconfiguration. A key performance index (the objective function) based on the Fourier spectrum is specially devised with the idea of suppressing spectrum spikes and broadening the frequency band. Minimization of the index, empowered by a genetic algorithm, makes it possible to locate favorable parameters of the time-delay feedback controller, by which a line spectrum of harmonic vibration can be transformed into a broad-band continuous spectrum of chaotic motion. Numerical simulations are carried out to verify the feasibility of the method and to demonstrate its effectiveness in chaotifying a 2-DOF linear mechanical system.
Optimal replacement time estimation for machines and equipment based on cost function
J. Šebo; J. Buša; P. Demeč; J. Svetlík
2013-01-01
The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines considered, for which the optimization method is usable, are those of metallurgical and engineering production. Different models of the cost function are considered (with both one and two variables). Parameters of the models were calculated through the least squares method. Testing of the models shows that all are good enough, so for estimation of optimal replacement time is ...
International Nuclear Information System (INIS)
Chatterjee, Arnab; Zhang, Lijun; Xia, Xiaohua
2015-01-01
Highlights: • DSM techniques are applied to an underground mine ventilation network. • A minimization model is solved to find the optimal speeds of the main mine fan. • Ventilation on demand (VOD) leads to a saving of USD 213160. • The optimal mining schedule, together with VOD, leads to a saving of USD 277035. • According to a case study, a maximum of 2 540 035 kW h can be saved per year. - Abstract: In the current situation of the energy crisis, the mining industry has been identified as a promising area for application of demand side management (DSM) techniques. This paper investigates the potential for energy-cost savings and actual energy savings, by implementation of variable speed drives to ventilation fans in underground mines. In particular, ventilation on demand is considered in the study, i.e., air volume is adjusted according to the demand at varying times. Two DSM strategies, energy efficiency (EE) and load management (LM), are formulated and analysed. By modelling the network with the aid of Kirchhoff’s laws and Tellegen’s theorem, a nonlinear constrained minimization model is developed, with the objective of achieving EE. The model is also made to adhere to the fan laws, such that the fan power at its operating points is found to achieve realistic results. LM is achieved by finding the optimal starting time of the mining schedule, according to the time of use (TOU) tariff. A case study is shown to demonstrate the effects of the optimization model. The study suggests that by combining load shifting and energy efficiency techniques, an annual energy saving of 2 540 035 kW h is possible, leading to an annual cost saving of USD 277035
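The abstract notes that the model is made to adhere to the fan laws so that fan power at the operating points is realistic. The cubic speed-power affinity relation behind ventilation-on-demand savings can be sketched as follows; this is the ideal affinity law only, and the 200 kW rating and speed ratio are illustrative assumptions, not figures from the study.

```python
def fan_power_kw(p_rated_kw, speed_ratio):
    """Ideal fan affinity law: shaft power scales with the cube of speed
    (flow scales linearly with speed, pressure with its square)."""
    return p_rated_kw * speed_ratio ** 3

reduced = fan_power_kw(200.0, 0.8)   # power when a 200 kW fan runs at 80% speed
saving = 200.0 - reduced             # power saved while air demand is low
```

The cubic dependence is what makes variable speed drives so effective here: a modest 20% speed reduction during low-demand periods cuts fan power by roughly half.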
Real-time estimation of optical flow based on optimized haar wavelet features
DEFF Research Database (Denmark)
Salmen, Jan; Caup, Lukas; Igel, Christian
2011-01-01
-objective optimization. In this work, we build on a popular algorithm developed for real-time applications. It is originally based on the Census transform and benefits from this encoding for table-based matching and tracking of interest points. We propose to use the more universal Haar wavelet features instead...
Directory of Open Access Journals (Sweden)
Dexin Yu
2016-01-01
Full Text Available In order to optimize the signal timing for an isolated intersection, a new method based on a fuzzy programming approach is proposed in this paper. Considering the whole operational efficiency of the intersection comprehensively, traffic capacity, vehicle cycle delay, cycle stops, and exhaust emission are first chosen as optimization goals to establish a multiobjective function. Then a fuzzy compromise programming approach is employed to give different weight coefficients to the various optimization objectives for different traffic flow ratio states, and the multiobjective function is converted to a single-objective function. By using a genetic algorithm, the optimized signal cycle and effective green time can be obtained. Finally, the performance of the traditional method and the new method proposed in this paper is compared and analyzed through VISSIM software. It can be concluded that the signal timing optimized in this paper can effectively reduce vehicle delays and stops, which can improve the traffic capacity of the intersection as well.
Mean-Variance portfolio optimization when each asset has individual uncertain exit-time
Directory of Open Access Journals (Sweden)
Reza Keykhaei
2016-12-01
Full Text Available The standard Markowitz mean-variance optimization model is a single-period portfolio selection approach in which the exit-time (or time-horizon) is deterministic. In this paper we study the mean-variance portfolio selection problem with uncertain exit-time, where each asset has an individual uncertain exit-time, which generalizes Markowitz's model. We provide some conditions under which the optimal portfolio of the generalized problem is independent of the exit-time distributions. Also, it is shown that under some general circumstances, the sets of optimal portfolios in the generalized model and the standard model are the same.
Optimal robustness of supervised learning from a noniterative point of view
Hu, Chia-Lun J.
1995-08-01
In most artificial neural network applications (e.g. pattern recognition), if the dimension of the input vectors is much larger than the number of patterns to be recognized, generally a one-layered, hard-limited perceptron is sufficient to do the recognition job. As long as the training input-output mapping set is numerically given, and as long as this given set satisfies a special linear-independence relation, the connection matrix that meets the supervised learning requirements can be solved by a noniterative, one-step algebraic method. The learning of this noniterative scheme is very fast (close to real-time learning) because the learning is one-step and noniterative. The recognition of untrained patterns is very robust because a universal geometrical optimization process for selecting the solution can be applied to the learning process. This paper reports the theoretical foundation of this noniterative learning scheme and focuses on the optimal robustness analysis. A real-time character recognition scheme is then designed along this line. This character recognition scheme will be used (in a movie presentation) to demonstrate the experimental results of some theoretical parts reported in this paper.
An Optimized Structure on FPGA of Key Point Detection in SIFT Algorithm
Directory of Open Access Journals (Sweden)
Xu Chenyu
2016-01-01
Full Text Available The SIFT algorithm is the most efficient and powerful algorithm for describing the features of images, and it has been applied in many fields. In this paper, we propose an optimized method for the hardware implementation of the SIFT algorithm. We mainly discuss the structure of Data Generation here. A pipeline architecture is introduced to accelerate this optimized system. The setting of parameters and the control of approximation under different image qualities and hardware resources are the focus of this paper. The results of experiments fully prove that this structure is real-time and effective, and they provide guidance for meeting the demands of different situations.
Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian
2017-08-01
In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multiple switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variations, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
Optimal trading strategies—a time series approach
Bebbington, Peter A.; Kühn, Reimer
2016-05-01
Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana
2016-01-01
In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
Computationally determining the salience of decision points for real-time wayfinding support
Directory of Open Access Journals (Sweden)
Makoto Takemiya
2012-06-01
Full Text Available This study introduces the concept of computational salience to explain the discriminatory efficacy of decision points, which in turn may have applications to providing real-time assistance to users of navigational aids. This research compared algorithms for calculating the computational salience of decision points and validated the results via three methods: high-salience decision points were used to classify wayfinders; salience scores were used to weight a conditional probabilistic scoring function for real-time wayfinder performance classification; and salience scores were correlated with wayfinding-performance metrics. As an exploratory step to linking computational and cognitive salience, a photograph-recognition experiment was conducted. Results reveal a distinction between algorithms useful for determining computational and cognitive saliences. For computational salience, information about the structural integration of decision points is effective, while information about the probability of decision-point traversal shows promise for determining cognitive salience. Limitations from only using structural information and motivations for future work that include non-structural information are elicited.
FPFH-based graph matching for 3D point cloud registration
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration, as it helps obtain a reliable initial alignment. In this paper, we put forward a point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine candidate correspondences. Next, a new objective function is proposed to make graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain the final set of correct correspondences. Finally, we present a novel set partitioning method which transforms the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
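The "reliable initial alignment" that a set of correct correspondences enables can be sketched with the standard SVD-based (Kabsch) rigid-transform estimate; this is a generic illustration of the alignment step, not the paper's graph-matching method, and the point sets below are synthetic:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t with R @ P[i] + t ~= Q[i].

    P, Q: (n, 3) arrays of corresponding points (e.g. feature matches).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Demo: recover a known rotation/translation from noiseless correspondences.
rng = np.random.default_rng(1)
P = np.asarray(rng.standard_normal((10, 3)))
a = 0.3
Rtrue = np.array([[np.cos(a), -np.sin(a), 0],
                  [np.sin(a),  np.cos(a), 0],
                  [0, 0, 1]])
ttrue = np.array([0.5, -1.0, 2.0])
Q = P @ Rtrue.T + ttrue
R, t = rigid_transform(P, Q)
```

With noiseless, outlier-free matches the estimate is exact, which is why correspondence quality dominates initial-alignment quality.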
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
Current point cloud registration software has high hardware requirements and a heavy workload, requires extensive interactive definition, and the source code of better-performing software is not open. To address this, a two-step registration method based on normal-vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with a model of the distribution of normal vectors over a defined adjacency region of the point cloud, sets up a local coordinate system for each key point, and obtains the transformation matrix to complete coarse registration; the coarse registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has clear advantages in both time and precision for large point clouds.
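The fine-registration stage, point-to-point ICP, can be sketched in a few lines: alternate closest-point matching with an SVD-based rigid alignment. This is a minimal generic ICP, not the paper's two-step method; the grid point cloud and the small perturbation below are synthetic:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Point-to-point ICP: align src onto dst (both (n, 3) arrays)."""
    cur = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(cur)                # closest-point matching
        matched = dst[idx]
        cs, cm = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.linalg.det(Vt.T @ U.T)])
        R = Vt.T @ D @ U.T                      # incremental rotation
        cur = (cur - cs) @ R.T + cm             # apply incremental transform
    return cur

# Synthetic 'scan': a regular grid, misaligned by a small rigid motion.
g = np.arange(5.0)
dst = np.array(np.meshgrid(g, g, g[:4])).reshape(3, -1).T   # 100 points
a = 0.03
Rtrue = np.array([[np.cos(a), -np.sin(a), 0],
                  [np.sin(a),  np.cos(a), 0],
                  [0, 0, 1]])
src = dst @ Rtrue.T + np.array([0.05, -0.05, 0.02])
aligned = icp(src, dst)
```

Because the misalignment is small relative to the point spacing, the closest-point matches are correct from the first iteration, so ICP snaps to the exact alignment, which is precisely why a good coarse registration step matters.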
Continuous and Discrete-Time Optimal Controls for an Isolated Signalized Intersection
Directory of Open Access Journals (Sweden)
Jiyuan Tan
2017-01-01
Full Text Available A classical control problem for an isolated oversaturated intersection is revisited with a focus on the optimal control policy to minimize total delay. The difference and connection between existing continuous-time planning models and recently proposed discrete-time planning models are studied. A gradient descent algorithm is proposed that, in many cases, converts the optimal control plan of the continuous-time model into a plan for the discrete-time model. An analytic proof and numerical tests of the algorithm are also presented. The findings shed light on the links between the two kinds of models.
El-Malah, Yasser; Nazzal, Sami
2013-01-01
The objective of this work was to study the dissolution and mechanical properties of fast-dissolving films prepared from a ternary mixture of pullulan, polyvinylpyrrolidone, and hypromellose. Disintegration studies were performed in real time by probe spectroscopy to detect the onset of film disintegration. Tensile strength and elastic modulus of the films were measured by texture analysis. Disintegration time of the films ranged from 21 to 105 seconds, whereas their mechanical properties ranged from approximately 2 to 49 MPa for tensile strength and 1 to 21 MPa% for Young's modulus. After generating polynomial models correlating the variables using a D-optimal mixture design, an optimal formulation with the desired responses was proposed by the statistical package. For validation, a new film formulation loaded with diclofenac sodium, based on the optimized composition, was prepared and tested for dissolution and tensile strength. Dissolution of the optimized film was found to commence almost immediately, with 50% of the drug released within one minute. Tensile strength and Young's modulus of the film were 11.21 MPa and 6.78 MPa%, respectively. Real-time spectroscopy in conjunction with statistical design was shown to be very efficient for the optimization and development of non-conventional intraoral delivery systems such as fast-dissolving films.
An Optimization Framework for Dynamic, Distributed Real-Time Systems
Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara
2003-01-01
Abstract. This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments, utility, and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.
Road maintenance optimization through a discrete-time semi-Markov decision process
International Nuclear Information System (INIS)
Zhang Xueqing; Gao Hui
2012-01-01
Optimization models are necessary for efficient and cost-effective maintenance of a road network. In this regard, road deterioration is commonly modeled as a discrete-time Markov process such that an optimal maintenance policy can be obtained based on the Markov decision process, or as a renewal process such that an optimal maintenance policy can be obtained based on renewal theory. However, the discrete-time Markov process cannot capture the actual time at which state transitions occur, while the renewal process considers only one state and one maintenance action. In this paper, road deterioration is modeled as a semi-Markov process in which the state transition has the Markov property and the holding time in each state is assumed to follow a discrete Weibull distribution. Based on this semi-Markov process, linear programming models are formulated for both infinite and finite planning horizons in order to derive optimal maintenance policies that minimize the life-cycle cost of a road network. A hypothetical road network is used to illustrate the application of the proposed optimization models. The results indicate that these linear programming models are practical for the maintenance of a road network having a large number of road segments and that they make it convenient to incorporate various constraints on the decision process, for example, performance requirements and available budgets. Although the optimal maintenance policies obtained for the road network are randomized stationary policies, the extent of this randomness in decision making is limited. The maintenance actions are deterministic for most states and randomness in selecting actions occurs only for a few states.
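The linear-programming route to a maintenance policy can be sketched on a toy example. A plain discrete-time Markov chain stands in for the semi-Markov holding times, and the three road states, two actions, costs, and transition probabilities below are all invented for illustration; the LP minimizes long-run average cost over occupancy measures x(s, a):

```python
import numpy as np
from scipy.optimize import linprog

# States: 0=good, 1=fair, 2=poor; actions: 0=do nothing, 1=repair.
P = np.zeros((2, 3, 3))                         # P[a, s, s'] transitions
P[0] = [[0.7, 0.3, 0.0],                        # untreated roads deteriorate
        [0.0, 0.6, 0.4],
        [0.0, 0.0, 1.0]]
P[1] = [[1.0, 0.0, 0.0]] * 3                    # repair restores to 'good'
c = np.array([[0.0, 1.0, 5.0],                  # c[a, s]: user cost by state
              [2.0, 2.5, 3.0]])                 # repair cost incl. works

nS, nA = 3, 2
# Variables x[s, a]: long-run fraction of time in state s taking action a.
A_eq = np.zeros((nS + 1, nS * nA))
for j in range(nS):                              # balance: flow in = flow out
    for s in range(nS):
        for a in range(nA):
            A_eq[j, s * nA + a] = (j == s) - P[a, s, j]
A_eq[nS] = 1.0                                   # occupancies sum to one
b_eq = np.append(np.zeros(nS), 1.0)
cost = np.array([c[a, s] for s in range(nS) for a in range(nA)])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(nS, nA)
policy = x.argmax(axis=1)                        # action per visited state
```

For these numbers the LP chooses to repair as soon as the road turns fair, never letting it reach the expensive poor state, exactly the kind of deterministic-on-most-states policy the abstract describes.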
Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.
Hawary, A. F.; Razak, N. A.
2018-05-01
Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human has a limited line of sight and is prone to errors due to carelessness and fatigue. A good alternative is to equip the UAV with semi-autonomous capabilities so that it can navigate via a pre-planned route in real time. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem that adopts a brute-force search method to re-optimize the route in the event of collisions detected by a range-finder sensor. The former utilizes a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm. Both algorithms are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision detection optimizer. Therefore, we explore the practical benefit of this approach using on-board Arduino and ArduPilot controllers by manually emulating the motion of an actual UAV model prior to testing at the flying site. The results showed that the range-finder sensor provides real-time data to the algorithm, which finds a collision-free path and successfully optimizes the route.
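The Nearest Neighbour route-construction step can be sketched as follows; the waypoint coordinates are invented, and the GA and collision handling from the paper are omitted:

```python
import math

def route_length(route, pts):
    """Total Euclidean length of a route given as a list of waypoint indices."""
    return sum(math.dist(pts[route[i]], pts[route[i + 1]])
               for i in range(len(route) - 1))

def nearest_neighbour(pts, start=0):
    """Greedy route: repeatedly fly to the closest unvisited waypoint."""
    todo = set(range(len(pts))) - {start}
    route = [start]
    while todo:
        here = route[-1]
        nxt = min(todo, key=lambda j: math.dist(pts[here], pts[j]))
        route.append(nxt)
        todo.remove(nxt)
    return route

waypoints = [(0, 0), (5, 1), (1, 1), (6, 0), (2, 2)]   # hypothetical survey points
route = nearest_neighbour(waypoints)
length = route_length(route, waypoints)
```

Nearest Neighbour is fast enough for on-board re-planning but only heuristic, which is why the paper pairs it with a GA for the global route.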
Directory of Open Access Journals (Sweden)
Yogang Singh
2018-03-01
Full Text Available The growing need for ocean surveying and exploration for scientific and industrial applications has led to the requirement of routing strategies for ocean vehicles which are optimal in nature. Most optimal path planning for marine vehicles has been conducted offline in a self-made environment. This paper takes into account a practical marine environment, i.e. Portsmouth Harbour, for finding a path between source and end points, optimal in terms of computational time, on a real-time map for a USV. The current study makes use of a grid map generated from the original map and uses Dijkstra's algorithm to find the shortest path for a single USV. In order to benchmark the study, a path planning study using the well-known local path planning method of artificial potential fields (APF) has been conducted in a real-time marine environment, and effectiveness is measured in terms of path length and computational time.
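The grid-map Dijkstra step might look like the minimal version below; the 0/1 grid is a stand-in for the harbour map (1 = obstacle), with unit-cost moves on a 4-connected grid:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest 4-connected path cost on a 0/1 occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                            # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

harbour = [[0, 0, 0, 0],
           [1, 1, 1, 0],                        # a breakwater to route around
           [0, 0, 0, 0]]
cost = dijkstra_grid(harbour, (0, 0), (2, 0))
```

On a real chart the grid cells would carry the rasterized land/water mask, but the search itself is unchanged.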
Directory of Open Access Journals (Sweden)
Ahmet Demir
2017-01-01
Full Text Available In fields which require finding the most appropriate value, optimization has become a vital approach for producing effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. After a while, however, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the use of classical optimization solutions and Artificial Intelligence solutions, enabling readers to form an idea of the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus 11th Edition, and the obtained results have been compared with classical optimization solutions.
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
The Grey-Markov forecasting model is a combination of the grey prediction model and a Markov chain, which shows clear benefits for data sequences with non-stationary and volatile characteristics. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into state division to calculate the possibility of the research values lying in each state, reflecting the preference degrees of the different states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
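For context, the classical GM(1,1) grey prediction model underlying Grey-Markov forecasting can be sketched as below, without the Markov state division or the paper's background-value optimization; on near-exponential data the fit is almost exact:

```python
import numpy as np

def gm11(x0, horizon=1):
    """Classical GM(1,1): fit development coefficient a and grey input b,
    then extrapolate `horizon` steps beyond the sample."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated (1-AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])               # classical background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)         # de-accumulate to x0 scale

# A near-exponential series (5% growth) is reproduced almost exactly.
x0 = 100 * 1.05 ** np.arange(8)
pred = gm11(x0, horizon=2)
```

The background value z1 computed above is exactly the term the paper replaces with an optimized version to reduce fitting error on volatile data.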
Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.
2017-05-01
While virtual copies of the real world tend to be created faster than ever through point clouds and their derivatives, their proficient use by all professionals demands adapted tools to facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible on any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure, where metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for the end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.
Time Optimal Control Laws for Bilinear Systems
Directory of Open Access Journals (Sweden)
Salim Bichiou
2018-01-01
Full Text Available The aim of this paper is to determine the feedforward and state-feedback suboptimal time control, namely the control sequence and the reaching time, for a subset of bilinear systems. This paper proposes a method that uses block pulse functions as an orthogonal base. The bilinear system is projected along that base, and the mathematical integration is transformed into a product of matrices, yielding an algebraic system of equations. This system, together with specified constraints, is treated as an optimization problem. The parameters to determine are the final time, the control sequence, and the state trajectories. The results obtained via the newly proposed method are compared to known analytical solutions.
The Optimal Timing of Adoption of a Green Technology
International Nuclear Information System (INIS)
Cunha-e-Sa, M.A.; Reis, A.B.
2007-01-01
We study the optimal timing of adoption of a cleaner technology and its effects on the rate of growth of an economy in the context of an AK endogenous growth model. We show that the results depend upon the behavior of the marginal utility of environmental quality with respect to consumption. When it is increasing, we derive the capital level at the optimal timing of adoption. We show that this capital threshold is independent of the initial conditions on the stock of capital, implying that capital-poor countries tend to take longer to adopt. Also, country-specific characteristics, such as the existence of high barriers to adoption, may lead to different capital thresholds for different countries. If the marginal utility of environmental quality decreases with consumption, a country should never delay adoption; the optimal policy is either to adopt immediately or, if adoption costs are too high, to never adopt. The policy implications of these results are discussed in the context of the international debate surrounding the environmental political agenda.
Optimal 25-Point Finite-Difference Subgridding Techniques for the 2D Helmholtz Equation
Directory of Open Access Journals (Sweden)
Tingting Wu
2016-01-01
Full Text Available We present an optimal 25-point finite-difference subgridding scheme for solving the 2D Helmholtz equation with perfectly matched layer (PML. This scheme is second order in accuracy and pointwise consistent with the equation. Subgrids are used to discretize the computational domain, including the interior domain and the PML. For the transitional node in the interior domain, the finite difference equation is formulated with ghost nodes, and its weight parameters are chosen by a refined choice strategy based on minimizing the numerical dispersion. Numerical experiments are given to illustrate that the newly proposed schemes can produce highly accurate seismic modeling results with enhanced efficiency.
Trajectory Optimization Based on Multi-Interval Mesh Refinement Method
Directory of Open Access Journals (Sweden)
Ningbo Li
2017-01-01
Full Text Available In order to improve the optimization accuracy and convergence rate for trajectory optimization of the air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method made the mesh endpoints converge to the practical nonsmooth points and decreased the overall collocation points to improve convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of engine and handover of midcourse and terminal guidance, and then the optimization model was built. The multi-interval mesh refinement Radau pseudospectral method with different collocation points in each mesh interval was used to solve the trajectory optimization model. Moreover, this method was compared with traditional h method. Simulation results show that this method can decrease the dimensionality of nonlinear programming (NLP problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.
International Nuclear Information System (INIS)
He, Yi; Scheraga, Harold A.; Liwo, Adam
2015-01-01
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original all-atom biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
Lütje, Susanne; Blex, Sebastian; Gomez, Benedikt; Schaarschmidt, Benedikt M; Umutlu, Lale; Forsting, Michael; Jentzen, Walter; Bockisch, Andreas; Poeppel, Thorsten D; Wetter, Axel
2016-01-01
The aim of this optimization study was to minimize the acquisition time of 68Ga-HBED-CC-PSMA positron emission tomography/magnetic resonance imaging (PET/MRI) in patients with local and metastatic prostate cancer (PCa) to obtain a sufficient image quality and quantification accuracy without any appreciable loss. Twenty patients with PCa were administered intravenously with the 68Ga-HBED-CC-PSMA ligand (mean activity 99 MBq/patient, range 76-148 MBq) and subsequently underwent PET/MRI at, on average, 168 min (range 77-320 min) after injection. PET and MR imaging data were acquired simultaneously. PET acquisition was performed in list mode and PET images were reconstructed at different time intervals (1, 2, 4, 6, 8, and 10 min). Data were analyzed regarding radiotracer uptake in tumors and muscle tissue and PET image quality. Tumor uptake was quantified in terms of the maximum and mean standardized uptake value (SUVmax, SUVmean) within a spherical volume of interest (VOI). Reference VOIs were drawn in the gluteus maximus muscle on the right side. PET image quality was evaluated by experienced nuclear physicians/radiologists using a five-point ordinal scale from 5-1 (excellent-insufficient). Lesion detectability linearly increased with increasing acquisition times, reaching its maximum at PET acquisition times of 4 min. At this image acquisition time, tumor lesions in 19/20 (95%) patients were detected. PET image quality showed a positive correlation with increasing acquisition time, reaching a plateau at 4-6 min image acquisition. Both SUVmax and SUVmean correlated inversely with acquisition time and reached a plateau at acquisition times after 4 min. In the applied image acquisition settings, the optimal acquisition time of 68Ga-PSMA-ligand PET/MRI in patients with local and metastatic PCa was identified to be 4 min per bed position. At this acquisition time, PET image quality and lesion detectability reach a maximum while SUVmax and SUVmean do not change
On using priced timed automata to achieve optimal scheduling
DEFF Research Database (Denmark)
Rasmussen, Jacob Illum; Larsen, Kim Guldstrand; Subramani, K.
2006-01-01
This contribution reports on the considerable effort made recently towards extending and applying well-established timed automata technology to optimal scheduling and planning problems. The effort of the authors in this direction has to a large extent been carried out as part of the European proj...... of so-called priced timed automata....
Real-time spatial optimization : based on the application in wood supply chain management
International Nuclear Information System (INIS)
Scholz, J.
2010-01-01
Real-time spatial optimization - a combination of Geographical Information Science and Technology and Operations Research - is capable of generating optimized solutions to given spatial problems in real time. The basic concepts needed to develop a real-time spatial optimization system are outlined in this thesis. Geographic Information Science delivers the foundations for acquiring, storing, manipulating, visualizing and analyzing spatial information. In order to develop a system that consists of several independent components, the concept of Service Oriented Architectures is applied. This facilitates communication between software systems utilizing standardized services that ensure interoperability. Thus, standards in the field of Geographic Information are indispensable for real-time spatial optimization. By exploiting the ability of mobile devices to determine their own position, paired with standardized services, Location Based Services are created. They are of interest for gathering real-time data from mobile devices that are of importance for the optimization process itself. To optimize a given spatial problem, the universe of discourse has to be modeled accordingly. For the problem addressed in this thesis - Wood Supply Chain management - graph theory is used. In addition, the problem of Wood Supply Chain management can be represented by a specific mathematical problem class, the Vehicle Routing Problem - specifically the Vehicle Routing Problem with Pickup and Delivery and Time Windows. To optimize this problem class, exact and approximate solution techniques exist. Exact algorithms provide optimal solutions and guarantee their optimality, whereas approximate techniques - approximation algorithms or heuristics - do not guarantee that a global optimum is found. Nevertheless, they are capable of handling large problem instances in reasonable time. For optimizing the Wood Supply Chain, Adaptive Large Neighborhood Search is selected as the appropriate optimization technique.
Optimizing capital and time expenditures for drilling service operations
Energy Technology Data Exchange (ETDEWEB)
Zazovskiy, F Ya; Soltysyak, T I
1980-01-01
The operational efficiency of drilling service operations management is examined. The structure of time expenditure is analyzed for repair operations according to equipment type employed by the Ivano-Frankovsk Drilling Management under the Ukrneft' enterprise during 1977. The results of this analysis are weighed against a series of service operations carried out at industrial enterprises and connected with technical disruptions. Some of the cases examined include service completion operations outside of the industrial units, where technical processes are disrupted only for the replacement of equipment which has outlived its usefulness and is no longer in series production. First of all, time expended on repair work can be reduced to zero during the drilling of shallow wells which do not require extensive drilling time. The actual savings, both in time and money, as far as repair work is concerned, hinge on the actual time factor for total oil depletion. An equation is provided for the optimal time expenditure necessary for repair work and equipment replacement. An actual example is given from the Dolinsk UBR (Drilling Management) under the Ukrneft' enterprise, where the time spent on actual service operations appears to be less than the optimal figure cited above. This is possible because of increased capital expenditures.
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; one machining process that can be carried out on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to choose machining parameters that minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model to minimize processing time and environmental impact in the CNC turning process, yielding optimal values of the decision variables cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
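A weighted-sum version of such a bi-objective machining model can be sketched with a simple grid search over cutting speed and feed rate; the time and impact formulas below are hypothetical stand-ins, not the paper's model or actual Eco-indicator 99 data:

```python
import numpy as np

# Illustrative cost models (invented): machining time per part falls with
# cutting speed v (m/min) and feed f (mm/rev); environmental impact
# (eco-indicator points) grows with both, tracking spindle power.
def machining_time(v, f, volume=5e3):
    return volume / (v * f * 100.0)             # hypothetical time model, min

def eco_impact(v, f):
    return 1e-3 * v ** 1.2 * f ** 0.8           # hypothetical impact model

v_grid = np.linspace(50, 300, 251)              # feasible cutting speeds
f_grid = np.linspace(0.05, 0.4, 141)            # feasible feed rates
V, F = np.meshgrid(v_grid, f_grid)
w = 0.5                                         # time-vs-impact trade-off
score = w * machining_time(V, F) + (1 - w) * eco_impact(V, F)
i, j = np.unravel_index(score.argmin(), score.shape)
v_opt, f_opt = V[i, j], F[i, j]
```

With these invented coefficients, the optimum runs the feed at its upper bound but keeps the cutting speed strictly inside the feasible range, illustrating how the two objectives pull the parameters in opposite directions.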
Optimization of Single Point Incremental Forming of Al5052-O Sheet
Energy Technology Data Exchange (ETDEWEB)
Kim, Chan Il; Xiao, Xiao; Do, Van Cuong; Kim, Young Suk [Kyungpook Nat’l Univ., Daegu (Korea, Republic of)
2017-03-15
Single point incremental forming (SPIF) is a die-less sheet metal forming technique for rapid prototyping and small-batch production. Critical parameters in the forming process include tool diameter, step depth, feed rate, and spindle speed. In this study, these parameters and the die shape corresponding to the varying wall angle conical frustum (VWACF) model were used to form 0.8 mm thick Al5052-O sheets. The Taguchi Design of Experiments (DOE) method and grey relational optimization were used to determine the optimum SPIF parameters. A response study was performed on formability, springback, and thickness reduction. The research shows that the optimum combination of parameters yielding the best SPIF performance is as follows: tool diameter 6 mm, spindle speed 60 rpm, step depth 0.3 mm, and feed rate 500 mm/min.
Real-time parameter optimization based on neural network for smart injection molding
Lee, H.; Liau, Y.; Ryu, K.
2018-03-01
The manufacturing industry has been facing several challenges, including sustainability and the performance and quality of production. Manufacturers attempt to enhance the competitiveness of companies by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing process level. The injection molding process has a short cycle time and high productivity, features which make it suitable for mass production. In addition, this process is used to produce precise parts in various industry fields such as automobiles, optics, and medical devices. The injection molding process involves a mixture of discrete and continuous variables, and in order to optimize quality, the variables generated in the process must be considered. Furthermore, optimal parameter setting is time-consuming work when predicting the optimum product quality, since the process parameters cannot be easily corrected during process execution. In this research, we propose a neural network based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, in contrast with the pre-production parameter optimization of typical studies.
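The surrogate idea, fit a model from a process parameter to a quality response and then choose the parameter with the best predicted response, can be sketched with a tiny one-hidden-layer network in NumPy; the defect function, its optimum near p = 0.6, and all network sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for process data: one normalized machining parameter p in
# [0, 1] and a defect score with a (hypothetical) optimum near p = 0.6.
X = np.linspace(0, 1, 50).reshape(-1, 1)
y = (X - 0.6) ** 2

# One-hidden-layer network trained by full-batch gradient descent.
W1, b1 = 0.5 * rng.standard_normal((1, 8)), np.zeros(8)
W2, b2 = 0.5 * rng.standard_normal((8, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

losses, lr = [], 0.05
for _ in range(5000):
    h, out = forward(X)
    err = out - y
    losses.append(float((err ** 2).mean()))
    gout = 2 * err / len(X)                     # dL/dout for MSE loss
    gW2, gb2 = h.T @ gout, gout.sum(0)
    gh = gout @ W2.T * (1 - h ** 2)             # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Use the fitted surrogate to pick the parameter with lowest predicted defect.
p_grid = np.linspace(0, 1, 201).reshape(-1, 1)
p_best = float(p_grid[forward(p_grid)[1].argmin()])
```

In the paper's setting the single parameter would be replaced by the mold, machine, and response data, but the train-then-search loop is the same.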
Directory of Open Access Journals (Sweden)
Maryam M Shanechi
Full Text Available Real-time brain-machine interfaces (BMIs) have focused on either estimating the continuous movement trajectory or the target intent. However, natural movement often incorporates both. Additionally, a BMI can be modeled as a feedback control system in which the subject modulates neural activity to move the prosthetic device towards a desired target while receiving real-time sensory feedback of the state of the movement. We develop a novel real-time BMI using an optimal feedback control design that jointly estimates the movement target and trajectory of monkeys in two stages. First, the target is decoded from neural spiking activity before movement initiation. Second, the trajectory is decoded by combining the decoded target with the peri-movement spiking activity using an optimal feedback control design. This design exploits a recursive Bayesian decoder that uses an optimal feedback control model of the sensorimotor system to take into account the intended target location and the sensory feedback in its trajectory estimation from spiking activity. The real-time BMI processes the spiking activity directly using point process modeling. We implement the BMI in experiments consisting of an instructed-delay center-out task in which monkeys are presented with a target location on the screen during a delay period and then have to move a cursor to it without touching the incorrect targets. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. The optimal feedback control design also results in trajectories that are smoother and have lower estimation error. The two-stage decoder also performs better than linear regression approaches in offline cross-validation analyses. Our results demonstrate the advantage of a BMI design that jointly estimates the target and trajectory of movement and more closely mimics the sensorimotor control system.
Optimization of ramp area aircraft push back time windows in the presence of uncertainty
Coupe, William Jeremy
It is well known that airport surface traffic congestion at major airports is responsible for increased taxi-out times, fuel burn and excess emissions and there is potential to mitigate these negative consequences through optimizing airport surface traffic operations. Due to a highly congested voice communication channel between pilots and air traffic controllers and a data communication channel that is used only for limited functions, one of the most viable near-term strategies for improvement of the surface traffic is issuing a push back advisory to each departing aircraft. This dissertation focuses on the optimization of a push back time window for each departing aircraft. The optimization takes into account both spatial and temporal uncertainties of ramp area aircraft trajectories. The uncertainties are described by a stochastic kinematic model of aircraft trajectories, which is used to infer distributions of combinations of push back times that lead to conflict among trajectories from different gates. The model is validated and the distributions are included in the push back time window optimization. Under the assumption of a fixed taxiway spot schedule, the computed push back time windows can be integrated with a higher level taxiway scheduler to optimize the flow of traffic from the gate to the departure runway queue. To enable real-time decision making the computational time of the push back time window optimization is critical and is analyzed throughout.
Accelerating ROP detector layout optimization
International Nuclear Information System (INIS)
Kastanya, D.; Fodor, B.
2012-01-01
The ADORE (Alternating Detector layout Optimization for REgional overpower protection system) algorithm for optimizing the regional overpower protection (ROP) system of CANDU® reactors has recently been developed. The simulated annealing (SA) stochastic optimization technique is utilized to produce a quasi-optimized detector layout for the ROP systems. Within each simulated annealing history, the objective function is calculated as a function of the trip set point (TSP) corresponding to the detector layout for that particular history. The evaluation of the TSP is done probabilistically using the ROVER-F code. Because thousands of candidate detector layouts are evaluated during each optimization execution, the overall optimization process is time consuming. Since the number of fuelling ripples controls the execution time of each ROVER-F evaluation, reducing the number of fuelling ripples used during the calculation of the TSP will reduce the overall optimization execution time. This approach has been investigated and the results are presented in this paper. The challenge is to construct a set of representative fuelling ripples which will significantly speed up the optimization process while guaranteeing that the resulting detector layout has similar quality to the ones produced when the complete set of fuelling ripples is employed. Results presented in this paper indicate that a speedup of up to around 40 times is attainable when this approach is utilized. (author)
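The annealing loop described above accepts occasional cost-increasing layouts to escape local minima. A minimal, generic sketch in Python; the layout encoding and the `overlap` objective are made-up stand-ins for illustration, not the ROVER-F trip-set-point evaluation:

```python
import math
import random

def simulated_annealing(objective, initial, neighbor, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing: accept worse moves with Boltzmann probability."""
    rng = random.Random(seed)
    current = initial
    f_cur = objective(current)
    best, f_best = current, f_cur
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        f_cand = objective(cand)
        # Always accept improvements; accept regressions with prob. exp(-delta/T).
        if f_cand <= f_cur or rng.random() < math.exp(-(f_cand - f_cur) / t):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
        t *= cooling  # geometric cooling schedule
    return best, f_best

# Toy stand-in for a layout evaluation: place 3 of 10 candidate positions
# minimizing pairwise "coverage overlap" (hypothetical objective).
def overlap(layout):
    return sum(1.0 / (1 + abs(a - b)) for i, a in enumerate(layout) for b in layout[i + 1:])

def swap_one(layout, rng):
    out = list(layout)
    out[rng.randrange(len(out))] = rng.randrange(10)
    return out

best, f = simulated_annealing(overlap, [0, 1, 2], swap_one)
```

The cheap surrogate objective plays the role the reduced ripple set plays in the paper: a faster evaluation inside a loop that runs thousands of times.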
The implication of missing the optimal-exercise time of an American option
Chockalingam, A.; Feng, H.
2015-01-01
The optimal-exercise policy of an American option dictates when the option should be exercised. In this paper, we consider the implications of missing the optimal exercise time of an American option. For the put option, this means holding the option until it is deeper in-the-money when the optimal
Directory of Open Access Journals (Sweden)
Huanhuan Hu
2016-03-01
Full Text Available Abstract Background We sought to establish the optimal waist circumference (WC) cut-off point for predicting diabetes mellitus (DM) and to compare the predictive ability of the metabolic syndrome (MetS) criteria of the Joint Interim Statement (JIS) and the Japanese Committee of the Criteria for MetS (JCCMS) for DM in Japanese. Methods Participants of the Japan Epidemiology Collaboration on Occupational Health Study, who were aged 20–69 years and free of DM at baseline (n = 54,980), were followed up for a maximum of 6 years. Time-dependent receiver operating characteristic analysis was used to determine the optimal cut-off points of WC for predicting DM. Time-dependent sensitivity, specificity, and positive and negative predictive values for the prediction of DM were compared between the JIS and JCCMS MetS criteria. Results During 234,926 person-years of follow-up, 3180 individuals developed DM. Receiver operating characteristic analysis suggested that the most suitable cut-off point of WC for predicting incident DM was 85 cm for men and 80 cm for women. MetS was associated with a 3–4 times increased hazard of developing DM in men and 7–9 times in women. Of the MetS criteria tested, the JIS criteria using our proposed WC cut-off points (85 cm for men and 80 cm for women) had the highest sensitivity (54.5 % for men and 43.5 % for women) for predicting DM. The sensitivity and specificity of the JCCMS MetS criteria were ~37.7 and 98.9 %, respectively. Conclusion Data from the present large cohort of workers suggest that WC cut-offs of 85 cm for men and 80 cm for women may be appropriate for predicting DM in Japanese. The JIS criteria can detect more people who later develop DM than do the JCCMS criteria.
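At a single time point, the cut-off search above reduces to scanning candidate thresholds and scoring each; a common score is Youden's J (sensitivity + specificity - 1). A toy sketch with illustrative data, not the cohort's:

```python
def youden_optimal_cutoff(values, labels):
    """Pick the cut-off maximizing sensitivity + specificity - 1 (Youden's J).
    values: predictor (e.g. waist circumference); labels: 1 if the outcome
    (e.g. incident diabetes) occurred, else 0."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        j = tp / pos + tn / neg - 1.0  # sensitivity + specificity - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Illustrative data only: cases cluster at larger waist circumference (cm).
wc = [70, 72, 75, 78, 80, 83, 85, 88, 90, 95]
dm = [ 0,  0,  0,  0,  0,  1,  1,  0,  1,  1]
cut, j = youden_optimal_cutoff(wc, dm)
```

The study's time-dependent ROC analysis generalizes this idea to censored follow-up data rather than fixed labels.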
Craig, Darren G; Kitto, Laura; Zafar, Sara; Reid, Thomas W D J; Martin, Kirsty G; Davidson, Janice S; Hayes, Peter C; Simpson, Kenneth J
2014-09-01
The innate immune system is profoundly dysregulated in paracetamol (acetaminophen)-induced liver injury. The neutrophil-lymphocyte ratio (NLR) is a simple bedside index with prognostic value in a number of inflammatory conditions. To evaluate the prognostic accuracy of the NLR in patients with significant liver injury following single time-point and staggered paracetamol overdoses, we performed a time-course analysis of 100 single time-point and 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Timed laboratory samples were correlated with time elapsed after overdose or admission, respectively, and the NLR was calculated. A total of 49/100 single time-point patients developed hepatic encephalopathy (HE). Median NLRs were higher at both 72 (P=0.0047) and 96 h after overdose (P=0.0041) in single time-point patients who died or were transplanted. Maximum NLR values by 96 h were associated with increasing HE grade (P=0.0005). An NLR of more than 16.7 during the first 96 h following overdose was independently associated with the development of HE [odds ratio 5.65 (95% confidence interval 1.67-19.13), P=0.005]. Maximum NLR values by 96 h were also strongly associated with the requirement for intracranial pressure monitoring in paracetamol overdoses. Future studies should assess the value of incorporating the NLR into existing prognostic and triage indices of single time-point paracetamol overdose.
Optimal time-domain technique for pulse width modulation in power electronics
Directory of Open Access Journals (Sweden)
I. Mayergoyz
2018-05-01
Full Text Available Optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimal criteria are discussed and illustrated by numerical examples.
Mathematical programming methods for large-scale topology optimization problems
DEFF Research Database (Denmark)
Rojas Labanda, Susana
for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs......, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have hardly been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...
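A log-barrier iteration is the simplest member of the interior point family mentioned above. A one-variable sketch of the mechanics only; the compliance problem itself is vastly larger and is not what this solves:

```python
import math

def barrier_minimize(mu0=1.0, shrink=0.2, outer=10, newton_steps=30):
    """Log-barrier interior-point sketch for min (x-2)^2 subject to x <= 1.
    The barrier -mu*log(1-x) keeps iterates strictly feasible; driving
    mu -> 0 pushes the iterate to the constrained optimum x* = 1."""
    x = 0.0  # strictly feasible start
    mu = mu0
    for _ in range(outer):
        for _ in range(newton_steps):
            g = 2 * (x - 2) + mu / (1 - x)   # gradient of the barrier objective
            h = 2 + mu / (1 - x) ** 2        # Hessian (always positive here)
            step = g / h
            # damp Newton steps that would leave the strict interior x < 1
            while x - step >= 1:
                step *= 0.5
            x -= step
        mu *= shrink  # shrink the barrier parameter between outer iterations
    return x

x_star = barrier_minimize()
```

Production interior point codes (as in TopIP) solve a primal-dual Newton system instead of this primal barrier, but the centering-then-shrink structure is the same.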
Dual-phase helical CT using bolus triggering technique: optimization of transition time
International Nuclear Information System (INIS)
Choi, Young Ho; Kim, Tae Kyoung; Park, Byung Kwan; Koh, Young Hwan; Han, Joon Koo; Choi, Byung Ihn
1999-01-01
To optimize the transition time between the triggering point in monitoring scanning and the initiation of diagnostic hepatic arterial phase (HAP) scanning in hepatic spiral CT using a bolus triggering technique. One hundred consecutive patients with focal hepatic lesions were included in this study. Patients were randomized into two groups, with transition times of 7 and 11 seconds used in groups 1 and 2, respectively. In all patients, bolus-triggered HAP spiral CT was obtained using a semi-automatic bolus tracking program after the injection of 120 mL of non-ionic contrast media at a rate of 3 mL/sec. When aortic enhancement reached 90 HU, diagnostic HAP scanning began after the given transition time. From the images of groups 1 and 2, the degree of parenchymal enhancement of the liver and the tumor-to-liver attenuation difference were measured. Also, for qualitative analysis, conspicuity of the hepatic artery and of hypervascular tumors was scored and analyzed. Hepatic parenchymal enhancement on HAP was 12.07 ± 6.44 HU in group 1 and 16.03 ± 5.80 HU in group 2 (p < .05). In the evaluation of conspicuity of the hepatic artery, there was no statistically significant difference between the two groups (p > .05). The conspicuity of hypervascular tumors in group 2 was higher than in group 1 (p < .05). HAP spiral CT using a bolus triggering technique with a transition time of 11 seconds provides better HAP images than when the transition time is 7 seconds.
Chaos Time Series Prediction Based on Membrane Optimization Algorithms
Directory of Open Access Journals (Sweden)
Meng Li
2015-01-01
Full Text Available This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the phase space reconstruction parameters (τ,m) and the least squares support vector machine (LS-SVM) parameters (γ,σ) by using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management, which can help decision makers to adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
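Phase space reconstruction with parameters (τ,m) is the first stage of the model above. A minimal delay-embedding sketch; the membrane-computing search over (τ,m) and the LS-SVM fit are not shown:

```python
def delay_embed(series, m, tau):
    """Phase-space reconstruction by time-delay embedding (Takens-style):
    each reconstructed state is [x_t, x_{t-tau}, ..., x_{t-(m-1)tau}]."""
    n = len(series)
    start = (m - 1) * tau  # first index with a full embedding window
    return [[series[t - j * tau] for j in range(m)] for t in range(start, n)]

# Toy series; in the paper's setting (tau, m) would themselves be tuned by
# the membrane-computing optimizer before fitting the LS-SVM predictor on
# (embedded vector -> next value) pairs.
x = list(range(10))
vectors = delay_embed(x, m=3, tau=2)
```

Each embedded vector then serves as one training input for the one-step-ahead predictor.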
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-01-01
Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance
Optimizing Time Windows For Managing Export Container Arrivals At Chinese Container Terminals
DEFF Research Database (Denmark)
Chen, Gang; Yang, Zhongzhen
2010-01-01
window management programme that is widely used in Chinese terminals to facilitate terminal and truck delivery operations. Firstly, the arrangement of time windows is assumed to follow the principle of minimizing transport costs. A cost function is defined that includes the costs of truck and driver...... waiting time, fuel consumption associated with truck idling, storage time of the containerized cargos and yard fee. Secondly, to minimize the total cost, a heuristic is developed based on a genetic algorithm to find a near optimal time window arrangement. The optimized solution involves the position...
Two-craft Coulomb formation study about circular orbits and libration points
Inampudi, Ravi Kishore
This dissertation investigates the dynamics and control of a two-craft Coulomb formation in circular orbits and at libration points; it addresses relative equilibria, stability and optimal reconfigurations of such formations. The relative equilibria of a two-craft tether formation connected by line-of-sight elastic forces moving in circular orbits and at libration points are investigated. In circular Earth orbits and Earth-Moon libration points, the radial, along-track, and orbit normal great circle equilibria conditions are found. An example of modeling the tether force using Coulomb force is discussed. Furthermore, the non-great-circle equilibria conditions for a two-spacecraft tether structure in circular Earth orbit and at collinear libration points are developed. Then the linearized dynamics and stability analysis of a 2-craft Coulomb formation at Earth-Moon libration points are studied. For orbit-radial equilibrium, Coulomb forces control the relative distance between the two satellites. The gravity gradient torques on the formation due to the two planets help stabilize the formation. Similar analysis is performed for along-track and orbit-normal relative equilibrium configurations. Where necessary, the craft use a hybrid thrusting-electrostatic actuation system. The two-craft dynamics at the libration points provide a general framework with circular Earth orbit dynamics forming a special case. In the presence of differential solar drag perturbations, a Lyapunov feedback controller is designed to stabilize a radial equilibrium, two-craft Coulomb formation at collinear libration points. The second part of the thesis investigates optimal reconfigurations of two-craft Coulomb formations in circular Earth orbits by applying nonlinear optimal control techniques. The objective of these reconfigurations is to maneuver the two-craft formation between two charged equilibria configurations. The reconfiguration of spacecraft is posed as an optimization problem using the
Energy Technology Data Exchange (ETDEWEB)
Dall' Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.
2015-07-01
This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.
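The dual-update structure can be illustrated on a scalar problem: the dual iterate is driven by constraint violation, and the primal minimizer of the Lagrangian plays the role of the reference signal. A toy sketch with a diminishing stepsize, not the paper's SDP setting:

```python
def dual_subgradient(iters=2000):
    """Dual subgradient sketch for min x^2 subject to x >= 1.
    Inner step: minimize the Lagrangian x^2 + lam*(1 - x) over x.
    Outer step: move the multiplier along the constraint violation (1 - x)
    with a diminishing stepsize 1/k, projecting onto lam >= 0."""
    lam = 0.0
    x = 0.0
    for k in range(1, iters + 1):
        x = lam / 2.0                              # argmin_x x^2 + lam*(1 - x)
        lam = max(0.0, lam + (1.0 - x) / k)        # projected dual update
    return x, lam

x, lam = dual_subgradient()
```

The iterates approach the optimum x* = 1 with multiplier lam* = 2; in the paper the same loop runs while the physical system tracks the evolving reference.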
Optimal Time to Invest Energy Storage System under Uncertainty Conditions
Directory of Open Access Journals (Sweden)
Yongma Moon
2014-04-01
Full Text Available This paper proposes a model to determine the optimal investment time for energy storage systems (ESSs in a price arbitrage trade application under conditions of uncertainty over future profits. The adoption of ESSs can generate profits from price arbitrage trade, which are uncertain because the future marginal prices of electricity will change depending on supply and demand. In addition, since the investment is optional, an investor can delay adopting an ESS until it becomes profitable, and can decide the optimal time. Thus, when we evaluate this investment, we need to incorporate the investor’s option which is not captured by traditional evaluation methods. In order to incorporate these aspects, we applied real option theory to our proposed model, which provides an optimal investment threshold. Our results concerning the optimal time to invest show that if future profits that are expected to be obtained from arbitrage trade become more uncertain, an investor needs to wait longer to invest. Also, improvement in efficiency of ESSs can reduce the uncertainty of arbitrage profit and, consequently, the reduced uncertainty enables earlier ESS investment, even for the same power capacity. Besides, when a higher rate of profits is expected and ESS costs are higher, an investor needs to wait longer. Also, by comparing a widely used net present value model to our real option model, we show that the net present value method underestimates the value for ESS investment and misleads the investor to make an investment earlier.
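The wait-longer-under-uncertainty conclusion above mirrors the standard real-options trigger, which has a closed form in the McDonald-Siegel setting. A sketch with illustrative parameters, not values from the paper:

```python
import math

def investment_threshold(r, delta, sigma, cost):
    """Real-options investment trigger (McDonald-Siegel form).
    beta is the positive root of 0.5*sigma^2*b*(b-1) + (r - delta)*b - r = 0;
    invest once project value V reaches beta/(beta-1) * cost, which exceeds
    the naive NPV trigger V = cost."""
    a = 0.5 * sigma ** 2
    b = (r - delta) - 0.5 * sigma ** 2
    c = -r
    beta = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return beta / (beta - 1.0) * cost

# Higher volatility (more uncertain arbitrage profits) raises the trigger,
# i.e. the investor waits longer -- the qualitative result reported above.
low = investment_threshold(r=0.05, delta=0.03, sigma=0.1, cost=100.0)
high = investment_threshold(r=0.05, delta=0.03, sigma=0.3, cost=100.0)
```

Both triggers exceed the cost of 100, which is why a plain NPV rule (invest as soon as V > cost) recommends investing too early.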
Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs
Directory of Open Access Journals (Sweden)
Gene Frantz
2007-01-01
Full Text Available Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce an advanced code optimization and an implementation by DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.
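The float-to-fixed conversion step can be illustrated with Q15, the 16-bit format common on fixed point DSPs. A minimal sketch; saturation and shift choices here follow one common convention, not the paper's specific code generator:

```python
def float_to_q15(x):
    """Quantize a float in [-1, 1) to Q15: 16-bit signed, 15 fractional bits."""
    q = int(round(x * 32768.0))
    return max(-32768, min(32767, q))  # saturate instead of wrapping

def q15_mul(a, b):
    """Q15 multiply: the 32-bit product carries 30 fractional bits,
    so shift right by 15 to return to Q15."""
    return (a * b) >> 15

a = float_to_q15(0.5)    # 16384
b = float_to_q15(0.25)   # 8192
prod = q15_mul(a, b)     # 4096, i.e. 0.125 in Q15
```

Linear algebra kernels add the further complication of dynamic range: intermediate sums can overflow Q15, which is why block scaling and wider accumulators dominate the conversion effort described above.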
Optimizing the search for transiting planets in long time series
Ofir, Aviv
2014-01-01
Context. Transit surveys, both ground- and space-based, have already accumulated a large number of light curves that span several years. Aims: The search for transiting planets in these long time series is computationally intensive. We wish to optimize the search for both detection and computational efficiencies. Methods: We assume that the searched systems can be described well by Keplerian orbits. We then propagate the effects of different system parameters to the detection parameters. Results: We show that the frequency information content of the light curve is primarily determined by the duty cycle of the transit signal, and thus the optimal frequency sampling is found to be cubic and not linear. Further optimization is achieved by considering duty-cycle dependent binning of the phased light curve. By using the (standard) BLS, one is either fairly insensitive to long-period planets or less sensitive to short-period planets and computationally slower by a significant factor of ~330 (for a 3 yr long dataset). We also show how the physical system parameters, such as the host star's size and mass, directly affect transit detection. This understanding can then be used to optimize the search for every star individually. Conclusions: By considering Keplerian dynamics explicitly rather than implicitly one can optimally search the BLS parameter space. The presented Optimal BLS enhances the detectability of both very short and very long period planets, while allowing such searches to be done with much reduced resources and time. The Matlab/Octave source code for Optimal BLS is made available. The MATLAB code is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/561/A138
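One schematic reading of the "cubic, not linear" sampling result is a grid uniform in the cube root of frequency, so that spacing grows with frequency and long-period (low-frequency) planets get the densest coverage. The paper derives the exact spacing from the transit duty cycle; this is only an illustrative construction:

```python
def cubic_frequency_grid(f_min, f_max, n):
    """Frequency grid uniform in f**(1/3): f is a cubic function of the index,
    dense at low frequencies (long periods) and coarser at high frequencies,
    rather than linearly spaced as in a naive BLS scan."""
    lo = f_min ** (1.0 / 3.0)
    hi = f_max ** (1.0 / 3.0)
    step = (hi - lo) / (n - 1)
    return [(lo + i * step) ** 3 for i in range(n)]

# Hypothetical scan from a 1000-day period down to a 1-day period.
grid = cubic_frequency_grid(1.0 / 1000.0, 1.0, 512)
```

For a fixed number of trial frequencies, such a grid spends its resolution where transit peaks are narrowest, which is the source of the claimed speed/sensitivity gain.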
Online gaming for learning optimal team strategies in real time
Hudas, Gregory; Lewis, F. L.; Vamvoudakis, K. G.
2010-04-01
This paper first presents an overall view for dynamical decision-making in teams, both cooperative and competitive. Strategies for team decision problems, including optimal control, zero-sum 2-player games (H-infinity control) and so on are normally solved for off-line by solving associated matrix equations such as the Riccati equation. However, using that approach, players cannot change their objectives online in real time without calling for a completely new off-line solution for the new strategies. Therefore, in this paper we give a method for learning optimal team strategies online in real time as team dynamical play unfolds. In the linear quadratic regulator case, for instance, the method learns the Riccati equation solution online without ever solving the Riccati equation. This allows for truly dynamical team decisions where objective functions can change in real time and the system dynamics can be time-varying.
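The idea of obtaining the Riccati solution by iteration rather than solving the equation in closed form can be shown on a scalar LQR. This sketch iterates the model-based recursion for brevity; the paper's point is that an equivalent iteration can be driven online by measured trajectory data as play unfolds:

```python
def riccati_by_iteration(a, b, q, r, iters=200):
    """Scalar discrete-time LQR: repeated value-iteration updates converge to
    the algebraic Riccati solution P without solving the equation directly.
    Cost: sum of q*x^2 + r*u^2 for dynamics x' = a*x + b*u."""
    p = 0.0
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    gain = a * b * p / (r + b * b * p)  # optimal feedback u = -gain * x
    return p, gain

p, k = riccati_by_iteration(a=1.1, b=1.0, q=1.0, r=1.0)
```

Because the update only ever evaluates the right-hand side, a change of objective (new q, r) simply redirects the same iteration, matching the paper's motivation for online strategy changes.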
International Nuclear Information System (INIS)
Sutrisno; Widowati; Solikhin
2016-01-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control problem and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, by using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from that supplier so that the inventory level will be located at some point as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier, and the inventory level tracked the reference point well. (paper)
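A tiny finite-horizon version of such a supplier-selection DP can be sketched as follows; the suppliers, prices, demand distribution, and cost weights are made-up illustration values, not the paper's model:

```python
import itertools

# Toy stochastic DP: each period choose a supplier and an order quantity so
# the inventory tracks a reference level at minimal expected cost.
SUPPLIERS = {"A": [(0.5, 4.0), (0.5, 6.0)],   # (probability, unit price)
             "B": [(0.5, 3.0), (0.5, 8.0)]}
DEMAND = [(0.5, 0), (0.5, 1)]                 # (probability, units demanded)
REF, HOLD_W, LEVELS, ORDERS, T = 2, 5.0, range(0, 5), range(0, 3), 3

def solve():
    """Backward induction over periods; returns period-0 values and policy."""
    value = {s: 0.0 for s in LEVELS}          # terminal cost = 0
    policy = {}
    for t in reversed(range(T)):
        new_value = {}
        for s in LEVELS:
            best = None
            for sup, qty in itertools.product(SUPPLIERS, ORDERS):
                exp_cost = 0.0
                for (pp, price), (pd, d) in itertools.product(SUPPLIERS[sup], DEMAND):
                    nxt = min(max(s + qty - d, 0), max(LEVELS))
                    # purchase cost plus penalty for deviating from the reference
                    stage = price * qty + HOLD_W * abs(nxt - REF)
                    exp_cost += pp * pd * (stage + value[nxt])
                if best is None or exp_cost < best[0]:
                    best = (exp_cost, sup, qty)
            new_value[s] = best[0]
            policy[(t, s)] = (best[1], best[2])
        value = new_value
    return value, policy

value, policy = solve()
```

Starting empty, the policy orders up to the reference from the supplier with the lower expected price, exactly the decide-supplier-then-volume structure described above.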
Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu
2017-05-24
In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus with estimating unknown frequencies and rejecting the bounded disturbance in the semi-global sense. Based on convex optimization analysis and adaptive internal model approach, the exact optimization solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-02-01
Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(nlogn)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
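The feasibility test behind such algorithms is a greedy sweep: one step can cover a run of consecutive points iff their weighted tolerance intervals intersect. The sketch below finds the optimal error by plain binary search over the error value; the paper's contribution is replacing that numeric search with Cole's parametric search to obtain a deterministic O(n log n) bound:

```python
def fits_with_k_steps(pts, eps, k):
    """Greedy feasibility test: can some k-step function stay within weighted
    vertical distance eps of every point? pts sorted by x as (x, y, w)."""
    steps = 0
    i = 0
    n = len(pts)
    while i < n:
        lo, hi = -float("inf"), float("inf")
        while i < n:
            _, y, w = pts[i]
            nlo, nhi = max(lo, y - eps / w), min(hi, y + eps / w)
            if nlo > nhi:
                break  # the current step can no longer cover this point
            lo, hi = nlo, nhi
            i += 1
        steps += 1
        if steps > k:
            return False
    return True

def fit_step_function(pts, k, tol=1e-9):
    """Binary search on the optimal max weighted distance (illustrative only;
    the deterministic O(n log n) algorithm avoids this numeric search)."""
    pts = sorted(pts)
    lo, hi = 0.0, 2 * max(w * abs(y) for _, y, w in pts) + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fits_with_k_steps(pts, mid, k):
            hi = mid
        else:
            lo = mid
    return hi

pts = [(0, 0.0, 1.0), (1, 1.0, 1.0), (2, 5.0, 1.0), (3, 6.0, 1.0)]
err = fit_step_function(pts, k=2)
```

With two steps the points split into {0, 1} and {5, 6}, so the optimal max distance is 0.5.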
Baisden, W. T.; Canessa, S.
2013-01-01
In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
A one-layer recurrent neural network for constrained nonconvex optimization.
Li, Guocheng; Yan, Zheng; Wang, Jun
2015-01-01
In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any neuron state of the proposed neural network is convergent to its equilibrium point set, which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performance of the proposed neural network.
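The exact-penalty idea can be sketched with a scalar subgradient descent: for a penalty parameter larger than the constraint's multiplier, minimizing the nonsmooth penalized objective lands on the constrained optimum. A toy analogue of the network's descent dynamics, not the paper's model:

```python
def exact_penalty_descent(rho=10.0, iters=4000):
    """Exact (nonsmooth) penalty sketch: min (x-2)^2 subject to x <= 1 becomes
    the unconstrained min of (x-2)^2 + rho*max(0, x-1). For rho larger than
    the constraint multiplier (here 2), the penalized minimizer coincides
    with the constrained one, x* = 1."""
    x = 5.0  # deliberately infeasible start
    for k in range(1, iters + 1):
        grad = 2.0 * (x - 2.0)
        if x > 1.0:
            grad += rho          # subgradient of rho*max(0, x - 1)
        x -= grad / k            # diminishing stepsize
    return x

x = exact_penalty_descent()
```

The large penalty slope pulls infeasible iterates back in finite time, mirroring the finite-time feasibility result stated above.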
Noise and time delay induce critical point in a bistable system
Zhang, Jianqiang; Nie, Linru; Yu, Lilong; Zhang, Xinyu
2014-07-01
We study the relaxation time Tc of a time-delayed bistable system driven by two cross-correlated Gaussian white noises, one multiplicative and the other additive. By means of numerical calculations, the results indicate that: (i) The combination of noise and time delay can induce two critical points in the relaxation time at certain noise cross-correlation strengths λ, under the condition that the multiplicative intensity D equals the additive noise intensity α. (ii) For each fixed D or α, there are two symmetrical critical points which lie in the regions of positive and negative correlation, respectively. Namely, as λ equals the critical value λc, Tc is independent of the delay time and the plot of Tc versus τ is a horizontal line, but as |λ|>|λc| (or |λ|<|λc|), Tc increases (or decreases) with increasing delay time. (iii) In the presence of D = α, the change of λc with D forms two symmetrical curves about the axis λc = 0, and the critical value λc is close to zero for smaller D, approaching +1 or -1 for greater D.
Time Optimized Algorithm for Web Document Presentation Adaptation
DEFF Research Database (Denmark)
Pan, Rong; Dolog, Peter
2010-01-01
Currently information on the web is accessed through different devices. Each device has its own properties such as resolution, size, and capabilities to display information in different format and so on. This calls for adaptation of information presentation for such platforms. This paper proposes...... content-optimized and time-optimized algorithms for information presentation adaptation for different devices based on its hierarchical model. The model is formalized in order to experiment with different algorithms......
Directory of Open Access Journals (Sweden)
Doo Ho Lee
Full Text Available This work studies the optimal pricing strategy in a discrete-time Geo/Geo/1 queuing system under the sojourn time-dependent reward. We consider two types of pricing schemes. The first one is called the ex-post payment scheme where the server charges a price that is proportional to the time a customer spends in the system, and the second one is called ex-ante payment scheme where the server charges a flat price for all services. In each pricing scheme, a departing customer receives the reward that is inversely proportional to his/her sojourn time. The server should make the optimal pricing decisions in order to maximize its expected profits per time unit in each pricing scheme. This work also investigates customer's equilibrium joining or balking behavior under server's optimal pricing strategy. Numerical experiments are also conducted to validate our analysis. Keywords: Optimal pricing, Equilibrium behavior, Geo/Geo/1 queue, Sojourn time-dependent reward
Joint optimization of LORA and spares stocks considering corrective maintenance time
Institute of Scientific and Technical Information of China (English)
Linhan Guo; Jiujiu Fan; Meilin Wen; Rui Kang
2015-01-01
Level of repair analysis (LORA) is an important maintenance decision method for establishing systems of operation and maintenance in the equipment development period. Currently, research on the level of repair focuses on economic analysis models which are used to optimize costs, and rarely considers the maintenance time required by the implementation of the maintenance program. In fact, for systems requiring high mission success, the maintenance time is an important factor which has a great influence on the availability of equipment systems. Considering the relationship between the maintenance time and the spares stock level, it is obvious that there are contradictions between the maintenance time and the cost. In order to balance these two factors, it is necessary to build an optimization LORA model. To this end, the maintenance time representing the performance characteristic is introduced, and on the basis of spares stocks, which are traditionally regarded as a decision variable, a decision variable of repair level is added, and a multi-echelon multi-indenture (MEMI) optimization LORA model is built which takes the best cost-effectiveness ratio as the criterion, the expected number of backorders (EBO) as the objective function, and the cost as the constraint. Besides, the paper designs a multi-variable convex programming algorithm for the optimization model, and provides solutions for the non-convex objective function and methods for improving the efficiency of the algorithm. The method provided in this paper is proved to be credible and effective by the numerical example and the simulation result.
Gill, Sharlene; Sargent, Daniel
2006-06-01
The intent of adjuvant therapy is to eradicate micro-metastatic residual disease following curative resection with the goal of preventing or delaying recurrence. The time-honored standard for demonstrating efficacy of new adjuvant therapies is an improvement in overall survival (OS). This typically requires phase III trials of large sample size with lengthy follow-up. With the intent of reducing the cost and time of completing such trials, there is considerable interest in developing alternative or surrogate end points. A surrogate end point may be employed as a substitute to directly assess the effects of an intervention on an already accepted clinical end point such as mortality. When used judiciously, surrogate end points can accelerate the evaluation of new therapies, resulting in the more timely dissemination of effective therapies to patients. The current review provides a perspective on the suitability and validity of disease-free survival (DFS) as an alternative end point for OS. Criteria for establishing surrogacy and the advantages and limitations associated with the use of DFS as a primary end point in adjuvant clinical trials and as the basis for approval of new adjuvant therapies are discussed.
Optimal moving grids for time-dependent partial differential equations
Wathen, A. J.
1992-01-01
Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of PDE solutions in the least-squares norm are reported.
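One widely used criterion of this family, the equidistribution principle, is easy to sketch: place the nodes so each cell carries an equal share of a monitor function such as arc length. The code below is an illustrative stand-in (it is not the least-squares-optimal grid computed in the paper), applied to a steep Burgers-like front:

```python
import math

def equidistribute(u, n, a=0.0, b=1.0, samples=2000):
    """Place n+1 grid points so the arc-length monitor
    M(x) = sqrt(1 + u'(x)^2) is equidistributed over each cell."""
    h = (b - a) / samples
    xs = [a + i * h for i in range(samples + 1)]
    def du(x):  # central finite difference, clamped at the ends
        return (u(min(x + h, b)) - u(max(x - h, a))) / (2 * h)
    M = [math.sqrt(1.0 + du(x) ** 2) for x in xs]
    C = [0.0]                       # cumulative trapezoid integral of M
    for i in range(samples):
        C.append(C[-1] + 0.5 * (M[i] + M[i + 1]) * h)
    total = C[-1]
    grid, j, target = [a], 1, C[-1] / n
    for i in range(1, samples + 1):  # invert C at the equal-share levels
        while j < n and C[i] >= j * target:
            frac = (j * target - C[i - 1]) / (C[i] - C[i - 1])
            grid.append(xs[i - 1] + frac * h)
            j += 1
    grid.append(b)
    return grid

# steep front at x = 0.5: nodes should cluster there
grid = equidistribute(lambda x: math.tanh(50 * (x - 0.5)), 20)
```

The resulting grid is strongly refined near the front, exactly the behavior a moving-grid method seeks to maintain as the front travels.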
DEFF Research Database (Denmark)
Lacevic, N.; Starr, F. W.; Schrøder, Thomas
2003-01-01
Relaxation in supercooled liquids above their glass transition and below the onset temperature of "slow" dynamics involves the correlated motion of neighboring particles. This correlated motion results in the appearance of spatially heterogeneous dynamics or "dynamical heterogeneity." Traditional two-point time-dependent density correlation functions, while providing information about the transient "caging" of particles on cooling, are unable to provide sufficiently detailed information about correlated motion and dynamical heterogeneity. Here, we study a four-point, time-dependent density correlation function g4(r,t) and corresponding "structure factor" S4(q,t) which measure the spatial correlations between the local liquid density at two points in space, each at two different times, and so are sensitive to dynamical heterogeneity. We study g4(r,t) and S4(q,t) via molecular dynamics...
Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging
Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.
2010-04-01
The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the HyperspecTM particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.
3D Pattern Synthesis of Time-Modulated Conformal Arrays with a Multiobjective Optimization Approach
Directory of Open Access Journals (Sweden)
Wentao Li
2014-01-01
This paper addresses the synthesis of the three-dimensional (3D) radiation patterns of time-modulated conformal arrays. Due to the nature of periodic time modulation, harmonic radiation patterns are generated at multiples of the modulation frequency in time-modulated arrays. Thus, the optimization goal of the time-modulated conformal array includes the sidelobe level at the operating frequency and the sideband levels (SBLs) at the harmonic frequencies, and the design can be regarded as a multiobjective problem. Multiobjective particle swarm optimization (MOPSO) is applied to optimize the switch-on instants and pulse durations of the time-modulated conformal array. To significantly reduce the number of optimization variables, the modified Bernstein polynomial is employed in the synthesis process. Furthermore, a dual-polarized patch antenna is designed as the radiator to achieve a low cross-polarization level during beam scanning. A 12 × 13 (156-element) conical conformal microstrip array is simulated to demonstrate the proposed synthesis mechanism, and the good results reveal the promising ability of the proposed algorithm in solving the synthesis problem of time-modulated conformal arrays.
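The origin of the sidebands is easy to reproduce. For a periodic on-off waveform with normalized switch-on instant t_on and pulse duration tau, the h-th Fourier coefficient is tau·sinc(h·tau)·exp(-jπh(2·t_on + tau)), so tau controls both the static weight (h = 0) and the sideband excitation. The sketch below uses a uniform linear array for simplicity (the paper's array is conical conformal, and all numbers here are illustrative):

```python
import cmath, math

def harmonic_coeff(tau, h, t_on=0.0):
    """Fourier coefficient of a unit-period on-off switching waveform
    with switch-on instant t_on and pulse duration tau (both in [0, 1])."""
    if h == 0:
        return complex(tau)
    x = math.pi * h * tau
    return tau * (math.sin(x) / x) * cmath.exp(-1j * math.pi * h * (2 * t_on + tau))

def array_factor(taus, t_ons, h, theta, d=0.5):
    """Pattern of an N-element uniform linear array at harmonic h
    (element spacing d in wavelengths, uniform static excitation)."""
    psi = 2 * math.pi * d * math.cos(theta)
    return sum(harmonic_coeff(tau, h, t0) * cmath.exp(1j * n * psi)
               for n, (tau, t0) in enumerate(zip(taus, t_ons)))
```

With tau = 1 (no modulation) every coefficient with h ≠ 0 vanishes, i.e. all sidebands disappear; shrinking tau tapers the fundamental weight while exciting the harmonics, which is precisely the tradeoff MOPSO has to balance.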
Directory of Open Access Journals (Sweden)
Trine Krogh-Madsen
2017-12-01
In silico cardiac myocyte models present powerful tools for drug safety testing and for predicting phenotypical consequences of ion channel mutations, but their accuracy is sometimes limited. For example, several models describing human ventricular electrophysiology perform poorly when simulating effects of long QT mutations. Model optimization represents one way of obtaining models with stronger predictive power. Using a recent human ventricular myocyte model, we demonstrate that model optimization to clinical long QT data, in conjunction with physiologically-based bounds on intracellular calcium and sodium concentrations, better constrains model parameters. To determine if the model optimized to congenital long QT data better predicts risk of drug-induced long QT arrhythmogenesis, in particular Torsades de Pointes risk, we tested the optimized model against a database of known arrhythmogenic and non-arrhythmogenic ion channel blockers. When doing so, the optimized model provided an improved risk assessment. In particular, we demonstrate an elimination of false-positive outcomes generated by the baseline model, in which simulations of non-torsadogenic drugs, in particular verapamil, predict action potential prolongation. Our results underscore the importance of currents beyond those directly impacted by a drug block in determining torsadogenic risk. Our study also highlights the need for rich data in cardiac myocyte model optimization and substantiates such optimization as a method to generate models with higher accuracy of predictions of drug-induced cardiotoxicity.
Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis
Directory of Open Access Journals (Sweden)
Yuan Gao
2014-01-01
By simplifying the tolerance problem and treating faulty voltages on different test points as independent variables, the integer-coded table technique has been proposed to simplify the test point selection process. However, simplifying the tolerance problem may induce a wrong solution, while the independence assumption will result in an overly conservative result. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated by using the ambiguity sets and the faulty voltage distribution, determined by component tolerance. Second, the selected optimal test point is used to expand the current graph node by using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; therefore, it is a good solution to minimize the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
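The entropy criterion can be sketched on a toy integer-coded fault dictionary (tolerance handling and the paper's graph-node bookkeeping are omitted; the dictionary below is invented for illustration):

```python
import math
from collections import Counter

def entropy_bits(partition):
    """Shannon entropy (bits) of the fault partition induced by one test
    point; partition maps each fault to the voltage-band code observed."""
    counts = Counter(partition.values())
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_test_points(dictionary, faults):
    """Greedy selection: repeatedly pick the test point whose reading
    best splits the still-ambiguous fault sets (maximum entropy)."""
    remaining = list(dictionary)      # unused test points
    groups = [set(faults)]            # current ambiguity groups
    chosen = []
    while remaining and any(len(g) > 1 for g in groups):
        def gain(tp):
            return sum(entropy_bits({f: dictionary[tp][f] for f in g})
                       for g in groups if len(g) > 1)
        tp = max(remaining, key=gain)
        if gain(tp) == 0:
            break                      # no test point separates anything further
        chosen.append(tp)
        remaining.remove(tp)
        groups = [set(f for f in g if dictionary[tp][f] == code)
                  for g in groups for code in set(dictionary[tp][f] for f in g)]
    return chosen

# toy integer-coded fault dictionary: test point -> {fault: code}
D = {"t1": {"f1": 0, "f2": 0, "f3": 1, "f4": 1},
     "t2": {"f1": 0, "f2": 1, "f3": 0, "f4": 1},
     "t3": {"f1": 0, "f2": 0, "f3": 0, "f4": 1}}
```

Here t1 and t2 each split the four faults evenly (1 bit), while t3 does not (0.81 bits), so the greedy search isolates all four faults with just two test points.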
Directory of Open Access Journals (Sweden)
Juanjo Ugartemendia
2013-09-01
This paper presents a hydrogen powered hybrid solid oxide fuel cell-steam turbine (SOFC-ST) system and studies its optimal operating conditions. This type of installation can be very appropriate to complement the intermittent generation of renewable energies, such as wind generation. A dynamic model of an alternative hybrid SOFC-ST configuration that is especially suited to work with hydrogen is developed. The proposed system recuperates the waste heat of the high temperature fuel cell, to feed a bottoming cycle (BC) based on a steam turbine (ST). In order to optimize the behavior and performance of the system, a two-level control structure is proposed. Two controllers have been implemented for the stack temperature and fuel utilization factor. An upper supervisor generates optimal set-points in order to reach a maximal hydrogen efficiency. The simulation results obtained show that the proposed system allows one to reach high efficiencies at rated power levels.
Wang, Qingrui; Liu, Ruimin; Men, Cong; Guo, Lijia
2018-05-01
The genetic algorithm (GA) was combined with the Conversion of Land Use and its Effect at Small regional extent (CLUE-S) model to obtain an optimized land use pattern for controlling non-point source (NPS) pollution. The performance of the combination was evaluated. The effect of the optimized land use pattern on the NPS pollution control was estimated by the Soil and Water Assessment Tool (SWAT) model and an assistant map was drawn to support the land use plan for the future. The Xiangxi River watershed was selected as the study area. Two scenarios were used to simulate the land use change. Under the historical trend scenario (Markov chain prediction), the forest area decreased by 2035.06 ha, and was mainly converted into paddy and dryland area. In contrast, under the optimized scenario (genetic algorithm (GA) prediction), up to 3370 ha of dryland area was converted into forest area. Spatially, the conversion of paddy and dryland into forest occurred mainly in the northwest and southeast of the watershed, where the slope land occupied a large proportion. The organic and inorganic phosphorus loads decreased by 3.6% and 3.7%, respectively, in the optimized scenario compared to those in the historical trend scenario. GA showed a better performance in optimized land use prediction. A comparison of the land use patterns in 2010 under the real situation and in 2020 under the optimized situation showed that Shennongjia and Shuiyuesi should convert 1201.76 ha and 1115.33 ha of dryland into forest areas, respectively, which represented the greatest changes in all regions in the watershed. The results of this study indicated that GA and the CLUE-S model can be used to optimize the land use patterns in the future and that SWAT can be used to evaluate the effect of land use optimization on non-point source pollution control. These methods may provide support for land use plan of an area.
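A stripped-down version of the GA step can be sketched as follows. The parcel areas and per-hectare phosphorus export coefficients are invented for illustration, and the simple `load` function stands in for the SWAT evaluation; the chromosome is one convert-to-forest bit per dryland parcel under a total conversion-area constraint:

```python
import random

# Hypothetical parcels: (area_ha, P export per ha as dryland, as forest)
PARCELS = [(120, 2.4, 0.5), (80, 1.9, 0.4), (200, 2.8, 0.6),
           (60, 1.2, 0.3), (150, 2.1, 0.5)]
MAX_CONVERT = 300  # ha of dryland that may be converted to forest

def load(chrom):
    """Total phosphorus export for a conversion plan (1 = convert)."""
    return sum(a * (pf if c else pd) for c, (a, pd, pf) in zip(chrom, PARCELS))

def fitness(chrom):
    converted = sum(a for c, (a, _, _) in zip(chrom, PARCELS) if c)
    penalty = 1e6 if converted > MAX_CONVERT else 0.0
    return load(chrom) + penalty          # lower is better

def ga(pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    n = len(PARCELS)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]       # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]   # one-point crossover
            if rng.random() < 0.2:        # bit-flip mutation
                i = rng.randrange(n)
                child[i] = 1 - child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
```

The real study replaces `load` with a watershed model run and uses CLUE-S to allocate the optimized demands spatially, but the feasibility-plus-penalty structure is the same.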
International Nuclear Information System (INIS)
Sugny, D.; Bomble, L.; Ribeyre, T.; Dulieu, O.; Desouter-Lecomte, M.
2009-01-01
Implementation of quantum controlled-NOT (CNOT) gates in realistic molecular systems is studied using stimulated Raman adiabatic passage (STIRAP) techniques optimized in the time domain by genetic algorithms or coupled with optimal control theory. In the first case, with an adiabatic solution (a series of STIRAP processes) as starting point, we optimize in the time domain different parameters of the pulses to obtain a high fidelity in two realistic cases under consideration. A two-qubit CNOT gate constructed from different assignments in rovibrational states is considered in diatomic (NaCs) or polyatomic (SCCl2) molecules. The difficulty of encoding logical states in pure rotational states with STIRAP processes is illustrated. In such circumstances, the gate can be implemented by optimal control theory and the STIRAP sequence can then be used as an interesting trial field. We discuss the relative merits of the two methods for rovibrational computing (structure of the control field, duration of the control, and efficiency of the optimization).
Fixed point theory, variational analysis, and optimization
Al-Mezel, Saleh Abdullah R; Ansari, Qamrul Hasan
2015-01-01
"There is a real need for this book. It is useful for people who work in areas of nonlinear analysis, optimization theory, variational inequalities, and mathematical economics." - Nan-Jing Huang, Sichuan University, Chengdu, People's Republic of China
A point-based rendering approach for real-time interaction on mobile devices
Institute of Scientific and Technical Information of China (English)
LIANG XiaoHui; ZHAO QinPing; HE ZhiYing; XIE Ke; LIU YuBo
2009-01-01
The mobile device is an important interactive platform. Due to limitations in computation, memory, display area and energy, how to realize efficient, real-time interaction with 3D models on mobile devices is an important research topic. Considering the features of mobile devices, this paper adopts a remote rendering mode and point models, and then proposes a transmission and rendering approach that supports real-time interaction. First, an improved simplification algorithm based on MLS and the display resolution of mobile devices is proposed. Then, a hierarchy selection scheme for point models and a QoS transmission control strategy are given, based on the operator's area of interest, the interest degree of objects in the virtual environment, and the rendering error; these reduce energy consumption. Finally, the rendering and interaction of point models are completed on mobile devices. The experiments show that our method is efficient.
Optimal redundant systems for works with random processing time
International Nuclear Information System (INIS)
Chen, M.; Nakagawa, T.
2013-01-01
This paper studies the optimal redundancy policies for a manufacturing system processing jobs with random working times. The redundant units of the parallel systems and standby systems are subject to stochastic failures during the continuous production process. First, a job consisting of only one work is considered for both redundant systems and the expected cost functions are obtained. Next, each redundant system with a random number of units is assumed for a single work. The expected cost functions and the optimal expected numbers of units are derived for redundant systems. Subsequently, the production processes of N tandem works are introduced for parallel and standby systems, and the expected cost functions are also summarized. Finally, the number of works is estimated by a Poisson distribution for the parallel and standby systems. Numerical examples are given to demonstrate the optimization problems of redundant systems.
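For the single-work parallel system, the tradeoff reduces to a one-dimensional search, sketched below with assumed unit and failure-penalty costs (not the paper's cost model):

```python
def expected_cost_parallel(n, q, c_unit, c_fail):
    """Expected cost of running one job on an n-unit parallel system:
    each unit independently fails before job completion with prob. q,
    and the job is lost (cost c_fail) only if all n units fail."""
    return c_unit * n + c_fail * q ** n

def optimal_n(q, c_unit, c_fail, n_max=50):
    """Smallest-cost number of parallel units by direct search."""
    return min(range(1, n_max + 1),
               key=lambda n: expected_cost_parallel(n, q, c_unit, c_fail))

n_star = optimal_n(q=0.3, c_unit=1.0, c_fail=100.0)
```

With q = 0.3, c_unit = 1 and c_fail = 100 the expected cost n + 100·0.3^n is minimized at n = 4: adding units first buys large reliability gains, then the linear unit cost dominates.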
Accuracy of multi-point boundary crossing time analysis
Directory of Open Access Journals (Sweden)
J. Vogt
2011-12-01
Recent multi-spacecraft studies of solar wind discontinuity crossings using the timing (boundary plane triangulation) method gave boundary parameter estimates that are significantly different from those of the well-established single-spacecraft minimum variance analysis (MVA) technique. A large survey of directional discontinuities in Cluster data turned out to be particularly inconsistent in the sense that multi-point timing analyses did not identify any rotational discontinuities (RDs), whereas the MVA results of the individual spacecraft suggested that RDs form the majority of events. To make multi-spacecraft studies of discontinuity crossings more conclusive, the present report addresses the accuracy of the timing approach to boundary parameter estimation. Our error analysis is based on the reciprocal vector formalism and takes into account uncertainties both in crossing times and in the spacecraft positions. A rigorous error estimation scheme is presented for the general case of correlated crossing time errors and arbitrary spacecraft configurations. Crossing time error covariances are determined through cross-correlation analyses of the residuals. The principal influence of the spacecraft array geometry on the accuracy of the timing method is illustrated using error formulas for the simplified case of mutually uncorrelated and identical errors at different spacecraft. The full error analysis procedure is demonstrated for a solar wind discontinuity as observed by the Cluster FGM instrument.
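The core of the timing method is a small linear system. For a planar boundary moving at constant velocity, the crossing data satisfy (r_i - r_0)·m = t_i - t_0 with m = n/V, so the normal and speed follow from a least-squares solve. A minimal sketch with synthetic positions and times (the paper's error-covariance machinery is omitted):

```python
import numpy as np

def timing_analysis(positions, times):
    """Multi-spacecraft timing method: solve (r_i - r_0) . m = t_i - t_0
    for m = n / V in the least-squares sense, then recover the unit
    boundary normal n and the boundary speed V along it."""
    r = np.asarray(positions, dtype=float)
    t = np.asarray(times, dtype=float)
    A = r[1:] - r[0]                 # position differences
    b = t[1:] - t[0]                 # crossing-time differences
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    V = 1.0 / np.linalg.norm(m)
    n = m * V
    return n, V

# synthetic four-point (tetrahedron-like) configuration, positions in km
pos = [(0, 0, 0), (100, 10, 0), (0, 120, 15), (20, 0, 110)]
true_n = np.array([1.0, 2.0, 2.0]) / 3.0   # unit normal
true_V = 50.0                              # km/s
times = [np.dot(p, true_n) / true_V for p in pos]
n_est, V_est = timing_analysis(pos, times)
```

With four spacecraft the system is square; with noisy times and more spacecraft the same least-squares solve applies, and the error analysis of the paper quantifies how the array geometry amplifies the time uncertainties.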
Verhoeven, Ronald; Dalmau Codina, Ramon; Prats Menéndez, Xavier; de Gelder, Nico
2014-01-01
In this paper an initial implementation of a real-time aircraft trajectory optimization algorithm is presented. The aircraft trajectory for descent and approach is computed for minimum use of thrust and speed brake in support of a "green" continuous descent and approach flight operation, while complying with ATC time constraints for maintaining runway throughput and co...
Optimal model-free prediction from multivariate time series
Runge, Jakob; Donner, Reik V.; Kurths, Jürgen
2015-05-01
Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
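The preselection idea can be illustrated with a crude stand-in: rank candidate drivers by lagged correlation with the target and keep only the strongest before fitting any predictor (the paper's actual preselection is information-theoretic and handles multivariate causal structure):

```python
import random

def pearson(a, b):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def preselect(candidates, target, lag=1):
    """Causal-preselection stand-in: keep the candidate whose lag-`lag`
    past values correlate most strongly with the target."""
    scores = {name: abs(pearson(series[:-lag], target[lag:]))
              for name, series in candidates.items()}
    return max(scores, key=scores.get)

# synthetic system: y is driven by x1 at lag 1; x2 is independent noise
rng = random.Random(42)
x1 = [rng.gauss(0, 1) for _ in range(1000)]
x2 = [rng.gauss(0, 1) for _ in range(1000)]
y = [0.0]
for t in range(1, 1000):
    y.append(0.8 * x1[t - 1] + 0.2 * rng.gauss(0, 1))

driver = preselect({"x1": x1, "x2": x2}, y)
```

Once the driver set is reduced this way, a nearest-neighbor or parametric predictor can be fitted on far fewer inputs, which is the dimensionality-reduction benefit the paper formalizes.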
International Nuclear Information System (INIS)
Baxa, Jan; Vendiš, Tomáš; Moláček, Jiří; Štěpánková, Lucie; Flohr, Thomas; Schmidt, Bernhard; Korporaal, Johannes G.; Ferda, Jiří
2014-01-01
Purpose: To verify the technical feasibility of low contrast volume (40 mL) run-off CT angiography (run-off CTA) with individual scan time optimization based on the double-level test bolus technique. Materials and methods: A prospective study of 92 consecutive patients who underwent run-off CTA performed with 40 mL of contrast medium (injection rate of 6 mL/s) and optimized scan times on a second-generation dual-source CT. Individual optimized scan times were calculated from aortopopliteal transit times obtained on the basis of the double-level test bolus technique – a single injection of a 10 mL test bolus and dynamic acquisitions at two levels (abdominal aorta and popliteal arteries). Intraluminal attenuation (HU) was measured at 6 levels (aorta, iliac, femoral and popliteal arteries, middle and distal lower legs) and subjective quality (3-point score) was assessed. Relations between image quality, test bolus parameters and arterial circulation involvement were analyzed. Results: High mean attenuation values (HU) (468; 437; 442; 440; 342; 274) and quality scores at all monitored levels were achieved. In 91 patients (0.99) sufficient diagnostic quality (score 1–2) in the aorta, iliac and femoral arteries was determined. A total of 6 patients (0.07) were not evaluable in the distal lower legs. Only a weak indirect correlation between image quality and test-bolus parameters was found at the iliac, femoral and popliteal levels (r values: −0.263, −0.298 and −0.254). A statistically significant difference in test-bolus parameters and image quality was found between patients with occlusive and aneurysmal disease. Conclusion: We proved the technical feasibility and sufficient quality of run-off CTA with a low volume of contrast medium and a scan time optimized according to the aortopopliteal transit time calculated from the double-level test bolus
Optimization time synthesis of nucleotide labelled [γ-32P]-ATP
International Nuclear Information System (INIS)
Rahman, Wira Y; Sarmini, Endang; Herlina; Lubis, Hotman; Triyanto; Hambali
2013-01-01
Adenosine triphosphate labelled with γ-32P ([γ-32P]-ATP) has been widely used in biotechnology research, usually as a tracer to study aspects of physiological and pathological processes. In order to support biotechnology research in Indonesia, a process for production of [γ-32P]-ATP by enzymatic reaction was used, with DL-glyceraldehyde 3-phosphate, adenosine diphosphate (ADP) and H3 32PO4 as precursors, and the enzymes glyceraldehyde 3-phosphate dehydrogenase, 3-phosphoglyceric phosphokinase and lactate dehydrogenase. Optimization of the incubation time of the labelled-nucleotide synthesis process was performed to find the optimum conditions, in terms of the most advantageous time in the synthesis process. With the synthesis successfully optimized for incubation time, the results suggest the process can be used for producing [γ-32P]-ATP to support the provision of radiolabelled nucleotides for biotechnology research in Indonesia. (author)
A man in the loop trajectory optimization program (MILTOP)
Reinfields, J.
1974-01-01
An interactive trajectory optimization program was developed for use in the initial fixing of launch configurations. The program is called MILTOP, for Man-In-the-Loop Trajectory Optimization Program. The program is designed to facilitate quick-look studies using man-machine decision combinations to reduce the time required to solve a given problem. MILTOP integrates the equations of motion of a point mass in three dimensions with drag as the only aerodynamic force present. Any point in time at which an integration step terminates may be used as a decision break point, with complete user control over all variables and routines at this point. Automatic phases are provided for different modes of control: vertical rise, pitch-over, gravity turn, chi-freeze and control turn. Stage parameters are initialized from a separate routine, so the user may fly as many stages as the problem demands. The MILTOP system is used either interactively on storage scope consoles or in batch mode with numerical output on the line printer.
Expanded GDoF-optimality Regime of Treating Interference as Noise in the $M\\times 2$ X-Channel
Gherekhloo, Soheil; Chaaban, Anas; Sezgin, Aydin
2016-01-01
-TIN and 2-IC-TIN. While in the first variant the M×2 X-channel is reduced to a point-to-point (P2P) channel, in the second variant the setup is reduced to a two-user interference channel in which the receivers use TIN. The optimality of these two variants
Energy Technology Data Exchange (ETDEWEB)
Baisden, W.T., E-mail: t.baisden@gns.cri.nz [National Isotope Centre, GNS Science, P.O. Box 31312, Lower Hutt (New Zealand); Canessa, S. [National Isotope Centre, GNS Science, P.O. Box 31312, Lower Hutt (New Zealand)
2013-01-15
In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of ¹⁴C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ≈500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of ¹⁴C to determine residence times, by estimating the amount of 'bomb ¹⁴C' incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point ¹⁴C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C ('passive fraction'), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
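The two-time-point idea can be sketched with a one-pool turnover model driven by an assumed atmospheric curve (the curve shape, years and rate grid below are all invented for illustration; real applications use the measured bomb-¹⁴C record):

```python
import math

def f_atm(year):
    """Crude stand-in for the atmospheric bomb-14C curve (fraction
    modern): flat pre-1955, rising to a peak near 1964, then relaxing."""
    if year < 1955:
        return 1.0
    if year < 1964:
        return 1.0 + (year - 1955) / 9.0
    return 1.0 + math.exp(-(year - 1964) / 16.0)

def soil_f(k, year, f0=1.0, start=1950.0, dt=0.1):
    """One-pool soil model dF/dt = k (F_atm - F), forward Euler;
    k is the turnover rate (1 / mean residence time)."""
    f, t = f0, start
    while t < year:
        f += dt * k * (f_atm(t) - f)
        t += dt
    return f

def fit_turnover_rate(obs):
    """Grid-search the turnover rate k that best matches observed
    14C fractions [(year, fraction_modern), ...]."""
    ks = [i / 1000.0 for i in range(1, 500)]
    return min(ks, key=lambda k: sum((soil_f(k, y) - f) ** 2 for y, f in obs))

# two sampling times a couple of decades apart recover the rate
k_true = 0.05            # i.e. a 20-year mean residence time
obs = [(1970.0, soil_f(k_true, 1970.0)), (1995.0, soil_f(k_true, 1995.0))]
k_fit = fit_turnover_rate(obs)
```

The fit works because the amount of bomb ¹⁴C taken up between the two sampling dates is a strong function of the turnover rate, which is exactly why two well-spaced time points constrain residence time far better than a single measurement.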
Bayesian inference for multivariate point processes observed at sparsely distributed times
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.
We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper as well as other spatial-temporal situations. Keywords: Bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo...
Methods optimization for the first time core critical
International Nuclear Information System (INIS)
Yan Liang
2014-01-01
The PWR reactor core commissioning programme specifies the content of the first-criticality reactor physics experiments and describes the physical test methods. However, the methods used are not all exactly the same, though each is effective. This article aims to enhance reactor safety during the approach to first criticality, shorten the overall duration of the first-criticality physical tests, and improve the completeness and accuracy of the first-criticality test data, ultimately improving the economic performance of plant operation, by adopting improved physical test methods such as sectional dilution and power feedback for the Doppler point. (author)
Changes in Optimism Are Associated with Changes in Health Over Time Among Older Adults
Chopik, William J.; Kim, Eric S.; Smith, Jacqui
2016-01-01
Little is known about how optimism differs by age and changes over time, particularly among older adults. Even less is known about how changes in optimism are related to changes in physical health. We examined age differences and longitudinal changes in optimism in 9,790 older adults over a four-year period. We found an inverted U-shaped pattern between optimism and age both cross-sectionally and longitudinally, such that optimism generally increased in older adults before decreasing. Increases in optimism over a four-year period were associated with improvements in self-rated health and fewer chronic illnesses over the same time frame. The findings from the current study are consistent with changes in emotion regulation strategies employed by older adults and age-related changes in well-being. PMID:27114753
Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications
DEFF Research Database (Denmark)
Pop, Paul; Eles, Petru; Peng, Zebo
2004-01-01
We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic to multi-clusters: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.
Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications
DEFF Research Database (Denmark)
Pop, Paul; Eles, Petru; Peng, Zebo
2006-01-01
We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic to multi-clusters: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.
Setting the optimal type of equipment to be adopted and the optimal time to replace it
Albici, Mihaela
2009-01-01
Mathematical models of equipment wear and tear, together with replacement theory, aim at deciding which type of equipment to purchase, the optimal exploitation time of the equipment, when and how to replace or repair it or to ensure its spare parts, the equipment’s performance in the context of technical progress, the opportunities to modernize it, etc.
Dynamic ADMM for Real-time Optimal Power Flow: Preprint
Energy Technology Data Exchange (ETDEWEB)
Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2018-02-23
This paper considers distribution networks featuring distributed energy resources (DERs), and develops a dynamic optimization method to maximize given operational objectives in real time while adhering to relevant network constraints. The design of the dynamic algorithm is based on suitable linearizations of the AC power flow equations, and it leverages the so-called alternating direction method of multipliers (ADMM). The steps of the ADMM, however, are suitably modified to accommodate appropriate measurements from the distribution network and the DERs. With the aid of these measurements, the resultant algorithm can enforce given operational constraints in spite of inaccuracies in the representation of the AC power flows, and it avoids ubiquitous metering to gather the state of non-controllable resources. Optimality and convergence of the proposed algorithm are established in terms of tracking of the solution of a convex surrogate of the AC optimal power flow problem.
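As a minimal illustration of the ADMM machinery underlying such schemes (a toy scalar consensus problem, not the paper's OPF formulation), consider minimizing f(x) + g(z) subject to x = z, with f(x) = (x - a)^2 and g(z) = (z - b)^2:

```python
# Toy ADMM sketch: minimize (x - a)^2 + (z - b)^2 subject to x = z.
# The consensus optimum is x = z = (a + b) / 2.
def admm(a, b, rho=1.0, iters=100):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x (x - a)^2 + (rho/2)(x - z + u)^2
        x = (2 * a + rho * (z - u)) / (2 + rho)
        # z-update: argmin_z (z - b)^2 + (rho/2)(x - z + u)^2
        z = (2 * b + rho * (x + u)) / (2 + rho)
        # scaled dual update enforces the consensus constraint x = z
        u += x - z
    return x, z

x, z = admm(1.0, 3.0)
print(round(x, 4), round(z, 4))  # both converge to the consensus optimum 2.0
```

In the paper's setting the x- and z-updates would instead involve the linearized power flow model and network measurements, but the alternating structure is the same.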
Optimizing Completion Time and Energy Consumption in a Bidirectional Relay Network
DEFF Research Database (Denmark)
Liu, Huaping; Sun, Fan; Thai, Chan
2012-01-01
consumption required for multiple flows depends on the current channel realizations, transmission methods used and, notably, the relation between the data sizes of different source nodes. In this paper we investigate the shortest completion time and minimal energy consumption in a two-way relay wireless...... arises for the minimal required energy. While the requirement for minimal energy consumption is obvious, the shortest completion time is relevant when certain multi-node network needs to reserve the wireless medium in order to carry out the data exchange among its nodes. The completion time/energy...... network. The system applies optimal time multiplexing of several known transmission methods, including one-way relaying and wireless network coding (WNC). We show that when the relay applies Amplify-and-Forward (AF), both minimizations are linear optimization problems. On the other hand, when the relay...
Free terminal time optimal control problem for the treatment of HIV infection
Directory of Open Access Journals (Sweden)
Amine Hamdache
2016-01-01
to provide the explicit formulations of the optimal controls. The corresponding optimality system with the additional transversality condition for the terminal time is derived and solved numerically using an adapted iterative method with a Runge-Kutta fourth order scheme and a gradient method routine.
Van Oort, N.; Boterman, J.W.; Van Nes, R.
2012-01-01
This paper presents research on optimizing the service reliability of long-headway services in urban public transport. Setting the driving time, and thus the departure time at stops, is an important decision when optimizing reliability in urban public transport. The choice of the percentile out of
Control strategy optimization of HVAC plants
Energy Technology Data Exchange (ETDEWEB)
Facci, Andrea Luigi; Zanfardino, Antonella [Department of Engineering, University of Napoli “Parthenope” (Italy); Martini, Fabrizio [Green Energy Plus srl (Italy); Pirozzi, Salvatore [SIAT Installazioni spa (Italy); Ubertini, Stefano [School of Engineering (DEIM) University of Tuscia (Italy)
2015-03-10
In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve a higher energy efficiency in use. Semi-empiric numerical models of the plant components are used to predict their performances as a function of their set-point and the environmental and occupied space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs, while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to make it applicable to real-time setting.
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
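A minimal serial, synchronous PSO sketch on a test function (illustrative parameter values; the paper's contribution is replacing the synchronous evaluation loop with asynchronous parallel evaluations):

```python
import random

# Serial, synchronous PSO minimizing a test function (the sphere function).
def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                             # personal bests
    pbest = [f(x) for x in X]
    g = P[min(range(n), key=lambda i: pbest[i])][:]   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < f(g):
                    g = X[i][:]
    return g

best = pso(lambda x: sum(v * v for v in x))
print(best)  # near the optimum [0, 0]
```

In the asynchronous variant of the paper, each particle would be updated as soon as its (possibly slow) evaluation returns, rather than waiting for the whole swarm to finish an iteration.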
Topology optimization for nano-photonics
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard; Sigmund, Ole
2011-01-01
Topology optimization is a computational tool that can be used for the systematic design of photonic crystals, waveguides, resonators, filters and plasmonics. The method was originally developed for mechanical design problems but has within the last six years been applied to a range of photonics...... applications. Topology optimization may be based on finite element and finite difference type modeling methods in both frequency and time domain. The basic idea is that the material density of each element or grid point is a design variable, hence the geometry is parameterized in a pixel-like fashion....... The optimization problem is efficiently solved using mathematical programming-based optimization methods and analytical gradient calculations. The paper reviews the basic procedures behind topology optimization, a large number of applications ranging from photonic crystal design to surface plasmonic devices...
Directory of Open Access Journals (Sweden)
Mina Ghanbarikarekani
2016-06-01
Optimization of signal timing in urban networks is usually done by minimizing delay times or queue lengths. Since the effect of each intersection on the whole network is not considered in these methods, traffic congestion may occur in network links. Therefore, this paper aims to provide a timing optimization algorithm for traffic signals using an internal timing policy based on balancing the queue time ratio of vehicles in network links. In the proposed algorithm, the difference between the real queue time ratio and the optimum one for each link of an intersection was minimized. To evaluate the efficiency of the proposed algorithm on traffic performance, it was applied in a hypothetical network. By comparing the simulation software outputs before and after implementing the algorithm, it was concluded that the queue time ratio algorithm improved the traffic parameters by increasing the flow as well as reducing the delay time and density of the network.
Empiric model for mean generation time adjustment factor for classic point kinetics equations
Energy Technology Data Exchange (ETDEWEB)
Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear
2017-11-01
Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that adapts the point reactor kinetics equations to the realistic scenario. (author)
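A sketch of the finite-difference idea for the classic one-delayed-group point kinetics equations (explicit Euler; parameter values are illustrative, not the paper's):

```python
# Explicit-Euler (finite difference) sketch of the one-delayed-group point
# kinetics equations:  dn/dt = (rho - beta)/Lam * n + lam * C,
#                      dC/dt = beta/Lam * n - lam * C.
def point_kinetics(rho, beta=0.0065, Lam=1e-4, lam=0.08, dt=1e-5, t_end=1.0):
    n = 1.0                      # normalized neutron density
    C = beta * n / (Lam * lam)   # precursor concentration at equilibrium
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam * n + lam * C) * dt
        dC = (beta / Lam * n - lam * C) * dt
        n, C = n + dn, C + dC
    return n

print(point_kinetics(0.0))    # zero reactivity: equilibrium is preserved
print(point_kinetics(0.001))  # small positive reactivity: delayed-critical rise
```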
Free terminal time optimal control problem of an HIV model based on a conjugate gradient method.
Jang, Taesoo; Kwon, Hee-Dae; Lee, Jeehyun
2011-10-01
The minimum duration of treatment periods and the optimal multidrug therapy for human immunodeficiency virus (HIV) type 1 infection are considered. We formulate an optimal tracking problem, attempting to drive the states of the model to a "healthy" steady state in which the viral load is low and the immune response is strong. We study an optimal time frame as well as HIV therapeutic strategies by analyzing the free terminal time optimal tracking control problem. The minimum duration of treatment periods and the optimal multidrug therapy are found by solving the corresponding optimality systems with the additional transversality condition for the terminal time. We demonstrate by numerical simulations that the optimal dynamic multidrug therapy can lead to the long-term control of HIV by the strong immune response after discontinuation of therapy.
Optimal operation of smart houses by a real-time rolling horizon algorithm
Paterakis, N.G.; Pappi, I.N.; Catalão, J.P.S.; Erdinc, O.
2016-01-01
In this paper, a novel real-time rolling horizon optimization framework for the optimal operation of a smart household is presented. A home energy management system (HEMS) model based on mixed-integer linear programming (MILP) is developed in order to minimize the energy procurement cost considering
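The rolling-horizon structure can be sketched with a toy per-window rule standing in for the paper's MILP-based HEMS (prices, horizon length, and the unit battery model here are illustrative assumptions):

```python
# Rolling-horizon sketch: at every step, look ahead over the next H price
# slots with updated information, apply only the current decision, then roll
# forward. The toy rule (charge a unit battery at a local price minimum,
# discharge at a local maximum) replaces the paper's MILP optimizer.
def rolling_horizon(prices, H=4, cap=1.0):
    soc, cost = 0.0, 0.0              # state of charge, cumulative cost
    for k in range(len(prices)):
        window = prices[k:k + H]      # receding look-ahead window
        if soc < cap and prices[k] == min(window):
            soc, cost = soc + 1.0, cost + prices[k]    # buy energy now
        elif soc > 0 and prices[k] == max(window):
            soc, cost = soc - 1.0, cost - prices[k]    # use/sell at high price
    return cost

print(rolling_horizon([3, 1, 4, 2, 5, 1, 6]))  # negative cost = net benefit
```

The key property illustrated is that only the first decision of each look-ahead window is committed; everything else is re-optimized as new information arrives.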
Optimizing some 3-stage W-methods for the time integration of PDEs
Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.
2017-07-01
The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1] several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. Besides, the optimization of several specific methods for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ fy(yn)) was carried out. Also, some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.
Optimizing Real-Time Vaccine Allocation in a Stochastic SIR Model.
Directory of Open Access Journals (Sweden)
Chantal Nguyen
Real-time vaccination following an outbreak can effectively mitigate the damage caused by an infectious disease. However, in many cases, available resources are insufficient to vaccinate the entire at-risk population, logistics result in delayed vaccine deployment, and the interaction between members of different cities facilitates a wide spatial spread of infection. Limited vaccine, time delays, and interaction (or coupling) of cities lead to tradeoffs that impact the overall magnitude of the epidemic. These tradeoffs mandate investigation of optimal strategies that minimize the severity of the epidemic by prioritizing allocation of vaccine to specific subpopulations. We use an SIR model to describe the disease dynamics of an epidemic which breaks out in one city and spreads to another. We solve a master equation to determine the resulting probability distribution of the final epidemic size. We then identify tradeoffs between vaccine, time delay, and coupling, and we determine the optimal vaccination protocols resulting from these tradeoffs.
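A single-city stochastic SIR realization can be sketched with a Gillespie-style event loop (the paper works with coupled cities and solves a master equation for the final-size distribution; the parameters below are illustrative):

```python
import random

# Gillespie-style stochastic SIR for one city. Time is not tracked,
# since only the final epidemic size is needed here.
def stochastic_sir(S, I, R, beta, gamma, seed=0):
    rng = random.Random(seed)
    N = S + I + R
    while I > 0:
        inf_rate = beta * S * I / N   # rate of S -> I events
        rec_rate = gamma * I          # rate of I -> R events
        # choose the next event in proportion to its rate
        if rng.random() * (inf_rate + rec_rate) < inf_rate:
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
    return R  # final epidemic size (everyone ever infected)

sizes = [stochastic_sir(990, 10, 0, beta=0.3, gamma=0.1, seed=s)
         for s in range(20)]
print(min(sizes), max(sizes))
```

Repeating such runs (or, as in the paper, solving the master equation directly) yields the distribution of final sizes under different vaccine allocations.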
International Nuclear Information System (INIS)
Feng Guangwen; Hu Youhua; Liu Qian
2009-01-01
In this paper, the application of the entropy weight TOPSIS method to the optimal layout of points for monitoring the Xinjiang radiation environment is introduced. With the help of SAS software, the method has been found to be ideal and feasible. It can serve as a reference for further radiation environment monitoring in similar regions. As the method brings great convenience and greatly reduces the inspection work, it is simple, flexible and effective for a comprehensive evaluation. (authors)
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
Directory of Open Access Journals (Sweden)
Joao CARDOSO NETO
2012-01-01
Chile is a country with great attractions for tourists in South America and the whole world. Among the many Chilean tourist attractions, the city of Vina del Mar is one of the highlights, recognized nationally and internationally as one of the most beautiful places for summer. In Vina del Mar tourists have many options for leisure: besides pretty beaches, e.g. Playa Renaca, the city has beautiful squares and castles, e.g. Castillo Wulff, built more than 100 (one hundred) years ago. Five tourist itineraries already exist there, so this work was developed in order to determine the best routes for these existing itineraries and to create a single route that includes all the tourist points in Vina del Mar, so that tourists visiting this city can minimize the time spent traveling and optimize their moments of leisure, taking the opportunity to see all the city's attractions. To determine shorter ways and then propose suggestions for improving the quality of the tourist service offered, the exact method was used, by solving the mathematical model of the TSP (Traveling Salesman Problem), together with the heuristic method, using the cheapest ("most economic") insertion algorithm.
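A sketch of the cheapest-insertion TSP heuristic on illustrative coordinates (not the actual Vina del Mar attractions):

```python
from math import dist, inf

# Cheapest-insertion heuristic: grow a subtour by always inserting the
# remaining city at the position with the smallest increase in tour length.
def cheapest_insertion(points):
    tour = [0, 1, 0]                       # start with a two-city subtour
    remaining = set(range(2, len(points)))
    while remaining:
        best = (inf, None, None)
        for c in remaining:
            for i in range(len(tour) - 1):
                a, b = tour[i], tour[i + 1]
                # cost increase of inserting c between a and b
                delta = (dist(points[a], points[c])
                         + dist(points[c], points[b])
                         - dist(points[a], points[b]))
                if delta < best[0]:
                    best = (delta, c, i + 1)
        _, c, pos = best
        tour.insert(pos, c)
        remaining.discard(c)
    return tour

pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]  # illustrative attraction sites
print(cheapest_insertion(pts))
```

The exact TSP solution (from the mathematical model) would serve as the benchmark against which such a heuristic tour is compared.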
Ahmet Demir; Utku kose
2017-01-01
In fields that require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an...
Shortest path problem on a grid network with unordered intermediate points
Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen
2017-10-01
We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments are performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute-force search show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution and uses significantly less computation time.
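On small instances, the exact optimum that such a heuristic is benchmarked against can be computed by brute force: BFS gives pairwise grid distances, and every ordering of the intermediate points is tried (a sketch of the problem setting, not the paper's two-stage algorithm):

```python
from collections import deque
from itertools import permutations

# BFS distances from src to every reachable cell of a 0/1 obstacle grid.
def bfs_dist(grid, src):
    rows, cols = len(grid), len(grid[0])
    d = {src: 0}
    q = deque([src])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in d:
                d[(nr, nc)] = d[(r, c)] + 1
                q.append((nr, nc))
    return d

# Exact optimum: minimize over all visiting orders of the intermediate points.
def shortest_via(grid, start, end, intermediates):
    dists = {p: bfs_dist(grid, p) for p in [start] + intermediates}
    return min(
        dists[start][order[0]]
        + sum(dists[order[i]][order[i + 1]] for i in range(len(order) - 1))
        + dists[order[-1]][end]
        for order in permutations(intermediates))

grid = [[0] * 4 for _ in range(4)]      # 4x4 open grid, 0 = free cell
print(shortest_via(grid, (0, 0), (3, 3), [(0, 3), (3, 0)]))  # 12
```

The factorial blow-up in the number of orderings is exactly what motivates a heuristic for larger numbers of intermediate points.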
Nazemizadeh, M.; Rahimi, H. N.; Amini Khoiy, K.
2012-03-01
This paper presents an optimal control strategy for optimal trajectory planning of mobile robots by considering the nonlinear dynamic model and nonholonomic constraints of the system. The nonholonomic constraints of the system are introduced by a nonintegrable set of differential equations which represent kinematic restrictions on the motion. Lagrange's principle is employed to derive the nonlinear equations of the system. Then, the optimal path planning of the mobile robot is formulated as an optimal control problem. To set up the problem, the nonlinear equations of the system are taken as constraints, and a minimum-energy objective function is defined. To solve the problem, an indirect solution of the optimal control method is employed, and the optimality conditions are derived as a set of coupled nonlinear differential equations. The optimality equations are solved numerically, and various simulations are performed for a nonholonomic mobile robot to illustrate the effectiveness of the proposed method.
Optimal distribution of integration time for intensity measurements in Stokes polarimetry.
Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng
2015-10-19
We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution for the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improves the measurement accuracy of the polarimetric system.
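The Lagrange-multiplier structure can be sketched with a generic variance model in which measurement i contributes a_i / t_i to the total variance (a shot-noise-style assumption, not the paper's exact Stokes-vector expression):

```python
from math import sqrt, isclose

# Minimize sum_j a_j / t_j subject to sum_j t_j = T.
# Stationarity of sum_j a_j/t_j + mu * (sum_j t_j - T) gives
# -a_i / t_i**2 + mu = 0, i.e. t_i proportional to sqrt(a_i).
def optimal_times(a, T):
    s = sum(sqrt(ai) for ai in a)
    return [T * sqrt(ai) / s for ai in a]

def total_variance(a, t):
    return sum(ai / ti for ai, ti in zip(a, t))

a, T = [1.0, 4.0, 1.0, 4.0], 4.0       # illustrative weights, total time
t_opt = optimal_times(a, T)
print(t_opt)
print(total_variance(a, t_opt))        # ~9.0
print(total_variance(a, [1.0] * 4))    # 10.0: uniform split is worse
```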
Real-Time Demand Side Management Algorithm Using Stochastic Optimization
Directory of Open Access Journals (Sweden)
Moses Amoasi Acquah
2018-05-01
A demand side management technique is deployed along with battery energy-storage systems (BESS) to lower the electricity cost by mitigating the peak load of a building. Most of the existing methods rely on manual operation of the BESS, or even an elaborate building energy-management system resorting to a deterministic method that is susceptible to unforeseen growth in demand. In this study, we propose a real-time optimal operating strategy for BESS based on density demand forecasts and stochastic optimization. This method takes into consideration uncertainties in demand when computing an optimal BESS schedule, making it robust compared to the deterministic case. The proposed method is verified and tested against existing algorithms. Data obtained from a real site in South Korea is used for verification and testing. The results show that the proposed method is effective, even for cases where the forecasted demand deviates from the observed demand.
Exact Identification of a Quantum Change Point
Sentís, Gael; Calsamiglia, John; Muñoz-Tapia, Ramon
2017-10-01
The detection of change points is a pivotal task in statistical analysis. In the quantum realm, it is a new primitive where one aims at identifying the point where a source that supposedly prepares a sequence of particles in identical quantum states starts preparing a mutated one. We obtain the optimal procedure to identify the change point with certainty—naturally at the price of having a certain probability of getting an inconclusive answer. We obtain the analytical form of the optimal probability of successful identification for any length of the particle sequence. We show that the conditional success probabilities of identifying each possible change point show an unexpected oscillatory behavior. We also discuss local (online) protocols and compare them with the optimal procedure.
Schmuck, Sebastian; Mamach, Martin; Wilke, Florian; von Klot, Christoph A; Henkenberens, Christoph; Thackeray, James T; Sohns, Jan M; Geworski, Lilli; Ross, Tobias L; Wester, Hans-Juergen; Christiansen, Hans; Bengel, Frank M; Derlin, Thorsten
2017-06-01
The aims of this study were to gain mechanistic insights into prostate cancer biology using dynamic imaging and to evaluate the usefulness of multiple time-point Ga-prostate-specific membrane antigen (PSMA) I&T PET/CT for the assessment of primary prostate cancer before prostatectomy. Twenty patients with prostate cancer underwent Ga-PSMA I&T PET/CT before prostatectomy. The PET protocol consisted of early dynamic pelvic imaging, followed by static scans at 60 and 180 minutes postinjection (p.i.). SUVs, time-activity curves, quantitative analysis based on a 2-tissue compartment model, Patlak analysis, histopathology, and Gleason grading were compared between prostate cancer and benign prostate gland. Primary tumors were identified on both early dynamic and delayed imaging in 95% of patients. Tracer uptake was significantly higher in prostate cancer compared with benign prostate tissue at any time point (P ≤ 0.0003) and increased over time. Consequently, the tumor-to-nontumor ratio within the prostate gland improved over time (2.8 at 10 minutes vs 17.1 at 180 minutes p.i.). Tracer uptake at both 60 and 180 minutes p.i. was significantly higher in patients with higher Gleason scores (P dynamic and static delayed Ga-PSMA ligand PET images. The tumor-to-nontumor ratio in the prostate gland improves over time, supporting a role of delayed imaging for optimal visualization of prostate cancer.
Directory of Open Access Journals (Sweden)
Rumiko Tashima
The Ki-67 index is an important biomarker for indicating the proliferation of cancer cells and is considered to be an effective prognostic factor for breast cancer. However, a standard cut-off point for the Ki-67 index has not yet been established. Therefore, the aim of this retrospective study was to determine an optimal cut-off point in order to establish it as a more accurate prognostic factor. Immunohistochemical analysis of the Ki-67 index was performed on 4329 patients with primary breast cancer from August 1987 to March 2012. Out of this sample, there were 3186 consecutive cases from September 1997 with simultaneous evaluations of ER, PgR and HER2 status. Cox's proportional hazards model was used to perform univariate and multivariate analyses of the factors related to OS. The hazard ratios (HR) and the p values were then compared to determine the optimal cut-off point for the Ki-67 index. The median Ki-67 index value was 20.5% (mean value 26.2%). The univariate analysis revealed a statistically significant negative correlation with DFS and OS, and the multivariate analysis revealed that the Ki-67 index value was a significant factor for DFS and OS. The top seven cut-off points were then carefully chosen based on the results of the univariate analysis, using the lowest p-values and the highest HR as the main selection criteria. The multivariate analysis of the factors for OS showed that the cut-off point of 20% had the highest HR in all of the cases. However, the cut-off point of 20% was only a significant factor for OS in the Luminal/HER2- subtype. There was no correlation between the Ki-67 index value and OS in any of the other subtypes. These data indicate that the optimal cut-off point of 20% is the most effective prognostic factor for Luminal/HER2- breast cancer.
Optimal Cotton Insecticide Application Termination Timing: A Meta-Analysis.
Griffin, T W; Zapata, S D
2016-08-01
The concept of insecticide termination timing is generally accepted among cotton (Gossypium hirsutum) researchers; however, exact timings are often disputed. Specifically, there is uncertainty regarding the last economic insecticide application to control fruit-feeding pests including tarnished plant bug (Lygus lineolaris (Palisot de Beauvois)), boll weevil (Anthonomus grandis), bollworm (Helicoverpa zea), tobacco budworm (Heliothis virescens), and cotton fleahopper (Pseudatomoscelis seriatus). A systematic review of prior studies was conducted within a meta-analytic framework. Nine publicly available articles were amalgamated to develop an optimal timing principle. These prior studies reported 53 independent multiple means comparison field experiments for a total of 247 trial observations. Stochastic plateau theory integrated with econometric meta-analysis methodology was applied to the meta-database to determine the shape of the functional form of both the agronomic optimal insecticide termination timing and corresponding yield potential. Results indicated that current university insecticide termination timing recommendations are later than overall estimated timing suggested. The estimated 159 heat units (HU) after the fifth position above white flower (NAWF5) was found to be statistically different than the 194 HU termination used as the status quo recommended termination timing. Insecticides applied after 159 HU may have been applied in excess, resulting in unnecessary economic and environmental costs. Empirical results also suggested that extending the insecticide termination time by one unit resulted in a cotton lint yield increase of 0.27 kilograms per hectare up to the timing where the plateau began. Based on economic analyses, profit-maximizing producers may cease application as soon as 124 HU after NAWF5. These results provided insights useful to improve production systems by applying inputs only when benefits were expected to be in excess of the
Variable-Field Analytical Ultracentrifugation: I. Time-Optimized Sedimentation Equilibrium
Ma, Jia; Metrick, Michael; Ghirlando, Rodolfo; Zhao, Huaying; Schuck, Peter
2015-01-01
Sedimentation equilibrium (SE) analytical ultracentrifugation (AUC) is a gold standard for the rigorous determination of macromolecular buoyant molar masses and the thermodynamic study of reversible interactions in solution. A significant experimental drawback is the long time required to attain SE, which is usually on the order of days. We have developed a method for time-optimized SE (toSE) with defined time-varying centrifugal fields that allow SE to be attained in a significantly (up to 10-fold) shorter time than is usually required. To achieve this, numerical Lamm equation solutions for sedimentation in time-varying fields are computed based on initial estimates of macromolecular transport properties. A parameterized rotor-speed schedule is optimized with the goal of achieving a minimal time to equilibrium while limiting transient sample preconcentration at the base of the solution column. The resulting rotor-speed schedule may include multiple over- and underspeeding phases, balancing the formation of gradients from strong sedimentation fluxes with periods of high diffusional transport. The computation is carried out in a new software program called TOSE, which also facilitates convenient experimental implementation. Further, we extend AUC data analysis to sedimentation processes in such time-varying centrifugal fields. Due to the initially high centrifugal fields in toSE and the resulting strong migration, it is possible to extract sedimentation coefficient distributions from the early data. This can provide better estimates of the size of macromolecular complexes and report on sample homogeneity early on, which may be used to further refine the prediction of the rotor-speed schedule. In this manner, the toSE experiment can be adapted in real time to the system under study, maximizing both the information content and the time efficiency of SE experiments. PMID:26287634
Directory of Open Access Journals (Sweden)
Mingjian Sun
2015-01-01
Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the time reversal algorithms based on nearest neighbor, linear, and cubic convolution interpolation, and that it can provide higher imaging quality using significantly fewer measurement positions or scanning times.
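The PSO component can be sketched generically. In the paper it tunes the SVM interpolator; here a toy two-variable objective stands in for the SVM's validation error, so the objective, bounds, and coefficients below are illustrative assumptions rather than the authors' settings:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over the box `bounds` with a basic particle swarm."""
    dim = len(bounds)
    rnd = random.Random(0)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # the swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the SVM interpolator's validation error over two hyperparameters:
best, err = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2,
                bounds=[(0.0, 10.0), (0.0, 2.0)])
```

Replacing the toy objective with a cross-validation error over the SVM's hyperparameter pair recovers the tuning loop the abstract describes.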
Efficient Algorithms for Segmentation of Item-Set Time Series
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
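A minimal sketch of the segmentation scheme, assuming the union as the measure function and a count of items missing at each time point as the segment difference (the paper defines several measure functions; these concrete choices are illustrative):

```python
def segment_difference(item_sets, seg):
    """Difference between a segment's item set (union measure) and its time points."""
    union = set().union(*(item_sets[t] for t in range(seg[0], seg[1] + 1)))
    return sum(len(union - item_sets[t]) for t in range(seg[0], seg[1] + 1))

def optimal_segmentation(item_sets, k):
    """DP: split time points 0..n-1 into k segments, minimizing total difference."""
    n = len(item_sets)
    INF = float("inf")
    cost = [[segment_difference(item_sets, (i, j)) if i <= j else INF
             for j in range(n)] for i in range(n)]
    # best[j][m] = minimal cost of segmenting the first j points into m segments
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for j in range(1, n + 1):
        for m in range(1, min(j, k) + 1):
            for i in range(m - 1, j):          # last segment covers points i..j-1
                c = best[i][m - 1] + cost[i][j - 1]
                if c < best[j][m]:
                    best[j][m], back[j][m] = c, i
    segs, j = [], n                            # recover the segment boundaries
    for m in range(k, 0, -1):
        i = back[j][m]
        segs.append((i, j - 1))
        j = i
    return segs[::-1], best[n][k]

# Toy item-set time series with a clear change after index 2:
series = [{"a"}, {"a"}, {"a", "b"}, {"c"}, {"c", "d"}, {"c"}]
segs, total = optimal_segmentation(series, 2)
```

The DP places the boundary at the change point, returning segments (0, 2) and (3, 5).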
3D interactive topology optimization on hand-held devices
DEFF Research Database (Denmark)
Nobel-Jørgensen, Morten; Aage, Niels; Christiansen, Asger Nyman
2015-01-01
This educational paper describes the implementation aspects, user interface design considerations and workflow potential of the recently published TopOpt 3D App. The app solves the standard minimum compliance problem in 3D and allows the user to change design settings interactively at any point...... in time during the optimization. Apart from its educational nature, the app may point towards future ways of performing industrial design. Instead of the usual geometrize, then model and optimize approach, the geometry now automatically adapts to the varying boundary and loading conditions. The app...
Reliability-Based Optimization of Structural Elements
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
In this paper, structural elements are considered from an optimization point of view, i.e. only the geometry of a structural element is optimized. Reliability modelling of the structural element is discussed both from an element point of view and from a system point of view. The optimization...
Design Optimization of Time- and Cost-Constrained Fault-Tolerant Distributed Embedded Systems
DEFF Research Database (Denmark)
Izosimov, Viacheslav; Pop, Paul; Eles, Petru
2005-01-01
In this paper we present an approach to the design optimization of fault-tolerant embedded systems for safety-critical applications. Processes are statically scheduled and communications are performed using the time-triggered protocol. We use process re-execution and replication for tolerating...... transient faults. Our design optimization approach decides the mapping of processes to processors and the assignment of fault-tolerant policies to processes such that transient faults are tolerated and the timing constraints of the application are satisfied. We present several heuristics which are able...
The use of linear programming in optimization of HDR implant dose distributions
International Nuclear Information System (INIS)
Jozsef, Gabor; Streeter, Oscar E.; Astrahan, Melvin A.
2003-01-01
The introduction of high dose rate brachytherapy enabled optimization of dose distributions to be used on a routine basis. The objective of optimization is to homogenize the dose distribution within the implant while simultaneously satisfying dose constraints at certain points. This is accomplished by varying the time the source dwells at different locations. As the dose at any point is a linear function of the dwell times, a linear programming approach seems a natural choice. The dose constraints are inherently linear inequalities. Homogeneity requirements are linearized by minimizing the maximum deviation of the doses at points inside the implant from a prescribed dose. The revised simplex method was applied for the solution of this linear programming problem. In the homogenization process, the possible source locations were chosen as optimization points. To avoid the singularity of the dose at a source location due to the source itself, we define the 'self-contribution' as the dose at a small distance from the source. The effect of varying this distance is discussed. Test cases were optimized for planar, biplanar and cylindrical implants. A semi-irregular, fan-like implant with diverging needles was also investigated. Mean central dose calculation based on 3D Delaunay triangulation of the source locations was used to evaluate the dose distributions. The optimization method resulted in homogeneous distributions (for brachytherapy). Additional dose constraints, when applied, were satisfied. The method is flexible enough to include other linear constraints such as the inclusion of the centroids of the Delaunay triangulation for homogenization, or limiting the maximum allowable dwell time
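The homogenization objective (the dose at each optimization point is a linear combination of the dwell times, and the maximum deviation from the prescribed dose is minimized) can be illustrated with a toy dose-rate matrix. A crude coordinate search stands in for the revised simplex method here, and the matrix and prescription are invented:

```python
def doses(A, t):
    """Dose at each point is a linear combination of dwell times (rows of A)."""
    return [sum(a * ti for a, ti in zip(row, t)) for row in A]

def max_deviation(A, t, rx):
    """Maximum deviation of the point doses from the prescribed dose rx."""
    return max(abs(d - rx) for d in doses(A, t))

def homogenize(A, rx, t0, step=0.5, iters=200):
    """Greedy coordinate search standing in for an LP (revised simplex) solve."""
    t = list(t0)
    best = max_deviation(A, t, rx)
    for _ in range(iters):
        improved = False
        for i in range(len(t)):
            for delta in (step, -step):
                cand = t[:]
                cand[i] = max(0.0, cand[i] + delta)  # dwell times are nonnegative
                dev = max_deviation(A, cand, rx)
                if dev < best:
                    t, best, improved = cand, dev, True
        if not improved:
            step /= 2.0  # refine the step once no single move helps
    return t, best

# Invented 3-point, 2-dwell-position dose-rate matrix (dose per unit dwell time):
A = [[2.0, 0.5], [1.0, 1.0], [0.5, 2.0]]
t_opt, dev = homogenize(A, rx=10.0, t0=[1.0, 1.0])
```

For this matrix the minimax deviation is about 10/9, and the search closes in on it; a real treatment plan would have many more points and dwell positions and would use a proper LP solver.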
OPTIMIZING TIME WINDOWS FOR MANAGING ARRIVALS OF EXPORT CONTAINERS AT CHINESE CONTAINER TERMINALS
DEFF Research Database (Denmark)
Chen, Gang; Yang, Zhongzhen
2009-01-01
...... window management programme that is widely used in Chinese terminals to facilitate the terminal operations and the truck delivery operations. Firstly, the arrangement of time windows is assumed to follow the principle of minimizing the transport costs. A cost function is defined that includes the cost of driver and truck waiting time, the cost of container cargo storage time, the truck idle cost and terminal yard fee. Secondly, to minimize the costs, a heuristic is developed based on a genetic algorithm to optimize the time window arrangement. The optimal solution involves the position and the length......
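A minimal sketch of a genetic algorithm over a time-window arrangement, assuming a toy cost that mixes truck waiting time with a yard fee growing with window length; the encoding (window start and length), the operators, and all numbers are invented stand-ins for the paper's heuristic:

```python
import random

def cost(start, length, arrivals):
    """Invented stand-in cost: trucks outside the window wait until its nearest
    edge, and longer windows incur a larger terminal yard fee."""
    end = start + length
    waiting = sum(min(abs(a - start), abs(a - end))
                  for a in arrivals if not (start <= a <= end))
    return waiting + 0.5 * length

def ga(arrivals, pop=30, gens=60, pm=0.3):
    rnd = random.Random(1)
    P = [(rnd.uniform(0, 24), rnd.uniform(1, 12)) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda w: cost(w[0], w[1], arrivals))
        elite = P[:pop // 2]                       # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rnd.sample(elite, 2)            # crossover: mix parents' genes
            child = [a[0] if rnd.random() < 0.5 else b[0],
                     a[1] if rnd.random() < 0.5 else b[1]]
            if rnd.random() < pm:                  # mutation: perturb one gene
                child[rnd.randrange(2)] *= rnd.uniform(0.8, 1.2)
            children.append((min(max(child[0], 0), 24), min(max(child[1], 1), 12)))
        P = elite + children
    best = min(P, key=lambda w: cost(w[0], w[1], arrivals))
    return best, cost(best[0], best[1], arrivals)

arrivals = [8.0, 8.5, 9.0, 9.5, 10.0]      # truck arrival hours, clustered at 8-10
(best_start, best_len), best_cost = ga(arrivals)
```

With arrivals clustered between 8:00 and 10:00, the GA shrinks the window toward that cluster, trading window length against waiting time.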
Cool down time optimization of the Stirling cooler
Xia, M.; Chen, X. P.; Li, H. Y.; Gan, Z. H.
2017-12-01
The cooling power is one of the most important performance parameters of a Stirling cooler. However, in some special fields the cool-down time is more important, and improving the cool-down time of a Stirling cooler is a great challenge. A new split Stirling linear cryogenic cooler, SCI09H, was designed in this study. A new linear motor structure is used in the compressor, and a machined spring is used in the expander. In order to reduce the cool-down time, the stainless-steel mesh of the regenerator was optimized. The weight of the cooler is 1.1 kg, the cool-down time to 80 K is 2 minutes at 296 K with a 250 J thermal mass, the cooling power is 1.1 W at 80 K, and the input power is 50 W.
Novel technique for prediction of time points for scheduling of multipurpose batch plants
CSIR Research Space (South Africa)
Seid, R
2012-01-01
Consequently, this avoids costly computational times due to iterations. In the model by Majozi and Zhu (2001), the sequence constraint pertaining to tasks that consume and produce the same state requires that the starting time of the consuming task at time point p must...
Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.
Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman
2017-10-18
Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data are sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates in a horizontal direction, based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.
Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long
2018-01-01
With the development of satellite load technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of the on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to the computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, float-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km-swath-width, 5-m-resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of one single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637
Springback effects during single point incremental forming: Optimization of the tool path
Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick
2018-05-01
Incremental sheet forming is an emerging process to manufacture sheet metal parts. It is more flexible than conventional processes and well suited for small batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures or serial robots can be used to perform the forming operation. Whatever machine is considered, large deviations between the theoretical shape and the real shape can be observed after the part is unclamped. These deviations are due to both the lack of stiffness of the machine and residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy for the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process, allowing the shape of the formed part to be predicted with good accuracy, is defined. This model, based on appropriate assumptions, leads to calculation times that remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.
Sankar Sana, Shib
2016-01-01
The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study the production lot size/order quantity, the reorder point, and the sales teams' initiatives, where demand of the end customers depends simultaneously on a random variable and the sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of the production lot size. In the chain, the cost of the sales teams' initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at the points where their optimum profits come nearest to their target profits. This study suggests how the management of firms can determine the optimal order quantity/production quantity, reorder point, and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to offer more insights into the model.
Directory of Open Access Journals (Sweden)
Jason Jiunshiou Lee
Adolescent obesity has increased to alarming proportions globally. However, few studies have investigated the optimal waist circumference (WC) of Asian adolescents. This study sought to establish the optimal WC cutoff points that identify a cluster of cardiovascular risk factors (CVRFs) among 15-year-old ethnically Chinese adolescents. This study was a regional population-based study of the CVRFs among adolescents enrolled in all the senior high schools in Taipei City, Taiwan, between 2011 and 2014. Four cross-sectional health examinations of first-year senior high school (grade 10) students were conducted from September to December of each year. A total of 124,643 adolescents aged 15 (boys: 63,654; girls: 60,989) were recruited. Participants who had at least three of five CVRFs were classified as the high-risk group. We used receiver-operating characteristic curves and the area under the curve (AUC) to determine the optimal WC cutoff points and the accuracy of WC in predicting high cardiovascular risk. WC was a good predictor of high cardiovascular risk for both boys (AUC: 0.845, 95% confidence interval [CI]: 0.833-0.857) and girls (AUC: 0.763, 95% CI: 0.731-0.795). The optimal WC cutoff points were ≥78.9 cm for boys (77th percentile) and ≥70.7 cm for girls (77th percentile). Adolescents with normal weight and an abnormal WC were more likely to be in the high cardiovascular risk group (odds ratio: 3.70, 95% CI: 2.65-5.17) compared to their peers with normal weight and normal WC. The optimal WC cutoff point for identifying CVRFs in 15-year-old Taiwanese adolescents should be the 77th percentile; the 90th percentile of the WC might be inadequate. The high WC criteria can help health professionals identify a higher proportion of adolescents with cardiovascular risks and refer them for further evaluations and interventions. Adolescents' height, weight and WC should be measured as standard practice in routine health checkups.
Allen, G. H.; David, C. H.; Andreadis, K. M.; Emery, C. M.; Famiglietti, J. S.
2017-12-01
Earth observing satellites provide valuable near real-time (NRT) information about flood occurrence and magnitude worldwide. This NRT information can be used in early flood warning systems and other flood management applications to save lives and mitigate flood damage. However, these NRT products are only useful to early flood warning systems if they are quickly made available, with sufficient time for flood mitigation actions to be implemented. More specifically, NRT data latency, or the time period between the satellite observation and when the user has access to the information, must be less than the time it takes a flood to travel from the flood observation location to a given downstream point of interest. Yet the paradigm that "lower latency is always better" may not necessarily hold true in river systems due to tradeoffs between data latency and data quality. Further, the existence of statistical breaks in the global distribution of flood wave travel time (i.e. a jagged statistical distribution) would represent preferable latencies for river-observation NRT remote sensing products. Here we present a global analysis of flood wave velocity (i.e. flow celerity) and travel time. We apply a simple kinematic wave model to a global hydrography dataset and calculate flow wave celerity and travel time during bankfull flow conditions. Bankfull flow corresponds to the condition of maximum celerity and thus we present the "worst-case scenario" minimum flow wave travel time. We conduct a similar analysis with respect to the time it takes flood waves to reach the next downstream city, as well as the next downstream reservoir. Finally, we conduct these same analyses, but with regards to the technical capabilities of the planned Surface Water and Ocean Topography (SWOT) satellite mission, which is anticipated to provide waterbody elevation and extent measurements at an unprecedented spatial and temporal resolution. We validate these results with discharge records from paired
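Under kinematic-wave assumptions with Manning's equation for a wide channel, the flow wave celerity is 5/3 of the mean velocity, so a minimum travel time to a downstream point of interest follows directly. The channel parameters below are invented for illustration, not taken from the global hydrography dataset:

```python
def manning_velocity(n, depth, slope):
    """Mean velocity for a wide rectangular channel (hydraulic radius ≈ depth)."""
    return (1.0 / n) * depth ** (2.0 / 3.0) * slope ** 0.5

def kinematic_celerity(v):
    """Kinematic wave celerity c = dQ/dA = (5/3) v for Manning flow, wide channel."""
    return (5.0 / 3.0) * v

# Illustrative bankfull reach: n = 0.035, depth 3 m, slope 2e-4, 120 km to a city.
v = manning_velocity(0.035, 3.0, 2e-4)       # m/s
c = kinematic_celerity(v)                    # m/s, the "worst-case" fastest wave
travel_time_h = 120e3 / c / 3600.0           # hours of warning before arrival
```

Comparing this travel time against a product's data latency indicates whether the observation can still be acted on downstream.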
Energy Technology Data Exchange (ETDEWEB)
Aussagues, Ch
1998-12-11
This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command and control systems found in the nuclear domain or, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution lies mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator of synchronized product of state machine task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimal task mapping on a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property is valid. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling that is used to schedule tasks inside one processor. This leads us to define the synchronized product, from which a system of linear constraints is automatically generated, allowing calculation of the maximum load of a group of tasks and verification of their timeliness constraints. The communications, their timeliness verification, and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author) 96 refs.
Time optimized path-choice in the termite hunting ant Megaponera analis.
Frank, Erik T; Hönle, Philipp O; Linsenmair, K Eduard
2018-05-10
Trail network systems among ants have received a lot of scientific attention due to their various applications in problem solving of networks. Recent studies have shown that ants select the fastest available path when facing different velocities on different substrates, rather than the shortest distance. The progress of decision-making by these ants is determined by pheromone-based maintenance of paths, which is a collective decision. However, path optimization through individual decision-making remains mostly unexplored. Here we present the first study of time-optimized path selection via individual decision-making by scout ants. Megaponera analis scouts search for termite foraging sites and lead highly organized raid columns to them. The path of the scout determines the path of the column. Through installation of artificial roads around M. analis nests we were able to influence the pathway choice of the raids. After road installation 59% of all recorded raids took place completely or partly on the road, instead of the direct, i.e. distance-optimized, path through grass from the nest to the termites. The raid velocity on the road was more than double the grass velocity, the detour thus saved 34.77±23.01% of the travel time compared to a hypothetical direct path. The pathway choice of the ants was similar to a mathematical model of least time allowing us to hypothesize the underlying mechanisms regulating the behavior. Our results highlight the importance of individual decision-making in the foraging behavior of ants and show a new procedure of pathway optimization. © 2018. Published by The Company of Biologists Ltd.
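The least-time comparison behind the model can be reproduced with a toy geometry, assuming the reported more-than-double road-to-grass speed ratio; the coordinates and units are invented:

```python
import math

def travel_time(path, speeds):
    """Total time along a piecewise path given as (segment_length, substrate) pairs."""
    return sum(length / speeds[substrate] for length, substrate in path)

# Road speed a bit more than double the grass speed, as observed for M. analis raids.
speeds = {"grass": 1.0, "road": 2.2}                 # arbitrary units

# Invented geometry: nest at (0, 0) on a road along y = 0, termites at (10, 4).
direct = [(math.hypot(10.0, 4.0), "grass")]          # distance-optimized path
detour = [(10.0, "road"), (4.0, "grass")]            # follow the road, then cut across

t_direct = travel_time(direct, speeds)
t_detour = travel_time(detour, speeds)
saving = 1.0 - t_detour / t_direct                   # fraction of travel time saved
```

A scout truly minimizing time would also choose the road exit point by a Snell's-law-like tradeoff between the two speeds; the fixed exit above keeps the sketch short.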
Joint optimization of green vehicle scheduling and routing problem with time-varying speeds
Zhang, Dezhi; Wang, Xin; Ni, Nan; Zhang, Zhuo
2018-01-01
Based on an analysis of the congestion effect and changes in the speed of vehicle flow during morning and evening peaks in a large- or medium-sized city, a piecewise function is used to capture the rules of the time-varying speed of vehicles, which are very important in modelling their fuel consumption and CO2 emissions. A joint optimization model of the green vehicle scheduling and routing problem with time-varying speeds is presented in this study. Extra wages during nonworking periods and soft time-window constraints are considered. A heuristic algorithm based on the adaptive large neighborhood search algorithm is also presented. Finally, a numerical simulation example is provided to illustrate the optimization model and its algorithm. Results show that (1) the shortest route is not necessarily the route that consumes the least energy, (2) the departure time influences vehicle fuel consumption and CO2 emissions, and the optimal departure time saves fuel and reduces CO2 emissions by up to 5.4%, and (3) extra driver wages have significant effects on routing and departure time slot decisions. PMID:29466370
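The piecewise time-varying speed idea can be sketched as follows; the speed profile, link length, and departure times are invented, and the simple time-stepping loop stands in for the paper's formulation:

```python
# Piecewise-constant speed profile (km/h) over the day: peaks slow traffic down.
PROFILE = [(0, 7, 60), (7, 9, 25), (9, 17, 50), (17, 19, 25), (19, 24, 60)]

def speed_at(t):
    """Speed in effect at hour-of-day t (wraps around midnight)."""
    for start, end, v in PROFILE:
        if start <= t % 24 < end:
            return v
    return 60

def travel_time(depart, dist_km, dt=0.01):
    """Step through time, accumulating distance at the current period's speed."""
    t, covered = depart, 0.0
    while covered < dist_km:
        covered += speed_at(t) * dt
        t += dt
    return t - depart

early = travel_time(6.9, 40.0)   # departs just before the morning peak
late = travel_time(9.0, 40.0)    # departs after the peak
```

Departing shortly before the peak traps most of the trip in slow traffic, while the later departure covers the same 40 km much faster, which is the effect the departure-time decision exploits.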
Modified mean generation time parameter in the neutron point kinetics equations
Energy Technology Data Exchange (ETDEWEB)
Diniz, Rodrigo C.; Gonçalves, Alessandro C.; Rosa, Felipe S.S., E-mail: alessandro@nuclear.ufrj.br, E-mail: frosa@if.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)
2017-07-01
This paper proposes an approximation for the modified point kinetics equations proposed by Nunes et al. (2015), through the adjustment of a kinetic parameter. The approximation consists of analyzing the terms of the modified point kinetics equations in order to identify the least important ones for the solution, resulting in a modification of the mean generation time parameter that incorporates all influences of the additional terms of the modified kinetics. The approximation is applied to the inverse kinetics, and the results are compared with the inverse kinetics from the modified kinetics in order to validate the proposed model. (author)
Time regimes optimization of the activation-measurement cycle in neutron activation analysis
International Nuclear Information System (INIS)
Szopa, Z.
1986-01-01
Criteria for the optimum time conditions of the activation-measurement cycle in neutron activation analysis have been formulated. The optimized functions, i.e. the relative precision or the 'figure of merit' of the analytical signal, expressed as functions of the cycle time parameters, have been proposed. The structure and capabilities of the optimizing programme STOPRC are presented. The programme is written entirely in FORTRAN and takes advantage of a library of standard spectra and fast stochastic algorithms. The time conditions predicted with the aid of the programme are discussed and compared with experimental results for the determination of tungsten in industrial dusts. 31 refs., 4 figs. (author)
Deconinck, F.; van Polanen, V.; Savelsbergh, G.J.P.; Bennett, S.
2011-01-01
The present study examined the effect of timing constraints and advance knowledge on eye-hand coordination strategy in a sequential pointing task. Participants were required to point at two successively appearing targets on a screen while the inter-stimulus interval (ISI) and the trial order were
Energy Technology Data Exchange (ETDEWEB)
Larbes, C.; Ait Cheikh, S.M.; Obeidi, T.; Zerguerras, A. [Laboratoire des Dispositifs de Communication et de Conversion Photovoltaique, Departement d' Electronique, Ecole Nationale Polytechnique, 10, Avenue Hassen Badi, El Harrach, Alger 16200 (Algeria)
2009-10-15
This paper presents an intelligent control method for maximum power point tracking (MPPT) of a photovoltaic system under variable temperature and irradiance conditions. First, for the purpose of comparison and because of its proven, good performance, the perturbation and observation (P&O) technique is briefly introduced. A fuzzy logic controller based MPPT (FLC) is then proposed, which shows better performance than the P&O-based MPPT approach. The proposed FLC is further improved using genetic algorithms (GA) for optimisation. The different development stages are presented, and the optimized fuzzy logic MPPT controller (OFLC) is then simulated and evaluated, showing better performance. (author)
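The P&O baseline the abstract compares against can be sketched in a few lines: perturb the operating voltage, observe the power, and keep the perturbation direction while power increases. The power-voltage curve below is a crude invented model, not a simulated panel:

```python
def pv_power(v):
    """Crude invented power-voltage curve: Voc = 21 V, Isc = 3 A, peak near 16 V."""
    i = 3.0 * (1.0 - (v / 21.0) ** 8)
    return v * max(i, 0.0)

def po_mppt(p, v0=12.0, dv=0.2, steps=200):
    """Perturb & observe: keep stepping in one direction while power increases."""
    v, direction = v0, 1
    last_p = p(v)
    for _ in range(steps):
        v += direction * dv
        new_p = p(v)
        if new_p < last_p:
            direction = -direction   # power dropped, so reverse the perturbation
        last_p = new_p
    return v, last_p

v_mpp, p_mpp = po_mppt(pv_power)     # settles into an oscillation around the MPP
```

The steady-state oscillation around the maximum power point visible here is exactly the behavior a fuzzy logic controller improves on, by shrinking the step as the operating point nears the peak.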
Portable Dew Point Mass Spectrometry System for Real-Time Gas and Moisture Analysis
Arkin, C.; Gillespie, Stacey; Ratzel, Christopher
2010-01-01
A portable instrument incorporates both mass spectrometry and dew point measurement to provide real-time, quantitative gas measurements of helium, nitrogen, oxygen, argon, and carbon dioxide, along with real-time, quantitative moisture analysis. The Portable Dew Point Mass Spectrometry (PDP-MS) system comprises a single quadrupole mass spectrometer and a high vacuum system consisting of a turbopump and a diaphragm-backing pump. A capacitive membrane dew point sensor was placed upstream of the MS, but still within the pressure-flow control pneumatic region. Pressure-flow control was achieved with an upstream precision metering valve, a capacitance diaphragm gauge, and a downstream mass flow controller. User configurable LabVIEW software was developed to provide real-time concentration data for the MS, dew point monitor, and sample delivery system pressure control, pressure and flow monitoring, and recording. The system has been designed to include in situ, NIST-traceable calibration. Certain sample tubing retains sufficient water that even if the sample is dry, the sample tube will desorb water to an amount resulting in moisture concentration errors up to 500 ppm for as long as 10 minutes. It was determined that Bev-A-Line IV was the best sample line to use. As a result of this issue, it is prudent to add a high-level humidity sensor to PDP-MS so such events can be prevented in the future.
Optimization of time and location dependent spent nuclear fuel storage capacity
International Nuclear Information System (INIS)
Macek, V.
1977-01-01
A linear spent fuel storage model is developed to identify cost-effective spent nuclear fuel storage strategies. The purpose of this model is to provide guidelines for the implementation of the optimal time-dependent spent fuel storage capacity expansion in view of the current economic and regulatory environment which has resulted in phase-out of the closed nuclear fuel cycle. Management alternatives of the spent fuel storage backlog, which is created by mismatch between spent fuel generation rate and spent fuel disposition capability, are represented by aggregate decision variables which describe the time dependent on-reactor-site and off-site spent fuel storage capacity additions, and the amount of spent fuel transferred to off-site storage facilities. Principal constraints of the model assure determination of cost optimal spent fuel storage expansion strategies, while spent fuel storage requirements are met at all times. A detailed physical and economic analysis of the essential components of the spent fuel storage problem, which precedes the model development, assures its realism. The effects of technological limitations on the on-site spent fuel storage expansion and timing of reinitiation of the spent fuel reprocessing on optimal spent fuel storage capacity expansion are investigated. The principal results of the study indicate that (a) expansion of storage capacity beyond that of currently planned facilities is necessary, and (b) economics of the post-reactor fuel cycle is extremely sensitive to the timing of reinitiation of spent fuel reprocessing. Postponement of reprocessing beyond mid-1982 may result in net negative economic liability of the back end of the nuclear fuel cycle
A two-layer recurrent neural network for nonsmooth convex optimization problems.
Qin, Sitian; Xue, Xiaoping
2015-06-01
In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.
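In discrete time, the network's behavior on an L1-norm problem can be mimicked by a projected subgradient iteration: a subgradient step on the L1 norm followed by projection onto the equality feasible region. This is a sketch of the underlying optimization, not the paper's continuous-time network dynamics, and the starting point and constraint are invented:

```python
def sign(x):
    return [(v > 0.0) - (v < 0.0) for v in x]       # a subgradient of the L1 norm

def project(x, target):
    """Project onto the equality feasible region {x : sum(x) = target}."""
    shift = (sum(x) - target) / len(x)
    return [v - shift for v in x]

def l1_min(x0, target=1.0, iters=500):
    """Projected subgradient iteration with diminishing steps 1/k."""
    x = project(x0, target)                         # reach feasibility first
    for k in range(1, iters + 1):
        g = sign(x)
        x = project([xi - gi / k for xi, gi in zip(x, g)], target)
    return x

x = l1_min([2.0, -1.0, 0.0, 0.0])
l1 = sum(abs(v) for v in x)          # approaches the optimal value 1.0
```

As in the paper's analysis, the iterate stays in the equality feasible region once it reaches it, and the objective converges to the constrained minimum (here, L1 norm 1 for sum(x) = 1).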
Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu
2017-12-01
In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) The demand rate is deterministic and two-staged, i.e., it is constant in the first part of the cycle and a linear function of time in the second part. (ii) The deterioration rate is time-proportional. (iii) Shortages are not allowed to occur. The optimal cycle time and the optimal order quantity have been derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and a sensitivity analysis of various parameters as illustrations of the theoretical results.
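For orientation, the classical constant-demand EOQ trade-off that this model generalizes can be sketched as follows. This is the textbook formula only; the paper's two-staged demand and time-proportional deterioration are omitted, and the parameter values are hypothetical.

```python
import math

# Classical EOQ: D = annual demand, K = fixed ordering cost per order,
# h = holding cost per unit per year.
def eoq(D, K, h):
    return math.sqrt(2.0 * D * K / h)   # order quantity minimizing total cost

def total_cost(Q, D, K, h):
    # ordering cost per year + average holding cost per year
    return D / Q * K + h * Q / 2.0
```

At the optimum the two cost components balance, so perturbing Q in either direction raises the total average cost.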
Puelacher, Christian; Wagener, Max; Abächerli, Roger; Honegger, Ursina; Lhasam, Nundsin; Schaerli, Nicolas; Prêtre, Gil; Strebel, Ivo; Twerenbold, Raphael; Boeddinghaus, Jasper; Nestelberger, Thomas; Rubini Giménez, Maria; Hillinger, Petra; Wildi, Karin; Sabti, Zaid; Badertscher, Patrick; Cupa, Janosch; Kozhuharov, Nikola; du Fay de Lavallaz, Jeanne; Freese, Michael; Roux, Isabelle; Lohrmann, Jens; Leber, Remo; Osswald, Stefan; Wild, Damian; Zellweger, Michael J; Mueller, Christian; Reichlin, Tobias
2017-07-01
Exercise ECG stress testing is the most widely available method for evaluation of patients with suspected myocardial ischemia. Its major limitation is the relatively poor accuracy of ST-segment changes regarding ischemia detection. Little is known about the optimal method to assess ST-deviations. A total of 1558 consecutive patients undergoing bicycle exercise stress myocardial perfusion imaging (MPI) were enrolled. Presence of inducible myocardial ischemia was adjudicated using MPI results. The diagnostic value of ST-deviations for detection of exercise-induced myocardial ischemia was systematically analyzed 1) for each individual lead, 2) at three different intervals after the J-point (J+40ms, J+60ms, J+80ms), and 3) at different time points during the test (baseline, maximal workload, 2min into recovery). Exercise-induced ischemia was detected in 481 (31%) patients. The diagnostic accuracy of ST-deviations was highest at +80ms after the J-point, and at 2min into recovery. At this point, ST-amplitude showed an AUC of 0.63 (95% CI 0.59-0.66) for the best-performing lead I. The combination of ST-amplitude and ST-slope in lead I did not increase the AUC. Lead I reached a sensitivity of 37% and a specificity of 83%, with similar sensitivity to manual ECG analysis (34%, p=0.31) but lower specificity (90%, p<...). The diagnostic value of ST-deviations is highest when evaluated at +80ms after the J-point, and at 2min into recovery.
Design of an optimization algorithm for clinical use
International Nuclear Information System (INIS)
Gustafsson, Anders
1995-01-01
Radiation therapy optimization has received much attention in the past few years. In combination with biological objective functions, the different optimization schemes have shown a potential to considerably improve the treatment outcome. With improved radiobiological models and increased computer capacity, radiation therapy optimization has now reached a stage where implementation in a clinical treatment planning system is realistic. A radiation therapy optimization method has been investigated with respect to its feasibility as a tool in a clinical 3D treatment planning system. The optimization algorithm is a constrained iterative gradient method. Photon dose calculation is performed using the clinically validated pencil-beam based algorithm of the clinical treatment planning system. Dose calculation within the optimization scheme is very time consuming and measures are required to decrease the calculation time. Different methods for more effective dose calculation within the optimization scheme have been investigated. The optimization results for adaptive sampling of calculation points and secondary effect approximations in the dose calculation algorithm are compared with the optimization result for accurate dose calculation in all voxels of interest.
Optimal management of non-Markovian biological populations
Williams, B.K.
2007-01-01
Wildlife populations typically are described by Markovian models, with population dynamics influenced at each point in time by current but not previous population levels. Considerable work has been done on identifying optimal management strategies under the Markovian assumption. In this paper we generalize this work to non-Markovian systems, for which population responses to management are influenced by lagged as well as current status and/or controls. We use the maximum principle of optimal control theory to derive conditions for the optimal management of such a system, and illustrate the effects of lags on the structure of optimal habitat strategies for a predator-prey system.
An Efficient Algorithm for the Optimal Market Timing over Two Stocks
Institute of Scientific and Technical Information of China (English)
Hui Li; Hong-zhi An; Guo-fu Wu
2004-01-01
In this paper, the optimal trading strategy in timing the market by switching between two stocks is given. In order to deal with a large sample size within a fast turnaround computation time, we propose a class of recursive algorithms. A simulation is given to verify the effectiveness of our method.
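The flavor of such a recursive computation can be sketched with a simple dynamic program for hindsight-optimal switching between two stocks under a proportional switching cost. This is a hypothetical illustration of the recursion idea, not the authors' algorithm; the return series and cost factor are invented.

```python
# Dynamic program over per-period gross returns r0, r1 of two stocks.
# v[k] = best achievable wealth while currently holding stock k; at each
# period we either stay or pay a proportional cost to switch, then earn
# the held stock's return.
def best_switching(r0, r1, cost=0.99):
    v = [1.0, 1.0]                      # start with unit wealth in either stock
    for a, b in zip(r0, r1):
        v = [max(v[0], v[1] * cost) * a,    # end holding stock 0
             max(v[1], v[0] * cost) * b]    # end holding stock 1
    return max(v)
```

For returns r0 = (1.1, 0.9) and r1 = (0.9, 1.2), holding stock 0 first and then switching yields 1.1 x 0.99 x 1.2, beating either buy-and-hold strategy.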
Regis, Rommel G.
2014-02-01
This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
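The two-phase structure described above (first find a feasible point, then improve it) can be illustrated with a deliberately simplified sketch that replaces the RBF surrogates with plain stochastic local search. The objective and constraint below are cheap hypothetical stand-ins for expensive black boxes, so this shows only the phase logic, not the surrogate machinery.

```python
import random

# Phase 1 reduces the constraint violation max(g, 0) until a feasible point
# is found; phase 2 improves the objective while remaining feasible.
def two_phase(f, g, dim, budget=2000, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(-2.0, 2.0) for _ in range(dim)]
    for _ in range(budget):                      # phase 1: feasibility
        if g(x) <= 0.0:
            break
        y = [xi + rng.gauss(0.0, 0.3) for xi in x]
        if max(g(y), 0.0) < max(g(x), 0.0):
            x = y
    for _ in range(budget):                      # phase 2: improvement
        y = [xi + rng.gauss(0.0, 0.3) for xi in x]
        if g(y) <= 0.0 and f(y) < f(x):
            x = y
    return x
```

With a convex toy problem whose unconstrained minimizer is feasible, the returned point is feasible and close to the optimum.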
Real-time process optimization based on grey-box neural models
Directory of Open Access Journals (Sweden)
F. A. Cubillos
2007-09-01
This paper investigates the feasibility of using grey-box neural models (GNM) in Real Time Optimization (RTO). These models are based on a suitable combination of fundamental conservation laws and neural networks, and are used in at least two different ways: to complement available phenomenological knowledge with empirical information, or to reduce the dimensionality of complex rigorous physical models. We have observed that the benefits of using these simple adaptable models are counteracted by some difficulties associated with the solution of the optimization problem. Nonlinear Programming (NLP) algorithms failed to find the global optimum because neural networks can introduce multimodal objective functions. One alternative considered to solve this problem was the use of evolutionary algorithms, such as Genetic Algorithms (GA). Although these algorithms produced better results in terms of finding the appropriate region, they took long periods of time to reach the global optimum. It was found that a combination of genetic and nonlinear programming algorithms can be used to obtain the optimum solution quickly. The proposed approach was applied to the Williams-Otto reactor, considering three different GNM models of increasing complexity. Results demonstrated that the use of GNM models and mixed GA/NLP optimization algorithms is a promising approach for solving dynamic RTO problems.
Dynamic Value Engineering Method Optimizing the Risk on Real Time Operating System
Directory of Open Access Journals (Sweden)
Prashant Kumar Patra
2014-04-01
Value engineering is the umbrella for many subsystems such as quality assurance, quality control, quality function design, and design for manufacturability. Systems engineering and value engineering are two sides of the same coin. Value engineering is a high-level technology-management discipline relevant to every engineering field: the high utilization of system products (i.e., processor, memory and encryption keys), services, business and resources at minimal cost. A high-end operating system provides the highest level of service at optimal cost and time. Value engineering maximizes the performance, accountability, reliability, integrity and availability of the processor, memory, encryption keys and other interdependent subcomponents. It is the ratio of the maximum functionality of the individual components to the optimal cost: VE = k(P, M, E, C, A)/optimal cost, where k is a proportionality constant. VE is directly proportional to the performance of the individual components and inversely proportional to the cost; it is also directly proportional to the risk assessment. Value engineering maximizes business throughput and the decision process while minimizing risk and downtime. We develop a dynamic value-engineering model and mechanism for risk optimization over a complex real-time operating system; the proposed composite model is intended to meet this objective at a high level.
SIAM conference on optimization
Energy Technology Data Exchange (ETDEWEB)
1992-05-10
Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.
Space and time optimization of nuclear reactors by means of the Pontryagin principle
International Nuclear Information System (INIS)
Anton, V.
1979-01-01
A numerical method is presented for solving space-dependent optimization problems concerning a functional for one-dimensional geometries in the few-group diffusion approximation. General dimensional analysis was applied to derive relations for the maximum of a functional and the limiting values of the constraints. Two procedures were given for calculating the anisotropic diffusion coefficients in order to improve the results of the diffusion approximation. In this work two procedures were presented for collapsing the microscopic multigroup cross sections, one general and another specific to the space-dependent optimization problems solved by means of the Pontryagin maximum principle. Neutron spectrum optimization is performed to ensure the burnup of the Pu-239 isotope produced in a thermal nuclear reactor. A procedure is also given for the minimization of a finite functional set by means of the Pontryagin maximum principle. A method for determining the characteristics of fission pseudo-products is formulated in the one-group and multigroup cases. This method is applied in the optimization of the burnup in nuclear reactors with fuel electric cells. A procedure to minimize the number of fuel burnup equations is described. The optimization problems presented and solved in this work point to the efficiency of the maximum principle. Each problem or method presented in the various chapters is accompanied by considerations concerning dual problems and possibilities of further research development. (author)
Analyzing survival curves at a fixed point in time for paired and clustered right-censored data
Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De
2018-01-01
In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
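For the unpaired single-sample setting that the Klein et al. tests build on, the Kaplan-Meier estimate of survival at a fixed time point can be sketched in a few lines of plain Python. The paired and clustered extensions of the paper additionally model within-pair and within-cluster correlation, which this sketch omits.

```python
# Kaplan-Meier estimate of S(t) for right-censored data.
# times: observed times; events: 1 = event occurred, 0 = censored.
def km_survival_at(times, events, t):
    s = 1.0
    for u in sorted(set(times)):
        if u > t:
            break
        at_risk = sum(1 for x in times if x >= u)
        deaths = sum(1 for x, e in zip(times, events) if x == u and e == 1)
        if at_risk > 0:
            s *= 1.0 - deaths / at_risk     # multiply the conditional survival
    return s
```

For times (1, 2, 3) with events (1, 1, censored), the estimate at t = 2 is (2/3) x (1/2) = 1/3.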
Ramdas, Wishal D.; Rizopoulos, Dimitris; Wolfs, Roger C. W.; Hofman, Albert; de Jong, Paulus T. V. M.; Vingerling, Johannes R.; Jansonius, Nomdo M.
2011-01-01
Purpose: Diseases characterized by a continuous trait can be defined by setting a cut-off point for the disease measure in question, accepting some misclassification. The 97.5th percentile is commonly used as a cut-off point. However, it is unclear whether this percentile is the optimal cut-off
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. Approximation errors are explicitly considered in the GPI algorithm for the first time. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
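The qualitative claim that bounded approximation errors confine the iterative value function to a finite neighborhood of the optimum can be illustrated on a toy problem. The two-state MDP and the injected uniform noise below are hypothetical, and plain value iteration stands in for the paper's GPI scheme.

```python
import random

# Value iteration with an injected per-iteration approximation error of
# magnitude at most eps. With eps = 0 the iterate converges to V*; with
# eps > 0 it stays within roughly eps / (1 - gamma) of V*.
def value_iteration(P, R, gamma=0.9, eps=0.0, iters=500, seed=0):
    rng = random.Random(seed)
    V = [0.0] * len(R)
    for _ in range(iters):
        V = [max(R[s][a] + gamma * sum(P[s][a][s2] * V[s2]
                                       for s2 in range(len(V)))
                 for a in range(len(R[s]))) + rng.uniform(-eps, eps)
             for s in range(len(R))]
    return V
```

For an MDP where action 0 always pays reward 1 and returns to state 0, the optimal value is 1/(1 - 0.9) = 10 in both states; the noisy iterate remains within the predicted neighborhood.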
Optimal Investment Timing and Size of a Logistics Park: A Real Options Perspective
Directory of Open Access Journals (Sweden)
Dezhi Zhang
2017-01-01
This paper uses a real options approach to address the optimal timing and size of a logistics park investment under logistics demand volatility. Two important problems are examined: when should an investment be introduced, and what size should it be? A real option model is proposed to explicitly incorporate the effect of government subsidies on logistics park investment. The logistics demand threshold that triggers investment in a logistics park project is explored analytically. Comparative static analyses of logistics park investment are also carried out. Our analytical results show that (1) investors will select smaller logistics parks and bring the investment forward if government subsidies are considered; (2) the real option will postpone the optimal investment timing of logistics parks compared with the net present value approach; and (3) logistics demand can significantly affect the optimal investment size and timing.
On test and maintenance: Optimization of allowed outage time
International Nuclear Information System (INIS)
Mavko, B.; Cepin, M.T.
2000-01-01
Probabilistic Safety Assessment is widely becoming a standard method for assessing, maintaining, assuring and improving nuclear power plant safety. To achieve one of its many potential benefits, the optimization of the allowed outage time specified in technical specifications is investigated. A risk comparison approach for the evaluation of allowed outage time is proposed. The risk of shutting the plant down due to failure of certain equipment is compared to the risk of continued plant operation with the specified equipment down. The core damage frequency serves as the risk measure. (author)
Reliable Rescue Routing Optimization for Urban Emergency Logistics under Travel Time Uncertainty
Directory of Open Access Journals (Sweden)
Qiuping Li
2018-02-01
The reliability of rescue routes is critical for urban emergency logistics during disasters. However, studies on reliable rescue routing under stochastic networks are still rare. This paper proposes a multiobjective rescue routing model for urban emergency logistics under travel time reliability. A hybrid metaheuristic integrating ant colony optimization (ACO) and tabu search (TS) was designed to solve the model. An experiment optimizing rescue routing plans under a real urban storm event was carried out to validate the proposed model. The experimental results showed how our approach can improve rescue efficiency with high travel time reliability.
Improving multi-GNSS ultra-rapid orbit determination for real-time precise point positioning
Li, Xingxing; Chen, Xinghan; Ge, Maorong; Schuh, Harald
2018-03-01
Currently, with the rapid development of multi-constellation Global Navigation Satellite Systems (GNSS), real-time positioning and navigation are undergoing dramatic changes with the potential for better performance. Providing more precise and reliable ultra-rapid orbits is critical for multi-GNSS real-time positioning, especially for the three emerging constellations Beidou, Galileo and QZSS, which are still under construction. In this contribution, we present a five-system precise orbit determination (POD) strategy to fully exploit the GPS + GLONASS + BDS + Galileo + QZSS observations from the CDDIS + IGN + BKG archives for the realization of an hourly five-constellation ultra-rapid orbit update. After adopting the optimized 2-day POD solution (updated every hour), the predicted orbit accuracy can be clearly improved for all five satellite systems in comparison to the conventional 1-day POD solution (updated every 3 h). The orbit accuracy for the BDS IGSO satellites can be improved by about 80, 45 and 50% in the radial, cross and along directions, respectively, while the corresponding accuracy improvement for the BDS MEO satellites reaches about 50, 20 and 50% in the three directions, respectively. Furthermore, multi-GNSS real-time precise point positioning (PPP) ambiguity resolution has been performed using the improved precise satellite orbits. Numerous results indicate that combined GPS + BDS + GLONASS + Galileo (GCRE) kinematic PPP ambiguity resolution (AR) solutions can achieve the shortest time to first fix (TTFF) and the highest positioning accuracy in all coordinate components. With the addition of the BDS, GLONASS and Galileo observations to the GPS-only processing, the GCRE PPP AR solution achieves the shortest average TTFF of 11 min with a 7° cutoff elevation, while the TTFF of the GPS-only, GR, GE and GC PPP AR solutions is 28, 15, 20 and 17 min, respectively. As the cutoff elevation increases, the reliability and accuracy of GPS-only PPP AR solutions
Developing an optimal valve closing rule curve for real-time pressure control in pipes
Energy Technology Data Exchange (ETDEWEB)
Bazarganlari, Mohammad Reza; Afshar, Hossein [Islamic Azad University, Tehran (Iran, Islamic Republic of); Kerachian, Reza [University of Tehran, Tehran (Iran, Islamic Republic of); Bashiazghadi, Seyyed Nasser [Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)
2013-01-15
Sudden valve closure in pipeline systems can cause high pressures that may lead to serious damage. Using an optimal valve closing rule can play an important role in managing extreme pressures during sudden valve closure. In this paper, an optimal closing rule curve is developed using a multi-objective optimization model and Bayesian networks (BNs) for controlling water pressure during valve closure, instead of traditional step functions or single linear functions. The method of characteristics is used to simulate transient flow caused by valve closure. The non-dominated sorting genetic algorithm-II is also used to develop a Pareto front among three objectives related to the maximum and minimum water pressures and the amount of water that passes through the valve during the valve-closing process. Simulation and optimization processes are usually time-consuming, so the results of the optimization model are used to train the BN. The trained BN is capable of determining optimal real-time closing rules without running costly simulation and optimization models. To demonstrate its efficiency, the proposed methodology is applied to a reservoir-pipe-valve system and the optimal closing rule curve is calculated for the valve. The results of the linear and BN-based valve closure rules show that the latter can significantly reduce the range of variations in water hammer pressures.
Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems
Tobasco, Ian; Goluskin, David; Doering, Charles R.
2018-02-01
For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
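The core bounding argument can be stated compactly: along any bounded trajectory the time average of a total derivative vanishes, so adding it to the quantity of interest changes no time average but can flatten the pointwise supremum. The notation follows the abstract; this is the standard form of the auxiliary-function bound, written out here for concreteness.

```latex
% For \dot{x} = f(x) with bounded trajectories and any bounded, differentiable
% auxiliary function V, the long-time average of the total derivative
% \nabla V(x)\cdot f(x) along a trajectory is zero. Hence, for every such V,
\overline{\Phi}
  \;=\; \overline{\Phi + \nabla V \cdot f}
  \;\le\; \sup_{x}\,\bigl[\Phi(x) + \nabla V(x)\cdot f(x)\bigr],
\qquad\text{so}\qquad
\overline{\Phi}
  \;\le\; \inf_{V}\,\sup_{x}\,\bigl[\Phi(x) + \nabla V(x)\cdot f(x)\bigr].
```

The strong duality proved in the paper is the statement that the infimum on the right attains the largest time average on the left, which is why nearly minimal auxiliary functions localize the nearly maximal trajectories.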
Optimal post-warranty maintenance policy with repair time threshold for minimal repair
International Nuclear Information System (INIS)
Park, Minjae; Mun Jung, Ki; Park, Dong Ho
2013-01-01
In this paper, we consider a renewable minimal repair–replacement warranty policy and propose an optimal maintenance model after the warranty is expired. Such model adopts the repair time threshold during the warranty period and follows with a certain type of system maintenance policy during the post-warranty period. As for the criteria for optimality, we utilize the expected cost rate per unit time during the life cycle of the system, which has been frequently used in many existing maintenance models. Based on the cost structure defined for each failure of the system, we formulate the expected cost rate during the life cycle of the system, assuming that a renewable minimal repair–replacement warranty policy with the repair time threshold is provided to the user during the warranty period. Once the warranty is expired, the maintenance of the system is the user's sole responsibility. The life cycle of the system is defined on the perspective of the user and the expected cost rate per unit time is derived in this context. We obtain the optimal maintenance policy during the maintenance period following the expiration of the warranty period by minimizing such a cost rate. Numerical examples using actual failure data are presented to exemplify the applicability of the methodologies proposed in this paper.
Ahmet Demir; Utku Kose
2016-01-01
In fields which require finding the most appropriate value, optimization has become a vital approach for developing effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Sc...
Expanded GDoF-optimality Regime of Treating Interference as Noise in the $M\\times 2$ X-Channel
Gherekhloo, Soheil
2016-11-14
Treating interference as noise (TIN) as the most appropriate approach in dealing with interference, and the conditions for its optimality, has attracted the interest of researchers recently. However, our knowledge of the necessary and sufficient conditions for TIN is restricted to a few setups with a limited number of users. In this paper, we study the optimality of TIN in terms of the generalized degrees of freedom (GDoF) for a fundamental network, namely, the M×2 X-channel. To this end, the achievable GDoF of TIN with power allocation at the transmitters is studied. It turns out that the transmit power allocation maximizing the achievable GDoF is given by on-off signaling as long as the receivers use TIN. This leads to two variants of TIN, namely, P2P-TIN and 2-IC-TIN. While in the first variant the M×2 X-channel is reduced to a point-to-point (P2P) channel, in the second variant the setup is reduced to a two-user interference channel in which the receivers use TIN. The optimality of these two variants is studied separately. To this end, novel genie-aided upper bounds on the capacity of the X-channel are established. The conditions for the optimality of P2P-TIN can be summarized as follows: P2P-TIN is GDoF-optimal if there exists a dominant multiple access channel or a dominant broadcast channel embedded in the X-channel. Furthermore, the necessary and sufficient conditions for the GDoF-optimality of 2-IC-TIN are presented. Interestingly, it turns out that operating the M×2 X-channel in the 2-IC-TIN mode might still be GDoF-optimal even though the conditions given by Geng et al. are violated. However, 2-IC-TIN is sub-optimal if there exists a single interferer which causes sufficiently strong interference at both receivers. The comparison of the results with the state of the art shows that the GDoF-optimality regime of TIN is expanded significantly.
Fuzzy Multiobjective Traffic Light Signal Optimization
Directory of Open Access Journals (Sweden)
N. Shahsavari Pour
2013-01-01
Traffic congestion is a major concern for many cities throughout the world. In a general traffic light controller, the traffic lights change at a constant cycle time, which does not provide an optimal solution. Many traffic light controllers in current use are based on the "time-of-the-day" scheme, which uses a limited number of predetermined traffic light patterns and implements these patterns depending upon the time of the day. These automated systems do not provide optimal control for fluctuating traffic volumes. In this paper, a fuzzy traffic light controller is used to optimize the control of fluctuating traffic volumes such as oversaturated or unusual load conditions. The problem is solved by a genetic algorithm, and a new defuzzification method is introduced. The performance of the new defuzzification method (NDM) is compared with the centroid point defuzzification method (CPDM) by using ANOVA. Finally, an illustrative example is presented to show the competency of the proposed algorithm.
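The centroid point defuzzification (CPDM) that serves as the comparison baseline can be sketched by discretizing the membership function. This is a generic illustration with a hypothetical triangular membership; the paper's new defuzzification method (NDM) is not reproduced here.

```python
# Centroid defuzzification: the crisp output is the membership-weighted
# average of x over a discretized universe [lo, hi].
def centroid(mu, lo, hi, n=10001):
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    num = sum(x * mu(x) for x in xs)
    den = sum(mu(x) for x in xs)
    return num / den

def triangle(a, b, c):
    # Triangular membership rising from a to a peak at b, falling to c.
    def mu(x):
        if a < x <= b:
            return (x - a) / (b - a)
        if b < x < c:
            return (c - x) / (c - b)
        return 0.0
    return mu
```

For a symmetric triangle the centroid coincides with the peak, a quick sanity check on the discretization.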
Dynamical System Approaches to Combinatorial Optimization
DEFF Research Database (Denmark)
Starke, Jens
2013-01-01
Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods. The solution of the combinatorial optimization problem is obtained in the limit of large times as an asymptotically stable point of the dynamics. The obtained solutions are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing, and because of the parallelization they can be used as models for many industrial problems like manufacturing planning and optimization of flexible manufacturing systems. This is illustrated for an example in distributed robotic systems.
On the design of a radix-10 online floating-point multiplier
McIlhenny, Robert D.; Ercegovac, Milos D.
2009-08-01
This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up tables) as well as the cycle time and total latency. The routing delay, which was not optimized, is the major component of the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.
Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian
2017-08-01
Vision measurement on the basis of structured light plays a significant role in optical inspection research. A 2D target fixed with a line laser projector is designed to realize the transformations among the world coordinate system, the camera coordinate system and the image coordinate system. The laser projective point and five non-collinear points that are randomly selected from the target are adopted to construct a projection invariant. The closed-form solutions of the 3D laser points are solved from the homogeneous linear equations generated from the projection invariants. The optimization function is created from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Furthermore, the nonlinear optimization solutions for the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are obtained by minimizing the optimization function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of the image quantity, the lens distortion and the noise are investigated in the experiments, which demonstrate that the reconstruction approach is effective for accurate testing in the measurement system.
A Research on Fast Face Feature Points Detection on Smart Mobile Devices
Directory of Open Access Journals (Sweden)
Xiaohe Li
2018-01-01
We explore how to improve the performance of face feature point detection on mobile terminals from three aspects. First, we optimize the models used in SDM algorithms via PCA and spectral clustering. Second, we propose an evaluation criterion using linear discriminant analysis to choose the best local feature descriptors, which play a critical role in feature point detection. Third, we take advantage of the multicore architecture of mobile terminals and parallelize the optimized SDM algorithm to further improve efficiency. The experimental observations show that the resulting GPC-SDM (improved supervised descent method using spectral clustering, PCA, and GPU acceleration) reduces memory usage and is efficient enough to meet real-time requirements.
Using real time traveler demand data to optimize commuter rail feeder systems.
2012-08-01
"This report focuses on real time optimization of the Commuter Rail Circulator Route Network Design Problem (CRCNDP). The route configuration of the circulator system (where to stop and the route among the stops) is determined on a real-time ba...
Changing the values of parameters on lot size reorder point model
Directory of Open Access Journals (Sweden)
Chang Hung-Chi
2003-01-01
The Just-In-Time (JIT) philosophy has received a great deal of attention. Several actions, such as improving quality, reducing setup cost and shortening lead time, have been recognized as effective ways to achieve the underlying goal of JIT. This paper considers a partial-backorder, lot size reorder point inventory system with an imperfect production process. The objective is to simultaneously optimize the lot size, reorder point, process quality, setup cost and lead time, subject to a service-level constraint. We assume the explicit distributional form of lead time demand is unknown but that its mean and standard deviation are given. The minimax distribution free approach is utilized to solve the problem, and a numerical example is provided to illustrate the results.
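The minimax distribution-free approach rests on a worst-case bound on expected shortage that holds for every lead-time-demand distribution with known mean and standard deviation. A simplified sketch of computing a reorder point from a Scarf-type bound, ignoring the paper's joint optimization of lot size, process quality, setup cost and lead time (the numbers in the usage note are hypothetical):

```python
def minimax_reorder_point(mu, sigma, max_expected_shortage):
    """Distribution-free reorder point R = mu + k * sigma.

    Uses the worst-case (Scarf-type) bound on expected shortage,
        E[(X - R)+] <= (sigma / 2) * (sqrt(1 + k**2) - k),
    valid for any demand distribution with the given mean and standard
    deviation. Setting the bound equal to the allowed shortage and solving
    gives k = (1 - a**2) / (2 * a) with a = 2 * max_expected_shortage / sigma.
    """
    a = 2.0 * max_expected_shortage / sigma
    k = (1.0 - a * a) / (2.0 * a)
    return mu + k * sigma
```

For example, with mean lead-time demand 100, standard deviation 10, and at most 2 units of expected shortage per cycle, the safety factor is k = 1.05 and R = 110.5.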
Numerical instability of time-discretized one-point kinetic equations
International Nuclear Information System (INIS)
Hashimoto, Kengo; Ikeda, Hideaki; Takeda, Toshikazu
2000-01-01
The one-point kinetic equations with numerical errors induced by the explicit, implicit and Crank-Nicolson integration methods are derived. The zero-power transfer functions based on the present equations are demonstrated to investigate the numerical stability of the discretized systems. These demonstrations indicate unconditional stability for the implicit and Crank-Nicolson methods but present the possibility of numerical instability for the explicit method. An upper limit of time mesh spacing for the stability is formulated and several numerical calculations are made to confirm the validity of this formula.
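The stability contrast described above can be reproduced on the scalar test equation dn/dt = a·n, a stand-in for a single decaying mode (the full one-point kinetic equations with delayed neutrons are not modeled here). Explicit Euler diverges once the time mesh exceeds h = 2/|a|, while implicit (backward) Euler remains stable for any h:

```python
def explicit_euler(a, h, n0, steps):
    """Explicit (forward) Euler for dn/dt = a*n: n_{k+1} = (1 + a*h) * n_k.
    Stable for a < 0 only when |1 + a*h| <= 1, i.e. h <= 2/|a|."""
    n = n0
    for _ in range(steps):
        n = (1.0 + a * h) * n
    return n

def implicit_euler(a, h, n0, steps):
    """Implicit (backward) Euler: n_{k+1} = n_k / (1 - a*h).
    Unconditionally stable for a < 0, as the abstract notes for the implicit scheme."""
    n = n0
    for _ in range(steps):
        n = n / (1.0 - a * h)
    return n

# A stiff decaying mode (a < 0) with a time step above the explicit limit 2/|a| = 0.02.
a, h = -100.0, 0.03
print(abs(explicit_euler(a, h, 1.0, 50)) > 1.0)   # True: explicit solution blows up
print(abs(implicit_euler(a, h, 1.0, 50)) < 1.0)   # True: implicit solution decays
```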
International Nuclear Information System (INIS)
Doyle, E.K.; Jardine, A.K.S.
2001-01-01
The use of various maintenance optimization techniques at Bruce has led to cost-effective preventive maintenance applications for complex systems. As previously reported at ICONE 6 in New Orleans, 1996, several innovative practices reduced Reliability Centered Maintenance costs while maintaining the accuracy of the analysis. The optimization strategy has undergone further evolution, and at present an Integrated Maintenance Program (IMP) is in place, where an Expert Panel consisting of all players/experts proceeds through each system in a disciplined fashion and reaches agreement on all items under a rigorous time frame. It is well known that essentially three maintenance-based actions can flow from a maintenance optimization analysis: condition based maintenance, time based maintenance and time based discard. The present effort deals with time based discard decisions. Maintenance data from the Remote On-Power Fuel Changing System was used. (author)
Optimal Compensation with Hidden Action and Lump-Sum Payment in a Continuous-Time Model
International Nuclear Information System (INIS)
Cvitanic, Jaksa; Wan, Xuhu; Zhang Jianfeng
2009-01-01
We consider a problem of finding optimal contracts in continuous time, when the agent's actions are unobservable by the principal, who pays the agent with a one-time payoff at the end of the contract. We fully solve the case of quadratic cost and separable utility, for general utility functions. The optimal contract is, in general, a nonlinear function of the final outcome only, while in the previously solved cases, for exponential and linear utility functions, the optimal contract is linear in the final output value. In a specific example we compute, the first-best principal's utility is infinite, while it becomes finite with hidden action, which is increasing in value of the output. In the second part of the paper we formulate a general mathematical theory for the problem. We apply the stochastic maximum principle to give necessary conditions for optimal contracts. Sufficient conditions are hard to establish, but we suggest a way to check sufficiency using non-convex optimization.
Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai
2016-01-01
Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
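The inference half of the method above can be sketched compactly: Gaussian antecedents (one per cluster, as produced by the K-means step) weighting first-order linear consequents (the part fitted by weighted recursive least squares). The rule parameters below are hypothetical, not taken from the paper:

```python
import math

def gaussian_membership(x, center, width):
    """Membership degree of input x to a cluster center."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def ts_inference(x, rules):
    """First-order Takagi-Sugeno inference: the output is the membership-weighted
    average of the linear consequents y = a*x + b, one rule per cluster.

    rules: list of (center, width, a, b) tuples.
    """
    num = den = 0.0
    for center, width, a, b in rules:
        w = gaussian_membership(x, center, width)
        num += w * (a * x + b)
        den += w
    return num / den

# Two hypothetical rules approximating y = |x| near their centers.
rules = [(-1.0, 0.3, -1.0, 0.0), (1.0, 0.3, 1.0, 0.0)]
```

In the EFNN the centers and widths evolve as samples arrive and the (a, b) pairs are refit recursively; here they are fixed for clarity.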
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
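The prediction-correction idea can be illustrated on a toy scalar objective f(x, t) = ½(x − sin t)². This is only a structural sketch: the optimizer drift used in the prediction step is assumed known here, whereas the paper derives it from the dynamics of the optimality conditions (and the AGT/ANT variants approximate it):

```python
import math

def track_minimizer(h=0.1, steps=100, alpha=0.5, corrections=5):
    """Track x*(t) = sin(t), the minimizer of f(x, t) = 0.5 * (x - sin(t))**2.

    Each sampling interval of length h: a prediction step moves x along the
    optimizer drift, then a few gradient-descent corrections are applied to
    the newly sampled objective.
    """
    x, t = 0.0, 0.0
    prev_target = math.sin(t)
    for _ in range(steps):
        t += h
        target = math.sin(t)          # problem data sampled at rate 1/h
        x += target - prev_target     # prediction: follow the optimizer drift
        for _ in range(corrections):
            x -= alpha * (x - target)  # correction: gradient step on f(., t)
        prev_target = target
    return x, math.sin(t)
```

Dropping the prediction line and relying on corrections alone leaves a tracking lag of order h, which is the correction-only behavior the paper improves upon.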
Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining
Hussain, Shahid
2016-01-01
This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.
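One of the combinatorial problems the multi-stage optimization was generalized to, matrix chain multiplication, has the classic interval dynamic program (O(n³) time over all split points):

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications to compute the product of matrices of
    sizes dims[i] x dims[i+1], i = 0..n-1, by interval dynamic programming."""
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # chain lengths 2..n
        for i in range(n - span):
            j = i + span
            cost[i][j] = min(                # best split point k between i and j
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]
```

For instance, for matrices of sizes 10×30, 30×5 and 5×60, parenthesizing as (AB)C costs 1500 + 3000 = 4500 multiplications, which the recurrence recovers.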
Simplified model-based optimal control of VAV air-conditioning system
Energy Technology Data Exchange (ETDEWEB)
Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Construction Engineering
2005-07-01
The improvement of Variable Air Volume (VAV) system performance is one of several attempts being made to minimize the high energy use associated with the operation of heating, ventilation and air conditioning (HVAC) systems. A Simplified Optimization Process (SOP) comprising controller set point strategies and a simplified VAV model was presented in this paper. The aim of the SOP was to determine supply set points. The advantage of the SOP over previous methods was that it did not require a detailed VAV model and optimization program. In addition, the monitored data for representative local-loop control can be checked on-line, after which controller set points can be updated to ensure proper operation under actual conditions with minimum energy use. The SOP was validated using existing monitoring data and a model of an existing VAV system. Energy use simulations were compared to those of the existing VAV system. At each simulation step, 3 controller set point values were proposed and studied using the VAV model in order to select, for each point, the value corresponding to the best performance of the VAV system. Simplified VAV component models were presented. Strategies for controller set points were described, including zone air temperature, duct static pressure, chilled water supply and supply air temperature set points. Simplified optimization process calculations were presented. Results indicated that the SOP provided significant energy savings when applied to specific AHU systems. In a comparison with a Detailed Optimization Process (DOP), the SOP was capable of determining set points close to those obtained by the DOP. However, it was noted that the controller set points determined by the SOP need a certain amount of time to reach optimal values when outdoor conditions or thermal loads are significantly changed. It was suggested that this disadvantage could be overcome by the use of a dynamic incremental value, which
International Nuclear Information System (INIS)
Mundt, Michael; Kuemmel, Stephan
2006-01-01
The integral equation for the time-dependent optimized effective potential (TDOEP) in time-dependent density-functional theory is transformed into a set of partial-differential equations. These equations only involve occupied Kohn-Sham orbitals and orbital shifts resulting from the difference between the exchange-correlation potential and the orbital-dependent potential. Due to the success of an analogous scheme in the static case, a scheme that propagates orbitals and orbital shifts in real time is a natural candidate for an exact solution of the TDOEP equation. We investigate the numerical stability of such a scheme. An approximation beyond the Krieger-Li-Iafrate approximation for the time-dependent exchange-correlation potential is analyzed.
Optimization of NANOGrav's time allocation for maximum sensitivity to single sources
International Nuclear Information System (INIS)
Christy, Brian; Anella, Ryan; Lommen, Andrea; Camuccio, Richard; Handzo, Emma; Finn, Lee Samuel
2014-01-01
Pulsar timing arrays (PTAs) are a collection of precisely timed millisecond pulsars (MSPs) that can search for gravitational waves (GWs) in the nanohertz frequency range by observing characteristic signatures in the timing residuals. The sensitivity of a PTA depends on the direction of the propagating GW source, the timing accuracy of the pulsars, and the allocation of the available observing time. The goal of this paper is to determine the optimal time allocation strategy among the MSPs in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) for a single source of GW under a particular set of assumptions. We consider both an isotropic distribution of sources across the sky and a specific source in the Virgo cluster. This work improves on previous efforts by modeling the effect of intrinsic spin noise for each pulsar. We find that, in general, the array is optimized by maximizing time spent on the best-timed pulsars, with sensitivity improvements typically ranging from a factor of 1.5 to 4.
Overlapping quadratic optimal control of linear time-varying commutative systems
Czech Academy of Sciences Publication Activity Database
Bakule, Lubomír; Rodellar, J.; Rossell, J. M.
2002-01-01
Roč. 40, č. 5 (2002), s. 1611-1627 ISSN 0363-0129 R&D Projects: GA AV ČR IAA2075802 Institutional research plan: CEZ:AV0Z1075907 Keywords : overlapping * optimal control * linear time-varying systems Subject RIV: BC - Control Systems Theory Impact factor: 1.441, year: 2002
Time-limited optimal dynamics beyond the Quantum Speed Limit
DEFF Research Database (Denmark)
Gajdacz, Miroslav; Das, Kunal K.; Arlt, Jan
2015-01-01
The quantum speed limit sets the minimum time required to transfer a quantum system completely into a given target state. At shorter times the higher operation speed has to be paid for with a loss of fidelity. Here we quantify the trade-off between the fidelity and the duration in a system driven [...]. The trade-off, expressed in terms of the direct Hilbert velocity, provides a robust prediction of the quantum speed limit and allows one to adapt the control optimization such that it yields a predefined fidelity. The results are verified numerically in a multilevel system with a constrained Hamiltonian, and a classification [...]
Storage Policies and Optimal Shape of a Storage System
Zaerpour, N.; De Koster, René; Yu, Yugang
2013-01-01
The response time of a storage system is mainly influenced by its shape (configuration), the storage assignment and retrieval policies, and the location of the input/output (I/O) points. In this paper, we show that the optimal shape of a storage system, which minimises the response time for single
Utilization of reduced fuelling ripple set in ROP detector layout optimization
International Nuclear Information System (INIS)
Kastanya, Doddy
2012-01-01
Highlights: ► ADORE is an ROP detector layout optimization algorithm for CANDU reactors. ► The effect of using a reduced set of fuelling ripples in ADORE is assessed. ► Significant speedup can be realized by adopting this approach. ► The quality of the results is comparable to results from the full set of ripples. - Abstract: The ADORE (Alternative Detector layout Optimization for REgional overpower protection system) algorithm for performing the optimization of regional overpower protection (ROP) for CANDU® reactors has been recently developed. This algorithm utilizes the simulated annealing (SA) stochastic optimization technique to come up with an optimized detector layout for the ROP systems. For each history in the SA iteration where a particular detector layout is evaluated, the goodness of this detector layout is measured in terms of its trip set point value, which is obtained by performing a probabilistic trip set point calculation using the ROVER-F code. Since during each optimization execution thousands of candidate detector layouts are evaluated, the overall optimization process is time-consuming. Since the number of fuelling ripples controls the execution time of each ROVER-F evaluation, reducing the number of fuelling ripples will reduce the overall execution time. This approach has been investigated and the results are presented in this paper. The challenge is to construct a set of representative fuelling ripples which will significantly speed up the optimization process while guaranteeing that the resulting detector layout has a quality similar to the ones produced when the complete set of fuelling ripples is employed.
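The ADORE loop pairs simulated annealing with an expensive per-layout evaluation. A toy sketch of that structure, with a hypothetical max-gap cost standing in for the probabilistic trip-set-point calculation done by ROVER-F (all sizes and the cooling schedule are illustrative only):

```python
import math
import random

def max_gap(layout, n):
    """Largest spacing in the sorted layout -- a toy surrogate for the
    trip-set-point evaluation of a candidate detector layout."""
    s = sorted(layout)
    return max([s[0], n - 1 - s[-1]] + [b - a for a, b in zip(s, s[1:])])

def anneal_layout(n=100, k=8, iters=5000, t0=5.0, seed=1):
    """Simulated annealing over k detector positions chosen from n sites."""
    rng = random.Random(seed)
    layout = rng.sample(range(n), k)
    best = list(layout)
    for i in range(iters):
        temp = t0 * (1.0 - i / iters) + 1e-9        # linear cooling schedule
        cand = list(layout)
        cand[rng.randrange(k)] = rng.randrange(n)   # perturb one detector position
        if len(set(cand)) < k:
            continue                                # positions must stay distinct
        delta = max_gap(cand, n) - max_gap(layout, n)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            layout = cand                           # Metropolis acceptance
            if max_gap(layout, n) < max_gap(best, n):
                best = list(layout)
    return best
```

In ADORE each `max_gap` call would be a full ROVER-F run, which is exactly why shrinking the fuelling-ripple set pays off: the cost per history drops while the annealing structure is unchanged.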
Optimization of Allowed Outage Time and Surveillance Test Intervals
Energy Technology Data Exchange (ETDEWEB)
Al-Dheeb, Mujahed; Kang, Sunkoo; Kim, Jonghyun [KEPCO international nuclear graduate school, Ulsan (Korea, Republic of)
2015-10-15
The primary purpose of surveillance testing is to assure that the components of standby safety systems will be operable when they are needed in an accident. By testing these components, failures can be detected that may have occurred since the last test or the time when the equipment was last known to be operational. The probability that a system or system component performs a specified function or mission under given conditions at a prescribed time is called availability (A). Unavailability (U), as a risk measure, is just the complementary probability to A(t). An increase in U means the risk is increased as well. The allowed outage time (D) and the surveillance test interval (T) have an important impact on component, or system, unavailability. The extension of D lengthens the maintenance duration distributions for at-power operations. This, in turn, increases the unavailability due to maintenance in the systems analysis. As for T, overly frequent surveillances can result in high system unavailability. This is because the system may be taken out of service often due to the surveillance itself and due to the repair of test-caused failures of the component. The test-caused failures include those incurred by wear and tear of the component due to the surveillances. On the other hand, as the surveillance interval increases, the component's unavailability will grow because of increased occurrences of time-dependent random failures. In that situation, the component cannot be relied upon, and accordingly the system unavailability will increase. Thus, there should be an optimal component surveillance interval in terms of the corresponding system availability. This paper aims at finding the optimal T and D which result in minimum unavailability, which in turn reduces the risk. The methodology in Section 2 is applied to find the values of optimal T and D for two components, i.e., the safety injection pump (SIP) and the turbine-driven aux feedwater pump (TDAFP). Section 4 addresses the interaction between D and T. In general
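The qualitative trade-off described above (undetected random failures grow with T, test-caused downtime shrinks with T) has a standard simplified form. This sketch uses a textbook two-term unavailability model with hypothetical rates, not the paper's full methodology:

```python
import math

def unavailability(T, failure_rate, test_duration):
    """Time-averaged unavailability of a standby component tested every T hours:
    failure_rate * T / 2 from undetected random failures accumulating between
    tests, plus test_duration / T from downtime caused by the test itself."""
    return failure_rate * T / 2.0 + test_duration / T

def optimal_interval(failure_rate, test_duration):
    """Minimizer of the two-term model: T* = sqrt(2 * test_duration / failure_rate)."""
    return math.sqrt(2.0 * test_duration / failure_rate)
```

For a hypothetical failure rate of 1e-4 per hour and 2 hours of test downtime, the optimum is T* = 200 hours; testing either more or less often raises the average unavailability.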
Institute of Scientific and Technical Information of China (English)
LIN; Kuang-Jang; LIN; Chii-Ruey
2010-01-01
The photovoltaic array has an optimal operating point at which it delivers maximum power. However, this optimal operating point shifts with the strength and angle of solar radiation and with changes in the environment and load. Because these conditions change constantly, it is very difficult to locate the optimal operating point with a fixed mathematical model. Therefore, this study focuses on the application of Fuzzy Logic Control theory and the Three-point Weight Comparison Method to locate the optimal operating point of a solar panel and achieve maximum efficiency in power generation. The Three-point Weight Comparison Method compares points on the characteristic curve of photovoltaic array voltage versus output power; it is a rather simple way to track the maximum power. Fuzzy Logic Control, on the other hand, can be used to solve problems that cannot be effectively dealt with by calculation rules, such as concepts, contemplation, deductive reasoning, and identification. This paper therefore applies both methods in successive simulations. The simulation results show that the Three-point Weight Comparison Method is more effective in environments with more frequent changes of solar radiation, whereas Fuzzy Logic Control has better tracking efficiency in environments with violent changes of solar radiation.
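A minimal sketch of the three-point comparison idea on a hypothetical (not the paper's) photovoltaic power curve: power is measured at three voltages v − dv, v and v + dv, and the operating point steps toward whichever is largest, holding position at the peak:

```python
def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Hypothetical PV characteristic: power rises with voltage, then
    collapses toward the open-circuit voltage v_oc."""
    if v <= 0.0 or v >= v_oc:
        return 0.0
    return v * i_sc * (1.0 - (v / v_oc) ** 8)

def three_point_mppt(v=10.0, dv=0.5, iterations=60):
    """Three-point comparison: evaluate power at v - dv, v, v + dv and
    step toward the largest of the three; stay put once v straddles the peak."""
    for _ in range(iterations):
        left, mid, right = pv_power(v - dv), pv_power(v), pv_power(v + dv)
        if right > mid and right >= left:
            v += dv
        elif left > mid:
            v -= dv
    return v
```

On this curve the tracker climbs from 10 V and settles within one step of the true maximum power point near 30.4 V.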
Time-explicit methods for joint economical and geological risk mitigation in production optimization
DEFF Research Database (Denmark)
Christiansen, Lasse Hjuler; Capolei, Andrea; Jørgensen, John Bagterp
2016-01-01
Real-life applications of production optimization face challenges of risks related to unpredictable fluctuations in oil prices and sparse geological data. Consequently, operating companies are reluctant to adopt model-based production optimization into their operations. Conventional production [...] of mitigating economical and geological risks. As opposed to conventional strategies that focus on a single long-term objective, time-explicit (TE) methods seek to reduce risks and promote returns over the entire reservoir life by optimization of a given ensemble-based geological risk measure over time. By explicit involvement of time, economical risks are implicitly addressed by balancing short-term and long-term objectives throughout the reservoir life. Open-loop simulations of a two-phase synthetic reservoir demonstrate that TE methods may significantly improve short-term risk measures such as expected return, standard [...]
The indication and the point at issue in total body irradiation (TBI)
International Nuclear Information System (INIS)
Kikuchi, Yuzo; Nishino, Shigeo.
1992-01-01
The role of radiation in the cause of interstitial pneumonitis (IP) was analysed here. Optimal dose fractionation was also discussed, with respect to IP, in terms of total absorbed lung dose, dose rate and fractionation. An optimal time schedule of 3, 4 or 6 fractions with a fraction size of ≤ 4 Gy, using conventional or hyperfractionated irradiation, was recommended. Finally, the present status of, and the points at issue in, the irradiation of blood for the prevention of GVHD were discussed. (author)
Optimizing Ship Speed to Minimize Total Fuel Consumption with Multiple Time Windows
Directory of Open Access Journals (Sweden)
Jae-Gon Kim
2016-01-01
We study the ship speed optimization problem with the objective of minimizing the total fuel consumption. We consider multiple time windows for each port call as constraints and formulate the problem as a nonlinear mixed integer program. We derive intrinsic properties of the problem and develop an exact algorithm based on these properties. Computational experiments show that the suggested algorithm is very efficient in finding an optimal solution.
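For a single leg considered in isolation, fuel burn per unit time grows roughly with the cube of speed, so the fuel-minimizing feasible speed is the slowest one compatible with the arrival window and the ship's speed range. A sketch of that observation with hypothetical numbers (the paper's exact algorithm additionally handles multiple coupled legs and windows):

```python
def fuel_minimizing_speed(distance, earliest, latest, v_min, v_max):
    """Slowest feasible speed for one leg: arrival time distance / v must
    land in [earliest, latest] hours, and v must lie in [v_min, v_max]."""
    lo = max(v_min, distance / latest)     # any slower and the ship arrives too late
    hi = min(v_max, distance / earliest)   # any faster and it arrives before the window opens
    if lo > hi:
        return None                        # the time window cannot be met
    return lo                              # fuel rises with speed, so take the minimum
```

For a 240 nm leg with an arrival window of 10 to 20 hours and a speed range of 8 to 25 knots, the optimum is 12 knots (arriving exactly at the close of the window).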
Reducing and filtering point clouds with enhanced vector quantization.
Ferrari, Stefano; Ferrigno, Giancarlo; Piuri, Vincenzo; Borghese, N Alberto
2007-01-01
Modern scanners are able to deliver huge quantities of three-dimensional (3-D) data points sampled on an object's surface, in a short time. These data have to be filtered and their cardinality reduced to come up with a mesh manageable at interactive rates. We introduce here a novel procedure to accomplish these two tasks, which is based on an optimized version of soft vector quantization (VQ). The resulting technique has been termed enhanced vector quantization (EVQ) since it introduces several improvements with respect to the classical soft VQ approaches. These are based on computationally expensive iterative optimization; local computation is introduced here, by means of an adequate partitioning of the data space called hyperbox (HB), to reduce the computational time so as to be linear in the number of data points N, saving more than 80% of time in real applications. Moreover, the algorithm can be fully parallelized, thus leading to an implementation that is sublinear in N. The voxel side and the other parameters are automatically determined from data distribution on the basis of the Zador's criterion. This makes the algorithm completely automatic. Because the only parameter to be specified is the compression rate, the procedure is suitable even for nontrained users. Results obtained in reconstructing faces of both humans and puppets as well as artifacts from point clouds publicly available on the web are reported and discussed, in comparison with other methods available in the literature. EVQ has been conceived as a general procedure, suited for VQ applications with large data sets whose data space has relatively low dimensionality.
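The codebook idea underlying VQ-based point-cloud reduction can be sketched with plain hard-assignment k-means on 2-D points; EVQ's soft assignments, hyperbox partitioning and automatic parameter selection are deliberately omitted, and the seeding rule here is a simplification:

```python
def kmeans_codebook(points, k, iters=20):
    """Reduce a 2-D point cloud to k representative points (the codebook) --
    the hard-assignment core of vector quantization."""
    step = max(1, len(points) // k)
    centers = [points[i * step] for i in range(k)]   # simple spread-out seeding
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)                    # assign to nearest center
        for i, cl in enumerate(clusters):
            if cl:                                   # keep old center if cluster empties
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers
```

The returned centers play the role of the reduced point set; EVQ accelerates this step to linear time in N via local computation within hyperboxes.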
Lyapunov matrices approach to the parametric optimization of time-delay systems
Directory of Open Access Journals (Sweden)
Duda Józef
2015-09-01
In the paper a Lyapunov matrices approach to the parametric optimization problem of time-delay systems with a P-controller is presented. The value of the integral quadratic performance index is equal to the value of the Lyapunov functional at the initial function of the time-delay system. The Lyapunov functional is determined by means of the Lyapunov matrix.
Ambush frequency should increase over time during optimal predator search for prey.
Alpern, Steve; Fokkink, Robbert; Timmer, Marco; Casas, Jérôme
2011-11-07
We advance and apply the mathematical theory of search games to model the problem faced by a predator searching for prey. Two search modes are available: ambush and cruising search. Some species can adopt either mode, with their choice at a given time traditionally explained in terms of varying habitat and physiological conditions. We present an additional explanation of the observed predator alternation between these search modes, which is based on the dynamical nature of the search game they are playing: the possibility of ambush decreases the propensity of the prey to frequently change locations and thereby renders it more susceptible to the systematic cruising search portion of the strategy. This heuristic explanation is supported by showing that in a new idealized search game where the predator is allowed to ambush or search at any time, and the prey can change locations at intermittent times, optimal predator play requires an alternation (or mixture) over time of ambush and cruise search. Thus, our game is an extension of the well-studied 'Princess and Monster' search game. Search games are zero sum games, where the pay-off is the capture time and neither the Searcher nor the Hider knows the location of the other. We are able to determine the optimal mixture of the search modes when the predator uses a mixture which is constant over time, and also to determine how the mode mixture changes over time when dynamic strategies are allowed (the ambush probability increases over time). In particular, we establish the 'square root law of search predation': the optimal proportion of active search equals the square root of the fraction of the region that has not yet been explored.
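The 'square root law of search predation' stated above is directly computable: the optimal cruising share equals the square root of the unexplored fraction, so the ambush share rises as the search progresses.

```python
import math

def active_search_share(unexplored_fraction):
    """Square root law: the optimal proportion of time spent in cruising
    (active) search equals sqrt(unexplored fraction); the ambush share is
    the complement, 1 - sqrt(unexplored fraction)."""
    return math.sqrt(unexplored_fraction)

# As the unexplored fraction u falls over the course of the search, the
# cruising share sqrt(u) falls monotonically, so ambush becomes more frequent.
shares = [active_search_share(u) for u in (1.0, 0.5, 0.25, 0.1)]
```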
Dual time point FDG-PET/CT imaging...; Potential tool for diagnosis of breast cancer
International Nuclear Information System (INIS)
Zytoon, A.A.; Murakami, K.; El-Kholy, M.R.; El-Shorbagy, E.
2008-01-01
Aim: This prospective study was designed to assess the utility of the dual time point imaging technique using 2-[18F]-fluoro-2-deoxy-D-glucose (FDG) positron-emission tomography/computed tomography (PET/CT) to detect primary breast cancer and to determine whether it is useful for the detection of small and non-invasive cancers, as well as cancers in dense breast tissue. Methods: One hundred and eleven patients with newly diagnosed breast cancer underwent two sequential PET/CT examinations (dual time point imaging) for preoperative staging. The maximum standardized uptake value (SUVmax) of FDG was measured at both time points. The percentage change in SUVmax (ΔSUVmax%) between time points 1 (SUVmax1) and 2 (SUVmax2) was calculated. The patients were divided into groups: invasive (n = 82), non-invasive (n = 29); large (>10 mm; n = 80), small (≤10 mm; n = 31); tumours in dense breasts (n = 61), and tumours in non-dense breasts (n = 50). The tumour:background (T:B) ratios at both time points were measured and the ΔSUVmax% and ΔT:B% values were calculated. All PET study results were correlated with the histopathology results. Results: Of the 111 cancer lesions, 88 (79.3%) showed an increase and 23 (20.7%) showed either no change [10 (9%)] or a decrease [13 (11.7%)] in the SUVmax over time. Of the 111 contralateral normal breasts, nine (8.1%) showed an increase and 102 (91.9%) showed either no change [17 (15.3%)] or a decrease [85 (76.6%)] in the SUVmax over time. The mean ± SD of SUVmax1, SUVmax2, and ΔSUVmax% were 4.9 ± 3.6, 6.0 ± 4.5, and 22.6 ± 13.1% for invasive cancers; 4.1 ± 3.8, 4.4 ± 4.8, and -2.4 ± 18.5% for non-invasive cancers; 2.3 ± 1.9, 2.7 ± 2.3, and 12.9 ± 21.1% for small cancers; 5.6 ± 3.7, 6.8 ± 4.8, and 17.3 ± 17.1% for large cancers; 4.9 ± 3.7, 5.8 ± 4.8, and 15.1 ± 17.6% for cancers in dense breasts; and 4.5 ± 3.6, 5.4 ± 4.5, and 17.2 ± 19.2% for cancers in non-dense breasts. The receiver-operating characteristic (ROC) analysis
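The retention index used throughout the study is a simple percentage change between the two time points; a sketch (the function name is ours):

```python
# Percentage change in SUVmax between the early (SUVmax1) and delayed
# (SUVmax2) acquisitions of a dual time point PET/CT study.

def delta_suv_percent(suv1, suv2):
    return (suv2 - suv1) / suv1 * 100.0

# A lesion rising from SUVmax 4.0 to 5.0 shows a 25% increase over time,
# the pattern the study associates with malignant rather than normal tissue.
change = delta_suv_percent(4.0, 5.0)
```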
Histogram bin width selection for time-dependent Poisson processes
International Nuclear Information System (INIS)
Koyama, Shinsuke; Shinomoto, Shigeru
2004-01-01
In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method.
Histogram bin width selection for time-dependent Poisson processes
Energy Technology Data Exchange (ETDEWEB)
Koyama, Shinsuke; Shinomoto, Shigeru [Department of Physics, Graduate School of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan)
2004-07-23
In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method.
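A practical MSE-based bin-width selector in this spirit can be sketched as follows. The cost function below follows Shimazaki and Shinomoto's later histogram-optimization criterion rather than anything printed in this abstract, so treat it as an assumption-laden illustration:

```python
# Pick the bin width minimizing C(w) = (2*mean - var) / w^2, where mean and
# var are the mean and (biased) variance of the per-bin event counts; a small
# cost corresponds to a small estimated MSE of the rate histogram.

def optimal_bin_width(event_times, span, max_bins=50):
    best = None
    for n_bins in range(2, max_bins + 1):
        width = span / n_bins
        counts = [0] * n_bins
        for t in event_times:
            counts[min(int(t / width), n_bins - 1)] += 1
        mean = sum(counts) / n_bins
        var = sum((c - mean) ** 2 for c in counts) / n_bins
        cost = (2.0 * mean - var) / width ** 2
        if best is None or cost < best[0]:
            best = (cost, width)
    return best[1]

# Evenly spaced demo events over [0, 1): a flat rate favors the widest bins.
events = [i * 0.01 for i in range(100)]
width = optimal_bin_width(events, span=1.0)
```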
Directory of Open Access Journals (Sweden)
Chia-Chi Wang
2016-03-01
Full Text Available Creatine plays an important role in muscle energy metabolism. Postactivation potentiation (PAP) is a phenomenon that can acutely increase muscle power, but it is an individualized process that is influenced by muscle fatigue. This study examined the effects of creatine supplementation on explosive performance and the optimal individual PAP time during a set of complex training bouts. Thirty explosive athletes performed tests of back squat for one repetition maximum (1RM) strength and complex training bouts for determining the individual optimal timing of PAP, and the height and peak power of a countermovement jump before and after the supplementation. Subjects were assigned to a creatine or placebo group and then consumed 20 g of creatine or carboxymethyl cellulose per day for six days. After the supplementation, the 1RM strength in the creatine group significantly increased (p < 0.05). The optimal individual PAP time in the creatine group was also significantly earlier than that before supplementation and that of the placebo group after supplementation (p < 0.05). There was no significant difference in jump performance between the groups. This study demonstrates that creatine supplementation improves maximal muscle strength and the optimal individual PAP time of complex training but has no effect on explosive performance.
DEFF Research Database (Denmark)
Aanæs, Henrik; Dahl, Anders Lindbjerg; Pedersen, Kim Steenstrup
2012-01-01
on spatial invariance of interest points under changing acquisition parameters by measuring the spatial recall rate. The scope of this paper is to investigate the performance of a number of existing well-established interest point detection methods. Automatic performance evaluation of interest points is hard......Not all interest points are equally interesting. The most valuable interest points lead to optimal performance of the computer vision method in which they are employed. But a measure of this kind will be dependent on the chosen vision application. We propose a more general performance measure based...... position. The LED illumination provides the option for artificially relighting the scene from a range of light directions. This data set has given us the ability to systematically evaluate the performance of a number of interest point detectors. The highlights of the conclusions are that the fixed scale...
The Optimization of Transportation Costs in Logistics Enterprises with Time-Window Constraints
Directory of Open Access Journals (Sweden)
Qingyou Yan
2015-01-01
Full Text Available This paper presents a model for solving a multiobjective vehicle routing problem with soft time-window constraints that specify the earliest and latest arrival times of customers. If a customer is serviced before the earliest specified arrival time, extra inventory costs are incurred. If the customer is serviced after the latest arrival time, penalty costs must be paid. Both the total transportation cost and the required fleet size are minimized in this model, which also accounts for the given capacity limitations of each vehicle. The total transportation cost consists of direct transportation costs, extra inventory costs, and penalty costs. This multiobjective optimization is solved by using a modified genetic algorithm approach. The output of the algorithm is a set of optimal solutions that represent the trade-off between total transportation cost and the fleet size required to service customers. The influential impact of these two factors is analyzed through the use of a case study.
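The soft time-window cost structure described above can be sketched directly (the rate constants are our own illustrative choices, not values from the paper):

```python
# Soft time-window cost for one customer: arriving before the earliest
# time incurs extra inventory cost, arriving after the latest time incurs
# penalty cost, and arriving within the window costs nothing extra.

def window_cost(arrival, earliest, latest, inv_rate=1.0, pen_rate=5.0):
    if arrival < earliest:
        return inv_rate * (earliest - arrival)   # extra inventory cost
    if arrival > latest:
        return pen_rate * (arrival - latest)     # penalty cost
    return 0.0

# A 9:00-17:00 window: arriving at 8:00 costs one hour of inventory,
# arriving at 18:00 costs one hour of (steeper) penalty.
early, late, on_time = window_cost(8, 9, 17), window_cost(18, 9, 17), window_cost(12, 9, 17)
```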
Optimal Two-Impulse Trajectories with Moderate Flight Time for Earth-Moon Missions
Directory of Open Access Journals (Sweden)
Sandro da Silva Fernandes
2012-01-01
describe the motion of the space vehicle: the well-known patched-conic approximation and two versions of the planar circular restricted three-body problem (PCR3BP). In the patched-conic approximation model, two parameters are optimized: the initial phase angle of the space vehicle and the first velocity impulse. In the PCR3BP models, four parameters are optimized: the initial phase angle of the space vehicle, the flight time, and the first and second velocity impulses. In all cases, the optimization problem has one degree of freedom and can be solved by means of an algorithm based on the gradient method in conjunction with the Newton-Raphson method.
Optimal control of LQR for discrete time-varying systems with input delays
Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng
2018-04-01
In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out and its results show that our two approaches are both feasible and very effective.
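Once the delays are eliminated, the core computation is the standard discrete-time Riccati backward recursion. A scalar sketch with illustrative constants of our choosing, not values from the paper:

```python
# Backward Riccati recursion for a scalar system x_{k+1} = a x_k + b u_k
# with stage cost q x^2 + r u^2; returns the time-varying feedback gains
# (u_k = -k_k x_k) and the converged Riccati value.

def lqr_scalar(a, b, q, r, horizon):
    p = q                                   # terminal Riccati value
    gains = []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)   # feedback gain
        p = q + a * p * a - a * p * b * k   # Riccati backward step
        gains.append(k)
    return gains[::-1], p

gains, p_inf = lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0, horizon=60)
# For these values the recursion converges to p = (1 + sqrt(5)) / 2.
```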
MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM
Directory of Open Access Journals (Sweden)
I. Elzein
2015-01-01
Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power (Pmax) using the optimal duty ratio (D) for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications, and is a robust analytical method, unlike traditional MPPT algorithms, which rely more on trial and error or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
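The load-matching idea can be sketched for an ideal buck-boost converter, a textbook special case consistent with the procedure described above (the component values are our own):

```python
import math

# An ideal buck-boost converter presents an input resistance
#   R_in = R_load * ((1 - D) / D)^2,
# so the duty ratio D can be chosen to match the PV module's optimal
# internal impedance R_mpp at the maximum power point.

def optimal_duty_buck_boost(r_mpp, r_load):
    # Solve r_mpp = r_load * ((1 - D) / D)^2 for D in (0, 1).
    return 1.0 / (1.0 + math.sqrt(r_mpp / r_load))

d = optimal_duty_buck_boost(r_mpp=10.0, r_load=10.0)  # matched load: D = 0.5
```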
Optimization Of Scan Range For 3d Point Localization In Statscan Digital Medical Radiology
Directory of Open Access Journals (Sweden)
Jacinta S. Kimuyu
2015-08-01
Full Text Available The emergence of computerized medical imaging in the early 1970s, which merged with digital technology in the 1980s, was celebrated as a major breakthrough in three-dimensional (3D) medicine. However, a recent South African innovation, the high-speed scanning Lodox Statscan critical digital radiology modality, posed challenges in X-ray photogrammetry due to the system's intricate imaging geometry. The study explored the suitability of the Direct Linear Transformation (DLT) as a method for the determination of 3D coordinates of targeted points from multiple images acquired with the Statscan X-ray system, and the optimization of the scan range. This investigation was carried out as a first step towards the development of a method to determine the accurate positions of points on or inside the human body. The major causes of errors in three-dimensional point localization using Statscan images were, firstly, the X-ray beam divergence and, secondly, the position of the point targets above the X-ray platform. The experiments carried out with two reference frames showed that point positions could be established with RMS values in the mm range in the middle axis of the X-ray patient platform. This range of acceptable mm accuracies extends about 15 to 20 cm sideways towards the edge of the X-ray table and to about 20 cm above the table surface. Beyond this range, accuracy deteriorated significantly, reaching RMS values of 30 mm to 40 mm. The experiments further showed that the inclusion of control points close to the table edges and more than 20 cm above the table resulted in lower accuracies for the L-parameters of the DLT solution than those derived from points close to the centre axis only. As the accuracy of the L-parameters propagates into the accuracy of the final coordinates of newly determined points, it is essential to restrict the space of the control points to the limits described above. If one adopts the usual approach of surrounding the object by known control points, then
Directory of Open Access Journals (Sweden)
Yuan Chen
2011-09-01
Full Text Available This paper proposes a piecewise acceleration-optimal and smooth-jerk trajectory planning method for a robot manipulator. The optimal objective function is given by the weighted sum of two terms having opposite effects: the maximal acceleration and the minimal jerk. Some computing techniques are proposed to determine the optimal solution. These techniques take both the time intervals between two interpolation points and the control points of the B-spline function as optimization variables, redefine the kinematic constraints as constraints on the optimization variables, and reformulate the objective function in matrix form. The feasibility of the optimal method is illustrated by simulation and experimental results with the pan mechanism of a cooking robot.
Determining decoupling points in supply chain networks using the NSGA-II algorithm
Energy Technology Data Exchange (ETDEWEB)
Ebrahimiarjestan, M.; Wang, G.
2017-07-01
Purpose: In the model, we use the concepts of Lee and Amaral (2002) and Tang and Zhou (2009) and offer a multi-criteria decision-making model that identifies the decoupling points so as to minimize production costs, minimize the product delivery time to customers and maximize customer satisfaction. Design/methodology/approach: This yields a triple-objective model; a meta-heuristic method (NSGA-II) is used to solve it and to identify the Pareto optimal points. The max (min) method was used. Findings: Our results demonstrate the good performance of NSGA-II in extracting Pareto solutions for the proposed model, which determines decoupling points in a supply network. Originality/value: Several approaches have been proposed so far, each modelling only part of this concept; the model defined here treats the concept more generally. In this model, we face a multi-criteria decision problem that includes minimization of production costs and of product delivery time to customers, as well as maximization of customer satisfaction.
Determining decoupling points in supply chain networks using the NSGA-II algorithm
International Nuclear Information System (INIS)
Ebrahimiarjestan, M.; Wang, G.
2017-01-01
Purpose: In the model, we use the concepts of Lee and Amaral (2002) and Tang and Zhou (2009) and offer a multi-criteria decision-making model that identifies the decoupling points so as to minimize production costs, minimize the product delivery time to customers and maximize customer satisfaction. Design/methodology/approach: This yields a triple-objective model; a meta-heuristic method (NSGA-II) is used to solve it and to identify the Pareto optimal points. The max (min) method was used. Findings: Our results demonstrate the good performance of NSGA-II in extracting Pareto solutions for the proposed model, which determines decoupling points in a supply network. Originality/value: Several approaches have been proposed so far, each modelling only part of this concept; the model defined here treats the concept more generally. In this model, we face a multi-criteria decision problem that includes minimization of production costs and of product delivery time to customers, as well as maximization of customer satisfaction.
Optimal trading quantity integration as a basis for optimal portfolio management
Directory of Open Access Journals (Sweden)
Saša Žiković
2005-06-01
Full Text Available The author in this paper points out the reasons behind calculating and using the optimal trading quantity in conjunction with Markowitz's modern portfolio theory. In the opening part the author presents an example of calculating optimal weights using Markowitz's mean-variance approach, followed by an explanation of the basic logic behind the optimal trading quantity. The use of the optimal trading quantity is not limited to systems with Bernoulli outcomes, but can also be used when trading shares, futures, options, etc. The optimal trading quantity points out two often-overlooked axioms: (1) a system with negative mathematical expectancy can never be transformed into a system with positive mathematical expectancy; (2) by missing the optimal trading quantity an investor can turn a system with positive expectancy into a negative one. The optimal trading quantity is the quantity which maximizes the geometric mean (growth function) of a particular system. To determine the optimal trading quantity for simpler systems, with a very limited number of outcomes, a set of Kelly's formulas is appropriate. In the conclusion a summary of the paper is presented.
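For a simple Bernoulli system, the set of Kelly's formulas the paper mentions reduces to one line; the sketch below also checks the second axiom numerically (the parameters are illustrative):

```python
import math

# Kelly fraction for a Bernoulli bet: win probability p, net odds b.
# f* maximizes the growth rate g(f) = p*ln(1 + b*f) + q*ln(1 - f).

def kelly_fraction(p, b):
    q = 1.0 - p
    return (b * p - q) / b  # positive only when expectancy b*p - q > 0

def growth_rate(f, p, b):
    q = 1.0 - p
    return p * math.log(1.0 + b * f) + q * math.log(1.0 - f)

f_star = kelly_fraction(p=0.6, b=1.0)  # 60% win rate at even odds -> f* = 0.2
```

Overshooting the optimum illustrates axiom (2): at f = 0.9 the growth rate of this positive-expectancy system turns negative.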
DEFF Research Database (Denmark)
Zeymer, Uwe; Montalescot, Gilles; Ardissino, Diego
2016-01-01
The optimal time-point of the initiation of P2Y12 antagonist therapy in patients with non-ST elevation acute coronary syndromes (NSTE-ACS) is still a matter of debate. European guidelines recommend P2Y12 antagonist therapy as soon as possible after first medical contact. However, the only trial which compared the two...... strategies did not demonstrate any benefit of pre-treatment with prasugrel before angiography compared to starting therapy after angiography and just prior to percutaneous coronary intervention (PCI). This paper summarizes the results of pharmacodynamic and previous studies, and gives recommendations
Zeng, Hao; Zhang, Jingrui
2018-04-01
The low-thrust version of fuel-optimal transfers between periodic orbits with different energies in the vicinity of the five libration points is explored in depth in the Circular Restricted Three-Body Problem. An indirect optimization technique incorporating constraint gradients is employed to further improve the computational efficiency and accuracy of the algorithm. The required optimal thrust magnitude and direction can be determined to create the bridging trajectory that connects the invariant manifolds. A hierarchical design strategy dividing the constraint set is proposed to seek the optimal solution when the problem cannot be solved directly. Meanwhile, the solution procedure and the value ranges of the variables used are summarized. To highlight the effectiveness of the transfer scheme, and aiming at different types of libration point orbits, transfer trajectories between some sample orbits, including Lyapunov orbits, planar orbits, halo orbits, axial orbits, vertical orbits and butterfly orbits for the collinear and triangular libration points, are investigated with various times of flight. Numerical results show that the fuel consumption varies from a few kilograms to tens of kilograms, depending on the locations and types of mission orbits as well as the corresponding invariant manifold structures, and indicate that low-thrust transfers may be a beneficial option for extended science missions around different libration points.
A point implicit time integration technique for slow transient flow problems
Energy Technology Data Exchange (ETDEWEB)
Kadioglu, Samet Y., E-mail: kadioglu@yildiz.edu.tr [Department of Mathematical Engineering, Yildiz Technical University, 34210 Davutpasa-Esenler, Istanbul (Turkey); Berry, Ray A., E-mail: ray.berry@inl.gov [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States); Martineau, Richard C. [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States)
2015-05-15
Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very
A point implicit time integration technique for slow transient flow problems
International Nuclear Information System (INIS)
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-01-01
Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very
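As a minimal illustration of the point implicit idea (our own scalar toy, not the authors' flow solver), consider stiff decay y' = -lam*y: treating the local unknown implicitly gives an update that needs no iteration yet remains stable for arbitrarily large time steps:

```python
# Point implicit update for  y' = -lam*y + source:  the local variable is
# treated implicitly, the source explicitly, so each step is a pointwise
# solve with no iteration, and the step is stable for any dt > 0.

def point_implicit_step(y, dt, lam, source):
    # y_new = y + dt * (-lam * y_new + source)  =>  solve for y_new:
    return (y + dt * source) / (1.0 + dt * lam)

y = 1.0
for _ in range(10):
    y = point_implicit_step(y, dt=100.0, lam=1.0, source=0.0)
# Even with dt >> 1/lam the solution stays positive, bounded, and decays
# toward 0, whereas an explicit step of this size would blow up.
```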
Hard and soft sub-time-optimal controllers for a mechanical system with uncertain mass
DEFF Research Database (Denmark)
Kulczycki, P.; Wisniewski, Rafal; Kowalski, P.
2004-01-01
An essential limitation in using the classical optimal control has been its limited robustness to modeling inadequacies and perturbations. This paper presents conceptions of two practical control structures based on the time-optimal approach: hard and soft ones. The hard structure is defined...... by parameters selected in accordance with the rules of the statistical decision theory; however, the soft structure allows additionally to eliminate rapid changes in control values. The object is a basic mechanical system, with uncertain (also non-stationary) mass treated as a stochastic process....... The methodology proposed here is of a universal nature and may easily be applied with respect to other elements of uncertainty of time-optimal controlled mechanical systems....
Hard and soft Sub-Time-Optimal Controllers for a Mechanical System with Uncertain Mass
DEFF Research Database (Denmark)
Kulczycki, P.; Wisniewski, Rafal; Kowalski, P.
2005-01-01
An essential limitation in using the classical optimal control has been its limited robustness to modeling inadequacies and perturbations. This paper presents conceptions of two practical control structures based on the time-optimal approach: hard and soft ones. The hard structure is defined...... by parameters selected in accordance with the rules of the statistical decision theory; however, the soft structure allows additionally to eliminate rapid changes in control values. The object is a basic mechanical system, with uncertain (also non-stationary) mass treated as a stochastic process....... The methodology proposed here is of a universal nature and may easily be applied with respect to other elements of uncertainty of time-optimal controlled mechanical systems....
Energy Technology Data Exchange (ETDEWEB)
Wang, Hesheng; Lai, Yinping [Department of Automation, Shanghai Jiao Tong University, Shanghai (China); Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Department of Automation, Shanghai Jiao Tong University, Shanghai (China); Key Laboratory of System Control and Information Processing, Ministry of Education of China (China)
2016-12-15
In this paper, a new optimization model for time-optimal trajectory planning with operating-task limitations for the Tokamak inspecting manipulator is designed. The task of this manipulator is to inspect the components of the Tokamak; the inspecting velocity of the manipulator must be limited in the operating space in order to obtain clear pictures. With limits on joint velocity, acceleration and jerk, this optimization model can not only achieve the minimum working time along a specific path, but also ensure the imaging quality of the camera through the constraint on inspecting velocity. The upper bound of the scanning speed is not a constant but changes according to the observation distance of the camera in real time. The relation between scanning velocity and observation distance is estimated by curve fitting. Experiments have been carried out to verify the feasibility of the optimization model; moreover, the Laplacian image sharpness evaluation method is adopted to evaluate the quality of the images obtained by the proposed method.
International Nuclear Information System (INIS)
Wang, Hesheng; Lai, Yinping; Chen, Weidong
2016-01-01
In this paper, a new optimization model for time-optimal trajectory planning with operating-task limitations for the Tokamak inspecting manipulator is designed. The task of this manipulator is to inspect the components of the Tokamak; the inspecting velocity of the manipulator must be limited in the operating space in order to obtain clear pictures. With limits on joint velocity, acceleration and jerk, this optimization model can not only achieve the minimum working time along a specific path, but also ensure the imaging quality of the camera through the constraint on inspecting velocity. The upper bound of the scanning speed is not a constant but changes according to the observation distance of the camera in real time. The relation between scanning velocity and observation distance is estimated by curve fitting. Experiments have been carried out to verify the feasibility of the optimization model; moreover, the Laplacian image sharpness evaluation method is adopted to evaluate the quality of the images obtained by the proposed method.
New Bounds of Ostrowski–Grüss Type Inequality for (k + 1) Points on Time Scales
Directory of Open Access Journals (Sweden)
Eze R. Nwaeze
2017-11-01
Full Text Available The aim of this paper is to present three new bounds of the Ostrowski--Gr\"uss type inequality for points $x_0,x_1,x_2,\cdots,x_k$ on time scales. Our results generalize a result of Ng\^o and Liu, and extend results of Ujevi\'c to time scales with $(k+1)$ points. We apply our results to the continuous, discrete, and quantum calculus to obtain many new interesting inequalities. An example is also considered. The estimates obtained in this paper will be very useful in numerical integration, especially for the continuous case.
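For context, the classical one-point Ostrowski inequality that such $(k+1)$-point bounds generalize can be stated in the continuous case, for $f$ differentiable with bounded derivative (this is the standard textbook form, not a result from the paper):

```latex
\left| f(x) - \frac{1}{b-a}\int_a^b f(t)\,dt \right|
\le \left[ \frac{1}{4} + \frac{\bigl(x - \frac{a+b}{2}\bigr)^{2}}{(b-a)^{2}} \right]
(b-a)\, \sup_{t\in[a,b]} |f'(t)|, \qquad x \in [a,b].
```

The bound is smallest at the midpoint $x = \frac{a+b}{2}$, recovering the midpoint quadrature error estimate.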
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error introduced by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing the reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude, to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-05-01
Optimization of the Phase Advance Between RHIC Interaction Points
Tomas, Rogelio
2005-01-01
We consider the scenario of having two identical Interaction Points (IPs) in the Relativistic Heavy Ion Collider (RHIC). The strengths of beam-beam resonances strongly depend on the phase advance between these two IPs, and therefore certain phase advances could improve beam lifetime and luminosity. We compute the dynamic aperture as a function of the phase advance between these IPs to find the optimum settings. The beam-beam interaction is treated in the weak-strong approximation, and a complete non-linear model of the lattice is used. For the current RHIC proton working point (0.69, 0.685) the design lattice is found to have the optimum phase advance. However, this is not the case for other working points.
Directory of Open Access Journals (Sweden)
Elahe Fallah Mehdipour
2012-12-01
Optimal operation of multipurpose reservoirs is a complex and sometimes nonlinear problem in the field of multi-objective optimization. Evolutionary algorithms are optimization tools that search the decision space by simulating natural biological evolution and present a set of points as the optimum solutions of a problem. In this research, the application of multi-objective particle swarm optimization (MOPSO) to the optimal operation of the Bazoft reservoir is considered, with different objectives including generating hydropower energy, supplying downstream demands (drinking, industry and agriculture), recreation, and flood control. Solution sets of the MOPSO algorithm for pairwise combinations of objectives were first compared with compromise programming (CP) using different weighting and power coefficients; for all combinations of objectives, the MOPSO algorithm was more capable than CP of finding solutions with an appropriate distribution, and these solutions dominated the CP solutions. Then, the end points of the solution set from the MOPSO algorithm were compared with nonlinear programming (NLP) results. Results showed that the MOPSO algorithm, differing from the NLP results by 0.3 percent, is more capable of presenting optimum solutions at the end points of the solution set.
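The solution set a multi-objective optimizer such as MOPSO approximates is defined by the non-domination test. A small illustrative sketch (function names and objective values invented here), assuming both objectives are minimized:

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated points: the Pareto-optimal solution set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (energy deficit, flood risk) pairs for five operating policies.
objs = [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]
front = pareto_front(objs)   # -> [(1, 5), (2, 4), (3, 3)]
```

A decision maker (or a method like compromise programming) then picks one point from the front according to preference weights.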
Suarez, Hernan; Zhang, Yan R.
2015-05-01
New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, hardware implementations of adaptive pulse compression for real-time transceiver optimization are presented; they are based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
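For contrast with the adaptive scheme evaluated above, classical (non-adaptive) pulse compression is simply correlation of the received signal with the transmitted code. A minimal sketch, with a Barker-7 code chosen purely for illustration:

```python
import numpy as np

def pulse_compress(rx, code):
    """Classical pulse compression: correlate the received signal with the
    transmitted code (implemented as convolution with the reversed
    conjugate code). The adaptive version in the work above instead solves
    a covariance-weighted filter per range gate."""
    mf = np.conj(code[::-1])               # matched filter taps
    return np.convolve(rx, mf, mode="valid")

code = np.array([1, 1, 1, -1, -1, 1, -1], dtype=float)   # Barker-7
rx = np.concatenate([np.zeros(3), code, np.zeros(3)])    # one target at delay 3
out = pulse_compress(rx, code)
```

The output peaks with value 7 at the target delay, with sidelobes no larger than 1 in magnitude, which is the well-known Barker autocorrelation property.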
Design Optimization of Cyber-Physical Distributed Systems using IEEE Time-sensitive Networks (TSN)
DEFF Research Database (Denmark)
Pop, Paul; Lander Raagaard, Michael; Craciunas, Silviu S.
2016-01-01
In this paper we are interested in safety-critical real-time applications implemented on distributed architectures supporting the Time-Sensitive Networking (TSN) standard. The ongoing standardization of TSN is an IEEE effort to bring deterministic real-time capabilities into the IEEE 802.1 Ethernet standard, supporting safety-critical systems and guaranteed Quality-of-Service. TSN will support Time-Triggered (TT) communication based on schedule tables, Audio-Video-Bridging (AVB) flows with bounded end-to-end latency, as well as Best-Effort messages. We first present a survey of research related to the optimization of distributed cyber-physical systems using real-time Ethernet for communication. Then, we formulate two novel optimization problems related to the scheduling and routing of TT and AVB traffic in TSN. Thus, we consider that we know the topology of the network as well as the set of TT and AVB flows.
Optimal timing of coronary invasive strategy in non-ST-segment elevation acute coronary syndromes
DEFF Research Database (Denmark)
Navarese, Eliano P; Gurbel, Paul A; Andreotti, Felicita
2013-01-01
The optimal timing of coronary intervention in patients with non-ST-segment elevation acute coronary syndromes (NSTE-ACSs) is a matter of debate. Conflicting results among published studies partly relate to different risk profiles of the studied populations.
Optimized positioning of autonomous surgical lamps
Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel
2017-03-01
We consider the problem of automatically finding optimal positions of surgical lamps throughout the whole surgical procedure, where we assume that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such kinds of problems, but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures not only the surgical site but also the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the recording is available for use for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
Optimal time interval for induction of immunologic adaptive response
International Nuclear Information System (INIS)
Ju Guizhi; Song Chunhua; Liu Shuzheng
1994-01-01
The optimal time interval between prior dose (D1) and challenge dose (D2) for the induction of immunologic adaptive response was investigated. Kunming mice were exposed to 75 mGy X-rays at a dose rate of 12.5 mGy/min. 3, 6, 12, 24 or 60 h after the prior irradiation, the mice were challenged with a dose of 1.5 Gy at a dose rate of 0.33 Gy/min. 18 h after D2, the mice were sacrificed for examination of immunological parameters. The results showed that with an interval of 6 h between D1 and D2, the adaptive response of the reaction of splenocytes to LPS was induced, and with an interval of 12 h the adaptive responses of spontaneous incorporation of 3H-TdR into thymocytes and the reaction of splenocytes to Con A and LPS were induced with 75 mGy prior irradiation. The data suggested that the optimal time intervals between D1 and D2 for the induction of immunologic adaptive response were 6 h and 12 h with a D1 of 75 mGy and a D2 of 1.5 Gy. The mechanism of immunologic adaptation following low dose radiation is discussed.
Zakary, Omar; Rachik, Mostafa; Elmouki, Ilias
2017-08-01
First, we devise in this paper a multi-region discrete-time model that describes the spatio-temporal spread of an epidemic that starts in one region and enters regions connected to their neighbors by any kind of human movement. We suppose homogeneous Susceptible-Infected-Removed (SIR) populations, and we consider in our simulations a grid of colored cells, which represents the whole domain affected by the epidemic, while each cell can represent a sub-domain or region. Second, in order to minimize the number of infected individuals in one region, we propose an optimal control approach based on a travel-blocking vicinity strategy that aims to control only one cell by restricting movements of infected people coming from all neighboring cells. Thus, we show the influence of the optimal control approach on the controlled cell. We should also note that the cellular modeling approach we propose here can also describe the infection dynamics of regions that are not necessarily attached to one another, even if no empty space can be seen between cells. The theoretical method we follow for the characterization of the travel-blocking optimal controls is based on a discrete version of Pontryagin's maximum principle, while the numerical approach applied to the multi-point boundary-value problems we obtain is based on discrete progressive-regressive iterative schemes. We illustrate our modeling and control approaches by giving an example of 100 regions.
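The travel-blocking idea can be sketched in a toy two-region discrete-time SIR model: blocking simply zeroes the incoming-infected coupling of the protected cell. All coefficients and the mixing matrix below are invented for illustration, not taken from the paper:

```python
import numpy as np

def sir_step(S, I, R, beta, gamma, M):
    """One discrete-time step of a multi-region SIR model. M[i, j] is the
    fraction of region j's infected who are present in region i (an assumed
    coupling; the paper couples neighboring grid cells)."""
    I_eff = M @ I                          # infected effectively present per region
    new_inf = beta * S * I_eff / (S + I + R)
    return S - new_inf, I + new_inf - gamma * I, R + gamma * I

def simulate(M, steps=20):
    S = np.array([900.0, 1000.0])          # region 0 seeds the epidemic
    I = np.array([100.0, 0.0])
    R = np.zeros(2)
    for _ in range(steps):
        S, I, R = sir_step(S, I, R, beta=0.5, gamma=0.2, M=M)
    return I

M_open = np.array([[0.9, 0.1],
                   [0.1, 0.9]])
M_blocked = np.array([[0.9, 0.1],
                      [0.0, 1.0]])         # region 1 admits no infected travellers
I_open = simulate(M_open)
I_blocked = simulate(M_blocked)
```

With the incoming coupling blocked, region 1 never receives an infectious seed and stays uninfected, while the open-travel scenario infects it within a few steps.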
Goebel, Juliane; Nensa, Felix; Bomas, Bettina; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai
2016-12-01
Improved real-time cardiac magnetic resonance (CMR) sequences have recently been introduced, but so far only limited practical experience exists. This study aimed at image reconstruction optimization and clinical validation of a new highly accelerated real-time cine SPARSE-SENSE sequence. Left ventricular (LV) short-axis stacks of a real-time free-breathing SPARSE-SENSE sequence with high spatiotemporal resolution and of a standard segmented cine SSFP sequence were acquired at 1.5 T in 11 volunteers and 15 patients. To determine the optimal number of iterations, all volunteers' SPARSE-SENSE images were reconstructed using 10-200 iterations, and contrast ratios, image entropies, and reconstruction times were assessed. Subsequently, the patients' SPARSE-SENSE images were reconstructed with the clinically optimal number of iterations. LV volumetric values were evaluated and compared between both sequences. Sufficient image quality and acceptable reconstruction times were achieved when using 80 iterations. Bland-Altman plots and Passing-Bablok regression showed good agreement for all volumetric parameters. 80 iterations are recommended for iterative SPARSE-SENSE image reconstruction in clinical routine. Real-time cine SPARSE-SENSE yielded volumetric results comparable to the current standard SSFP sequence. Due to its intrinsically low image acquisition times, real-time cine SPARSE-SENSE imaging with iterative image reconstruction seems to be an attractive alternative for LV function analysis. • A highly accelerated real-time CMR sequence using SPARSE-SENSE was evaluated. • SPARSE-SENSE allows free breathing in real-time cardiac cine imaging. • For clinically optimal SPARSE-SENSE image reconstruction, 80 iterations are recommended. • Real-time SPARSE-SENSE imaging yielded volumetric results comparable to the reference SSFP sequence. • The fast SPARSE-SENSE sequence is an attractive alternative to standard SSFP sequences.
Optimal Consumption and Investment under Time-Varying Relative Risk Aversion
DEFF Research Database (Denmark)
Steffensen, Mogens
2011-01-01
We consider the continuous-time consumption-investment problem originally formalized and solved by Merton in the case of constant relative risk aversion. We present a complete solution for the case where relative risk aversion with respect to consumption varies with time, having in mind an investor with age-dependent risk aversion. This provides a new motivation for life-cycle investment rules. We study the optimal consumption and investment rules, in particular in the case where the relative risk aversion with respect to consumption is increasing with age.
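For the constant-relative-risk-aversion baseline that this paper generalizes, Merton's rule gives the risky-asset fraction in closed form: (mu - r) / (gamma * sigma^2). A tiny sketch (market parameters invented; the time-varying case solved in the paper is not reproduced here):

```python
def merton_weight(mu, r, sigma, gamma):
    """Merton's constant-relative-risk-aversion portfolio rule: the optimal
    fraction of wealth held in the risky asset, given expected return mu,
    risk-free rate r, volatility sigma and relative risk aversion gamma."""
    return (mu - r) / (gamma * sigma ** 2)

# A stylized age-dependent schedule: risk aversion rising from 2 to 6
# over a career shrinks the risky allocation, a life-cycle pattern.
weights = [merton_weight(0.07, 0.02, 0.2, g) for g in (2, 4, 6)]
```

With these numbers the allocation falls from 62.5% of wealth at gamma = 2 to about 21% at gamma = 6, illustrating why age-increasing risk aversion motivates life-cycle investment rules.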
Directory of Open Access Journals (Sweden)
Koichi Nakade
2017-01-01
In a manufacturing and inventory system, information on production and order lead time helps customers decide whether to accept finished products, given their own impatience with waiting time. Savaşaneril et al. (2010) discussed the optimal dynamic lead time quotation policy, and its properties, in a one-stage production and inventory system with a base stock policy, for maximizing the system's profit. In this system, each arriving customer decides whether to enter the system based on the quoted lead time. On the other hand, the customer's utility may be small under the optimal quoted-lead-time policy, because the actual lead time may be longer than the quoted lead time. We use a utility function with respect to the benefit of receiving products and the waiting time, and propose several kinds of heuristic lead time quotation policies. These are compared with the optimal policies with respect to both profits and customers' utilities. Numerical examples show that some heuristic policies yield better expected customer utilities than the optimal quoted-lead-time policy that maximizes the system's profit.
EBT time-dependent point model code: description and user's guide
International Nuclear Information System (INIS)
Roberts, J.F.; Uckan, N.A.
1977-07-01
A D-T time-dependent point model has been developed to assess the energy balance in an EBT reactor plasma. Flexibility is retained in the model to permit more recent data to be incorporated as they become available from the theoretical and experimental studies. This report includes the physics models involved, the program logic, and a description of the variables and routines used. All the files necessary for execution are listed, and the code, including a post-execution plotting routine, is discussed
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
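The forward block GS sweep described above is easy to state concretely. A sketch under assumed names, demonstrated on a small diagonally dominant block tridiagonal system (where, unlike the optimal control case reported above, the plain iteration does converge):

```python
import numpy as np

def block_gauss_seidel(D, L, U, b, sweeps=50):
    """Forward block Gauss-Seidel for a block tridiagonal system with
    invertible diagonal blocks D[k], sub-diagonal blocks L[k] and
    super-diagonal blocks U[k]; b[k] are the right-hand-side blocks."""
    n = len(D)
    x = [np.zeros(D[0].shape[0]) for _ in range(n)]
    for _ in range(sweeps):
        for k in range(n):
            r = b[k].copy()
            if k > 0:
                r -= L[k - 1] @ x[k - 1]   # uses the freshly updated block
            if k < n - 1:
                r -= U[k] @ x[k + 1]       # uses the previous sweep's block
            x[k] = np.linalg.solve(D[k], r)  # invert only the diagonal block
    return x

# Example: 4 blocks of size 2, strongly diagonally dominant.
Dk = [4.0 * np.eye(2) for _ in range(4)]
Lk = [np.eye(2) for _ in range(3)]
Uk = [np.eye(2) for _ in range(3)]
bk = [np.full(2, float(k + 1)) for k in range(4)]
xk = block_gauss_seidel(Dk, Lk, Uk, bk)
```

In the preconditioning use mentioned in the abstract, one such sweep (not run to convergence) would be applied inside a Krylov iteration rather than on its own.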
Hajipour, Mojtaba; Jajarmi, Amin
2018-02-01
Using Pontryagin's maximum principle for a time-delayed optimal control problem results in a system of coupled two-point boundary-value problems (BVPs) involving both time-advance and time-delay arguments. The analytical solution of this advance-delay two-point BVP is extremely difficult, if not impossible. This paper provides a discrete general form of the numerical solution for the derived advance-delay system by applying a finite difference θ-method. This method is also implemented for infinite-time-horizon time-delayed optimal control problems by using a piecewise version of the θ-method. A matrix formulation and the error analysis of the suggested technique are provided. The new scheme is accurate, fast and very effective for the optimal control of linear and nonlinear time-delay systems. Various types of finite- and infinite-time-horizon problems are included to demonstrate the accuracy, validity and applicability of the new technique.
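The basic θ-method underlying such schemes (without the advance-delay coupling handled in the paper) reads y_{n+1} = y_n + h[(1-θ)f(t_n, y_n) + θ f(t_{n+1}, y_{n+1})]. For the linear test problem y' = λy the implicit stage has a closed form; a minimal sketch:

```python
def theta_step_linear(y, h, lam, theta):
    """One step of the theta-method for y' = lam * y. theta = 0 is explicit
    Euler, theta = 1 implicit Euler, theta = 0.5 Crank-Nicolson; for a
    linear problem the implicit stage solves in closed form."""
    return y * (1 + (1 - theta) * h * lam) / (1 - theta * h * lam)

def integrate(y0, h, lam, theta, steps):
    y = y0
    for _ in range(steps):
        y = theta_step_linear(y, h, lam, theta)
    return y
```

Crank-Nicolson (θ = 0.5) is second-order accurate, and the implicit end (θ ≥ 0.5) remains stable at step sizes where explicit Euler blows up, which is why implicit variants are attractive for stiff boundary-value systems.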
Optimized quantum sensing with a single electron spin using real-time adaptive measurements
Bonato, C.; Blok, M. S.; Dinani, H. T.; Berry, D. W.; Markham, M. L.; Twitchen, D. J.; Hanson, R.
2016-03-01
Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz^-1/2 over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.
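The adaptive idea, updating a Bayesian estimate of the frequency after each Ramsey shot and choosing the next readout phase from it, can be sketched on a grid. Everything below (the grid, sensing-time schedule and phase rule) is an invented toy example, not the protocol of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
omegas = np.linspace(0.0, 1.0, 1001)              # frequency grid (arbitrary units)
post = np.full_like(omegas, 1.0 / omegas.size)    # flat prior
omega_true = 0.35

def p0(omega, tau, phi):
    """Probability of outcome 0 for a Ramsey experiment of duration tau
    with controlled readout phase phi."""
    return 0.5 * (1.0 + np.cos(omega * tau + phi))

for step in range(100):
    tau = 1.0 + (step % 5)                        # cycle through sensing times
    est = float(np.sum(omegas * post))            # current posterior mean
    phi = np.pi / 2 - est * tau                   # adaptive: bias readout to the
                                                  # steepest slope of the fringe
    outcome = 0 if rng.random() < p0(omega_true, tau, phi) else 1
    like = p0(omegas, tau, phi) if outcome == 0 else 1.0 - p0(omegas, tau, phi)
    post = post * like                            # Bayes update on the grid
    post /= post.sum()

omega_map = float(omegas[np.argmax(post)])
```

After the measurement sequence the posterior concentrates near the true frequency; the benefit of adapting phi (and, in the paper, the readout basis) is that each shot is taken where it is most informative given the history.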
Integrated solar energy system optimization
Young, S. K.
1982-11-01
The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for the optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar-powered greenhouse.
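The hourly energy balance at an interconnection point can be illustrated with a toy storage-dispatch loop. This is an invented sketch (function and parameter names assumed), not SYSOPT's actual model:

```python
def hourly_balance(gen, load, cap, soc0=0.0, eff=1.0):
    """Toy hourly energy balance for a hybrid plant: surplus generation
    charges storage (capacity cap, one-way charge efficiency eff), deficits
    discharge it, and whatever remains is grid export (+) or import (-)."""
    soc, grid = soc0, []
    for g, d in zip(gen, load):
        net = g - d
        if net >= 0:
            charge = min(net, (cap - soc) / eff)   # limited by free capacity
            soc += charge * eff
            grid.append(net - charge)              # leftover surplus is exported
        else:
            discharge = min(-net, soc)             # limited by stored energy
            soc -= discharge
            grid.append(net + discharge)           # uncovered deficit is imported
    return grid, soc

# Two hours: a windy hour overfills storage and exports 1 unit,
# then storage fully covers the calm hour's deficit.
gen, load = [5.0, 0.0], [1.0, 2.0]
grid, soc = hourly_balance(gen, load, cap=3.0)
# grid == [1.0, 0.0], soc == 1.0
```

An optimizer like the one described would wrap such a balance in a sizing loop, varying cap and the generator ratings to minimize cost over a full meteorological year.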