WorldWideScience

Sample records for optimizing application mapping

  1. Optimization with Multivalued Mappings Theory, Applications and Algorithms

    CERN Document Server

    Dempe, Stephan

    2006-01-01

    Focusing on optimization problems involving multivalued mappings in constraints or as the objective function, this book includes the formulation of optimality conditions using different kinds of generalized derivatives for set-valued mappings, among other related topics.

  2. Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Aleman, Dionne M [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, ON M5S 3G8 (Canada)]; Glaser, Daniel [Division of Optimization and Systems Theory, Department of Mathematics, Royal Institute of Technology, Stockholm (Sweden)]; Romeijn, H Edwin [Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109-2117 (United States)]; Dempsey, James F, E-mail: aleman@mie.utoronto.c, E-mail: romeijn@umich.ed, E-mail: jfdempsey@viewray.co [ViewRay, Inc., 2 Thermo Fisher Way, Village of Oakwood, OH 44146 (United States)]

    2010-09-21

    One of the most widely studied problems of the intensity-modulated radiation therapy (IMRT) treatment planning problem is the fluence map optimization (FMO) problem, the problem of determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.
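    The convex structure of the FMO problem can be sketched in miniature: minimize ‖Dx − d‖² over nonnegative beamlet fluences x, where D is a dose-deposition matrix and d a prescribed dose. Everything below is an illustrative assumption (random matrix, projected-gradient loop) and not the paper's interior point algorithm; it only shows the shape of the convex formulation.

```python
import numpy as np

# Toy fluence map optimization: min ||D x - d||^2  subject to  x >= 0.
# D is a hypothetical dose-deposition matrix (voxels x beamlets) and
# d a prescribed dose; projected gradient descent stands in for the
# interior point solver used in the paper.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(50, 10))
x_true = rng.uniform(0.0, 2.0, size=10)
d = D @ x_true                             # attainable prescription

x = np.zeros(10)
step = 1.0 / np.linalg.norm(D.T @ D, 2)    # 1/L for this quadratic
for _ in range(3000):
    grad = D.T @ (D @ x - d)
    x = np.maximum(x - step * grad, 0.0)   # project onto x >= 0

residual = np.linalg.norm(D @ x - d)
```

Because the objective is convex and the constraint set is a simple cone, any such first-order or interior point scheme converges to a global optimum, which is the guarantee the abstract emphasizes.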

  3. Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT

    International Nuclear Information System (INIS)

    Aleman, Dionne M; Glaser, Daniel; Romeijn, H Edwin; Dempsey, James F

    2010-01-01

    One of the most widely studied problems of the intensity-modulated radiation therapy (IMRT) treatment planning problem is the fluence map optimization (FMO) problem, the problem of determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.

  4. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL]; D'Azevedo, Eduardo [ORNL]; Philip, Bobby [ORNL]; Worley, Patrick H [ORNL]

    2016-01-01

    On large supercomputers, the job scheduling system may assign a non-contiguous node allocation to user applications depending on available resources. For parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts the communication performance of the application. To mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes both the layout of the allocated nodes and the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor-join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
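    One of the reordering methods named above, spectral bisection, can be sketched from a communication matrix: order ranks by the Fiedler vector of the communication graph so that heavily communicating ranks land next to each other. The 4-rank traffic matrix below is a made-up example; in practice it would come from profiling (e.g. with mpiP, as in the paper).

```python
import numpy as np

# Spectral reordering of MPI ranks from a (symmetric) communication
# volume matrix W.  Ranks exchanging the most traffic end up adjacent
# in the new order, so they can be placed on nearby nodes.
W = np.array([[0, 9, 1, 1],
              [9, 0, 1, 1],
              [1, 1, 0, 8],
              [1, 1, 8, 0]], dtype=float)

L = np.diag(W.sum(axis=1)) - W      # weighted graph Laplacian
vals, vecs = np.linalg.eigh(L)      # eigh returns ascending eigenvalues
fiedler = vecs[:, 1]                # eigenvector of 2nd-smallest eigenvalue
order = np.argsort(fiedler)         # ranks sorted by Fiedler coordinate
```

For this W the two heavy pairs {0, 1} and {2, 3} fall on opposite sides of the Fiedler vector's sign change, so the reordering groups each pair together.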

  5. Methodology optimizing SAGE library tag-to-gene mapping: application to Leishmania

    Directory of Open Access Journals (Sweden)

    Smandi Sondos

    2012-01-01

    Abstract Background: Leishmaniases are widespread parasitic diseases with an urgent need for more active and less toxic drugs and for effective vaccines. Understanding the biology of the parasite, especially in the context of host-parasite interaction, is a crucial step towards such improvements in therapy and control. Several experimental approaches, including SAGE (serial analysis of gene expression), have been developed to investigate the organisation and plasticity of the parasite transcriptome. Usual SAGE tag-to-gene mapping techniques are inadequate because almost all tags are normally located in the 3'-UTR, outside the CDS, whereas most information available for Leishmania transcripts is restricted to CDS predictions. The aim of this work is to optimize a SAGE library tag-to-gene mapping technique and to show how this development improves the understanding of the Leishmania transcriptome. Findings: The in silico method implemented herein is based on mapping the tags to the Leishmania genome using BLAST and then mapping the tags to their genes using a data-driven probability distribution. This optimized tag-to-gene mapping improved knowledge of Leishmania genome structure and transcription. It allowed analysis of the expression of a maximal number of Leishmania genes, delimitation of the 3'-UTR of 478 genes, and identification of biological processes that are differentially modulated during the promastigote-to-amastigote differentiation. Conclusion: The developed method optimizes the assignment of SAGE tags in trypanosomatid genomes, as well as in any genome having polycistronic transcription and small intergenic regions.

  6. K-maps: a vehicle to an optimal solution in combinational logic ...

    African Journals Online (AJOL)

    K-maps: a vehicle to an optimal solution in combinational logic design problems using digital multiplexers. Abstract: Application of Karnaugh maps (K-maps) for the design of combinational logic circuits and sequential logic circuits is a subject that has been widely discussed. However, the use of K-maps in the design of ...

  7. Optimizing Travel Time to Outpatient Interventional Radiology Procedures in a Multi-Site Hospital System Using a Google Maps Application.

    Science.gov (United States)

    Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P

    2018-02-20

    The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city under real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush hour or rush hour times. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03) and 28.80 min during rush hour. Routing by ETT can thus optimize travel time when more than one location providing IR procedures is available within the same hospital system.
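    The ETT-versus-ETD comparison at the heart of the study reduces to an argmin over candidate sites under two different metrics. The site names and lookup values below are hypothetical stand-ins; a real deployment would query a maps API for live traffic rather than use static tables.

```python
# Site selection by estimated travel time (ETT) versus estimated
# travel distance (ETD) for one hypothetical patient address.
sites = ["Manhattan", "Westchester", "NewJersey"]
ett_minutes = {"Manhattan": 55.0, "Westchester": 40.0, "NewJersey": 48.0}
etd_miles   = {"Manhattan": 18.0, "Westchester": 25.0, "NewJersey": 20.0}

best_by_time = min(sites, key=ett_minutes.get)   # fastest under traffic
best_by_dist = min(sites, key=etd_miles.get)     # geographically nearest
discordant = best_by_time != best_by_dist        # the paper's key comparison
```

In this toy instance the nearest site is not the fastest one, which is exactly the discordance the study found for 10 of 33 cities.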

  8. Customised City Maps in Mobile Applications for Senior Citizens.

    Science.gov (United States)

    Reins, Frank; Berker, Frank; Heck, Helmut

    2017-01-01

    Map services should be used in mobile applications for senior citizens. But do the commonly used map services meet the needs of elderly people? As an example, the paper examines the contrast ratios of common maps in comparison with an optimized, custom-rendered map.

  9. Application of mapping crossover genetic algorithm in nuclear power equipment optimization design

    International Nuclear Information System (INIS)

    Li Guijiang; Yan Changqi; Wang Jianjun; Liu Chengyang

    2013-01-01

    Genetic algorithm (GA) has been widely applied in nuclear engineering. An improved method, named the mapping crossover genetic algorithm (MCGA), was developed aiming at improving the shortcomings of traditional genetic algorithm (TGA). The optimal results of benchmark problems show that MCGA has better optimizing performance than TGA. MCGA was applied to the reactor coolant pump optimization design. (authors)

  10. Review of the Space Mapping Approach to Engineering Optimization and Modeling

    DEFF Research Database (Denmark)

    Bakr, M. H.; Bandler, J. W.; Madsen, Kaj

    2000-01-01

    We review the Space Mapping (SM) concept and its applications in engineering optimization and modeling. The aim of SM is to avoid computationally expensive calculations encountered in simulating an engineering system. The existence of less accurate but fast physically-based models is exploited. … Space Mapping-based Modeling (SMM). These include Space Derivative Mapping (SDM), Generalized Space Mapping (GSM) and Space Mapping-based Neuromodeling (SMN). Finally, we address open points for research and future development.

  11. Texture mapping via optimal mass transport.

    Science.gov (United States)

    Dominitz, Ayelet; Tannenbaum, Allen

    2010-01-01

    In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the "earth-mover's metric"). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we first begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.
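    In one dimension the optimal mass transport map for quadratic cost has a closed form: monotone rearrangement, i.e. sort both distributions and match ranks. The sketch below is a toy 1-D analogue (assumed samples, not the paper's surface setting) of the area-correcting transport step described above.

```python
import numpy as np

# 1-D optimal mass transport by monotone rearrangement: each source
# sample is sent to the target value of the same rank.  This is the
# quadratic-cost optimal map in 1-D and a toy analogue of the
# area-correcting gradient flow used on closed surfaces.
rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, 1000)
target = rng.uniform(-1.0, 1.0, 1000)

ranks = np.argsort(np.argsort(source))   # rank of each source sample
mapped = np.sort(target)[ranks]          # monotone (hence optimal) map
```

The map is order-preserving by construction, which is the 1-D counterpart of the angle/area-controlled mappings the paper computes on surfaces.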

  12. Data Assimilation with Optimal Maps

    Science.gov (United States)

    El Moselhy, T.; Marzouk, Y.

    2012-12-01

    Tarek El Moselhy and Youssef Marzouk (Massachusetts Institute of Technology). We present a new approach to Bayesian inference that entirely avoids Markov chain simulation and sequential importance resampling, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. The map is written as a multivariate polynomial expansion and computed efficiently through the solution of a stochastic optimization problem. While our previous work [1] focused on static Bayesian inference problems, we now extend the map-based approach to sequential data assimilation, i.e., nonlinear filtering and smoothing. One scheme involves pushing forward a fixed reference measure to each filtered state distribution, while an alternative scheme computes maps that push forward the filtering distribution from one stage to the other. We compare the performance of these schemes and extend the former to problems of smoothing, using a map implementation of the forward-backward smoothing formula. Advantages of a map-based representation of the filtering and smoothing distributions include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent uniformly-weighted posterior samples without additional evaluations of the dynamical model. Perhaps the main advantage, however, is that the map approach inherently avoids issues of sample impoverishment, since it explicitly represents the posterior as the pushforward of a reference measure, rather than with a particular set of samples. The computational complexity of our algorithm is comparable to state-of-the-art particle filters. Moreover, the accuracy of the approach is controlled via the convergence criterion of the underlying optimization problem. We demonstrate the efficiency and accuracy of the map approach via data assimilation in

  13. Optimizing the MapReduce Framework on Intel Xeon Phi Coprocessor

    OpenAIRE

    Lu, Mian; Zhang, Lei; Huynh, Huynh Phung; Ong, Zhongliang; Liang, Yun; He, Bingsheng; Goh, Rick Siow Mong; Huynh, Richard

    2013-01-01

    With its ease of programming, flexibility, and efficiency, MapReduce has become one of the most popular frameworks for building big-data applications. MapReduce was originally designed for distributed computing, and has been extended to various architectures, e.g., multi-core CPUs, GPUs and FPGAs. In this work, we focus on optimizing the MapReduce framework on the Xeon Phi, the latest product released by Intel based on the Many Integrated Core architecture. To the best of our knowledge...
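    The MapReduce programming model the record refers to has three phases: map emits key/value pairs, a shuffle groups them by key, and reduce folds each group. The sketch below is a sequential toy version (frameworks such as the Phi-optimized one parallelize each phase); the word-count workload is the standard illustrative example, not taken from the paper.

```python
from collections import defaultdict

# Minimal sequential MapReduce skeleton: map -> shuffle -> reduce.
def map_phase(records, mapper):
    for rec in records:
        yield from mapper(rec)          # mapper emits (key, value) pairs

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)       # group values by key
    return groups

def reduce_phase(groups, reducer):
    return {key: reducer(key, vals) for key, vals in groups.items()}

# Classic word count as the workload
docs = ["map reduce map", "reduce all the things"]
mapper = lambda line: ((word, 1) for word in line.split())
reducer = lambda key, vals: sum(vals)
counts = reduce_phase(shuffle(map_phase(docs, mapper)), reducer)
```

Because map calls and per-key reduce calls are independent, both phases parallelize naturally, which is what makes the model attractive on many-core hardware like the Xeon Phi.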

  14. Optimization of Antennas using a Hybrid Genetic-Algorithm Space-Mapping Algorithm

    DEFF Research Database (Denmark)

    Pantoja, M.F.; Bretones, A.R.; Meincke, Peter

    2006-01-01

    A hybrid global-local optimization technique for the design of antennas is presented. It consists of the subsequent application of a Genetic Algorithm (GA) that employs coarse models in the simulations and a space mapping (SM) that refines the solution found in the previous stage. The technique ...

  15. Space mapping optimization algorithms for engineering design

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    A simple, efficient optimization algorithm based on space mapping (SM) is presented. It utilizes input SM to reduce the misalignment between the coarse and fine models of the optimized object over a region of interest, and output space mapping (OSM) to ensure matching of response and first-order derivatives … to a benchmark problem. In comparison with SMIS, the models presented are simple and have a small number of parameters that need to be extracted. The new algorithm is applied to the optimization of a coupled-line band-pass filter.
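    The input space mapping loop described here (and in the other space-mapping records above) can be sketched on a pair of toy models. Everything below is assumed for illustration: 1-D quadratic "fine" and "coarse" models and grid search in place of a real optimizer; real SM targets expensive EM simulations.

```python
import numpy as np

# Toy input space mapping: optimize the cheap coarse model, extract a
# shift p aligning coarse with fine, then re-optimize the shifted
# surrogate coarse(x + p).
fine   = lambda x: (x - 2.0) ** 2      # stands in for an expensive model
coarse = lambda x: (x - 1.8) ** 2      # cheap but misaligned model

xs = np.linspace(-5.0, 5.0, 10001)     # design grid
ps = np.linspace(-1.0, 1.0, 20001)     # candidate input shifts
x = xs[np.argmin(coarse(xs))]          # start at the coarse optimum

for _ in range(5):
    # multipoint parameter extraction: align coarse with fine at the
    # current iterate and two neighbouring points
    pts = np.array([x - 0.5, x, x + 0.5])
    err = [np.sum((coarse(pts + p) - fine(pts)) ** 2) for p in ps]
    p = ps[np.argmin(err)]
    # surrogate optimization: minimize the input-mapped coarse model
    x = xs[np.argmin(coarse(xs + p))]
```

For these quadratics the extracted shift is p = -0.2, so the surrogate optimum coincides with the fine-model optimum x = 2 after a single correction, while the fine model is only ever evaluated at a handful of points per iteration.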

  16. A Hybrid Genetic-Algorithm Space-Mapping Tool for the Optimization of Antennas

    DEFF Research Database (Denmark)

    Pantoja, Mario Fernández; Meincke, Peter; Bretones, Amelia Rubio

    2007-01-01

    A hybrid global-local optimization technique for the design of antennas is presented. It consists of the subsequent application of a genetic algorithm (GA) that employs coarse models in the simulations and a space mapping (SM) that refines the solution found in the previous stage. The technique ...

  17. Glowworm swarm optimization theory, algorithms, and applications

    CERN Document Server

    Kaipa, Krishnanand N

    2017-01-01

    This book provides a comprehensive account of the glowworm swarm optimization (GSO) algorithm, including details of the underlying ideas, theoretical foundations, algorithm development, various applications, and MATLAB programs for the basic GSO algorithm. It also discusses several research problems at different levels of sophistication that can be attempted by interested researchers. The generality of the GSO algorithm is evident in its application to diverse problems ranging from optimization to robotics. Examples include computation of multiple optima, annual crop planning, cooperative exploration, distributed search, multiple source localization, contaminant boundary mapping, wireless sensor networks, clustering, knapsack, numerical integration, solving fixed point equations, solving systems of nonlinear equations, and engineering design optimization. The book is a valuable resource for researchers as well as graduate and undergraduate students in the area of swarm intelligence and computational intellige...

  18. An optimization method of VON mapping for energy efficiency and routing in elastic optical networks

    Science.gov (United States)

    Liu, Huanlin; Xiong, Cuilian; Chen, Yong; Li, Changping; Chen, Derun

    2018-03-01

    To improve resource utilization efficiency, network virtualization in elastic optical networks has been developed by sharing the same physical network among different users and applications. In the process of virtual node mapping, longer paths between physical nodes consume more spectrum resources and energy. To address this problem, we propose a virtual optical network mapping algorithm called the genetic multi-objective optimized virtual optical network mapping algorithm (GM-OVONM-AL), which jointly optimizes energy consumption and spectrum resource consumption during virtual optical network mapping. Firstly, a vector function is proposed to balance energy consumption and spectrum resources by optimizing population classification and crowding-distance sorting. Then, an adaptive crossover operator based on hierarchical comparison is proposed to improve search ability and convergence speed. In addition, the principle of survival of the fittest is introduced to select better individuals according to domination rank. Compared with a spectrum-consecutiveness opaque virtual optical network mapping algorithm and a baseline opaque virtual optical network mapping algorithm, simulation results show that the proposed GM-OVONM-AL achieves the lowest bandwidth blocking probability and reduces energy consumption.
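    Ranking candidates by domination, as this algorithm does for the energy/spectrum trade-off, starts from a Pareto-dominance test. The sketch below computes the first nondominated front for a few made-up two-objective cost vectors (both objectives minimized); it illustrates the selection principle, not the paper's full algorithm.

```python
# Pareto dominance and the first nondominated front for two minimized
# objectives, e.g. (energy, spectrum) costs of candidate mappings.
def dominates(a, b):
    """a dominates b: no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

points = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
front = nondominated_front(points)
```

Individuals on earlier fronts survive selection; crowding distance is then used, as in the abstract, to break ties within a front in favour of less crowded regions.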

  19. Space Mapping With Adaptive Response Correction for Microwave Design Optimization

    DEFF Research Database (Denmark)

    Koziel, S.; Bandler, J.W.; Madsen, Kaj

    2009-01-01

    at which the term was calculated, as in the surrogate model optimization process. In this paper, an adaptive response correction scheme is presented to work in conjunction with space-mapping optimization algorithms. This technique is designed to alleviate the difficulties of the standard output space......Output space mapping is a technique introduced to enhance the robustness of the space-mapping optimization process in case the space-mapped coarse model cannot provide sufficient matching with the fine model. The technique often works very well; however, in some cases it fails. Especially...

  20. Set-valued optimization an introduction with applications

    CERN Document Server

    Khan, Akhtar A; Zalinescu, Constantin

    2014-01-01

    Set-valued optimization is a vibrant and expanding branch of mathematics that deals with optimization problems where the objective map and/or the constraint maps are set-valued maps acting between certain spaces. Since set-valued maps subsume single-valued maps, set-valued optimization provides an important extension and unification of the scalar as well as the vector optimization problems. Therefore this relatively new discipline has justifiably attracted a great deal of attention in recent years. This book presents, in a unified framework, basic properties on ordering relations, solution c...

  1. Optimization of friction stir welding using space mapping and manifold mapping-an initial study of thermal aspects

    DEFF Research Database (Denmark)

    Larsen, Anders Astrup; Bendsøe, Martin P.; Hattel, Jesper Henri

    2009-01-01

    The aim of this paper is to optimize a thermal model of a friction stir welding process by finding optimal welding parameters. The optimization is performed using space mapping and manifold mapping techniques in which a coarse model is used along with the fine model to be optimized. Different...

  2. MaMR: High-performance MapReduce programming model for material cloud applications

    Science.gov (United States)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With increasing data sizes in materials science, existing programming models no longer satisfy application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data sets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently, based on hybrid shared-memory BSP. An optimized data-sharing strategy to supply shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework achieve effective performance improvements over previous work.

  3. Network Partitioning Domain Knowledge Multiobjective Application Mapping for Large-Scale Network-on-Chip

    Directory of Open Access Journals (Sweden)

    Yin Zhen Tei

    2014-01-01

    This paper proposes a multiobjective application mapping technique targeted at large-scale network-on-chip (NoC). As the number of intellectual property (IP) cores in multiprocessor system-on-chip (MPSoC) increases, NoC application mapping to find the optimum core-to-topology mapping becomes more challenging. Moreover, the conflicting cost and performance trade-off makes multiobjective application mapping techniques even more complex. This paper proposes an application mapping technique that incorporates domain knowledge into a genetic algorithm (GA). The initial population of the GA is initialized with network partitioning (NP), while the crossover operator is guided by knowledge of communication demands. NP reduces the complexity of large-scale application mapping and provides the GA with a promising mapping search space. The proposed genetic operator is compared with state-of-the-art genetic operators in terms of solution quality. In this work, multiobjective optimization of energy and thermal balance is considered. Through simulation, knowledge-based initial mapping shows significant improvement in the Pareto front compared to the widely used random initial mapping. The proposed knowledge-based crossover also yields a better Pareto front than state-of-the-art knowledge-based crossover.
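    The fitness any such NoC mapping GA evaluates is, at its core, communication volume weighted by hop distance on the mesh. The sketch below evaluates that cost for a tiny assumed 2x2 mesh and traffic table, with exhaustive search standing in for the GA on this toy instance.

```python
import itertools

# Communication cost of a core-to-tile mapping on a 2-D mesh NoC:
# sum of traffic volume times Manhattan hop distance.  Mesh size and
# traffic matrix are invented for illustration.
MESH = 2                                       # 2x2 mesh, tiles 0..3

def hops(t1, t2):
    (x1, y1), (x2, y2) = divmod(t1, MESH), divmod(t2, MESH)
    return abs(x1 - x2) + abs(y1 - y2)

traffic = {(0, 1): 10, (2, 3): 10, (0, 2): 1}  # (core, core) -> volume

def cost(mapping):                             # mapping[core] = tile
    return sum(v * hops(mapping[a], mapping[b])
               for (a, b), v in traffic.items())

# Exhaustive search stands in for the GA on this 4-core instance.
best = min(itertools.permutations(range(4)), key=cost)
```

On realistic problem sizes the permutation space explodes factorially, which is why the paper seeds and guides a GA with partitioning knowledge instead of searching blindly.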

  4. Empty tracks optimization based on Z-Map model

    Science.gov (United States)

    Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao

    2017-12-01

    For parts with many features, there are many empty (non-cutting) tool tracks during machining. If these tracks are not optimized, machining efficiency is seriously affected. In this paper, the characteristics of the empty tracks are studied in detail. Combining this with an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and Z-Map model simulation is used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is transformed into a TSP with sequential constraints, which is then solved with a genetic algorithm. This optimization method can handle not only simple structural parts but also complex ones, effectively planning the empty tracks and greatly improving processing efficiency.
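    The precedence-constrained TSP described above can be illustrated with a greedy nearest-neighbour heuristic: at each step, visit the closest segment whose prerequisites are already done. The points and the single precedence constraint below are invented, and the greedy rule stands in for the paper's genetic algorithm.

```python
import math

# Greedy nearest-neighbour tour under precedence constraints, a toy
# stand-in for the GA applied to the sequential TSP formed by empty
# (non-cutting) tool moves between machining segments.
points = {0: (0, 0), 1: (5, 0), 2: (1, 0), 3: (5, 1)}
must_precede = {3: {1}}     # segment 1 must be machined before 3

def dist(a, b):
    return math.dist(points[a], points[b])

tour, remaining = [0], {1, 2, 3}
while remaining:
    # only segments whose prerequisites are all in the tour are eligible
    ready = [p for p in remaining
             if must_precede.get(p, set()) <= set(tour)]
    nxt = min(ready, key=lambda p: dist(tour[-1], p))
    tour.append(nxt)
    remaining.remove(nxt)
```

Greedy construction respects the order constraints by filtering the candidate set first; a GA improves on it by also searching orderings the myopic rule never considers.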

  5. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    Science.gov (United States)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the logistic map, to generate the pseudo-random numbers mapped to the design variables for global optimization. Many existing studies have indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: both the probability distribution property and the search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are characterized by the probability density function (PDF) and the Lyapunov exponent, respectively. The computational performance of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents is compared, where BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COAs. To achieve high efficiency, it is recommended to adopt a chaotic map that generates sequences with uniform or nearly uniform probability distribution and a large Lyapunov exponent.
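    The Lyapunov exponent used above as the search-speed measure can be estimated directly from an orbit. For the logistic map x → rx(1 − x) at r = 4, the exact value is ln 2; the sketch below averages ln|f′(x)| along a long orbit (seed and iteration counts are arbitrary choices).

```python
import math

# Lyapunov exponent of the logistic map x -> r x (1 - x) at r = 4,
# estimated as the orbit average of ln|f'(x)| = ln|r(1 - 2x)|.
# The exact value at r = 4 is ln 2, the signature of full chaos.
r, x = 4.0, 0.3
for _ in range(1000):              # discard the transient
    x = r * x * (1.0 - x)

n, acc = 50000, 0.0
for _ in range(n):
    x = r * x * (1.0 - x)
    acc += math.log(abs(r * (1.0 - 2.0 * x)))
lyapunov = acc / n
```

A larger positive exponent means nearby chaotic iterates separate faster, which is exactly the property the paper links to faster global exploration by a COA.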

  6. Application of Contraction Mappings to the Control of Nonlinear Systems. Ph.D. Thesis

    Science.gov (United States)

    Killingsworth, W. R., Jr.

    1972-01-01

    The theoretical and applied aspects of successive approximation techniques are considered for the determination of controls for nonlinear dynamical systems. Particular emphasis is placed upon the methods of contraction mappings and modified contraction mappings. It is shown that application of the Pontryagin principle to the optimal nonlinear regulator problem results in necessary conditions for optimality in the form of a two point boundary value problem (TPBVP). The TPBVP is represented by an operator equation and functional analytic results on the iterative solution of operator equations are applied. The general convergence theorems are translated and applied to those operators arising from the optimal regulation of nonlinear systems. It is shown that simply structured matrices and similarity transformations may be used to facilitate the calculation of the matrix Green functions and the evaluation of the convergence criteria. A controllability theory based on the integral representation of TPBVP's, the implicit function theorem, and contraction mappings is developed for nonlinear dynamical systems. Contraction mappings are theoretically and practically applied to a nonlinear control problem with bounded input control and the Lipschitz norm is used to prove convergence for the nondifferentiable operator. A dynamic model representing community drug usage is developed and the contraction mappings method is used to study the optimal regulation of the nonlinear system.

  7. Optimal contact definition for reconstruction of Contact Maps

    Directory of Open Access Journals (Sweden)

    Stehr Henning

    2010-05-01

    Abstract Background: Contact maps have been extensively used as a simplified representation of protein structures. They capture the most important features of a protein's fold and are preferred by a number of researchers for the description and study of protein structures. Inspired by the model's simplicity, many groups have dedicated a considerable amount of effort towards contact prediction as a proxy for protein structure prediction. However, a contact map's biological interest is subject to the availability of reliable methods for the 3-dimensional reconstruction of the structure. Results: We use an implementation of the well-known distance geometry protocol to build realistic protein 3-dimensional models from contact maps, performing an extensive exploration of many of the parameters involved in the reconstruction process. We try to address the questions: (a) to what accuracy does a contact map represent its corresponding 3D structure, (b) what is the best contact map representation with regard to reconstructability, and (c) what is the effect of partial or inaccurate contact information on the 3D structure recovery. Our results suggest that contact maps derived from the application of a distance cutoff of 9 to 11 Å around the Cβ atoms constitute the most accurate representation of the 3D structure. The reconstruction process does not provide a single solution to the problem but rather an ensemble of conformations that are within 2 Å RMSD of the crystal structure and with lower values for the pairwise average ensemble RMSD. Interestingly, it is still possible to recover a structure with partial contact information, although wrong contacts can lead to a dramatic loss in reconstruction fidelity. Conclusions: Contact maps represent a valid approximation to the structures, with an accuracy comparable to that of experimental methods. The optimal contact definitions constitute key guidelines for methods based on contact maps such as structure prediction through
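    Building a contact map under a distance cutoff, the representation this study evaluates, is a pairwise-distance threshold. The sketch below applies a 9 Å cutoff (the low end of the range found optimal above) to a made-up extended chain of pseudo-Cβ coordinates rather than a real protein.

```python
import numpy as np

# Contact map from 3-D coordinates: residues i, j are in contact when
# their (pseudo-Cbeta) atoms lie within a distance cutoff.  The chain
# below is an artificial straight chain with 3.8 A spacing.
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(6)])
cutoff = 9.0    # angstroms, within the 9-11 A optimum reported above

diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
contact = (dist < cutoff) & ~np.eye(len(coords), dtype=bool)
```

On this chain each residue contacts its first and second neighbours (3.8 Å and 7.6 Å) but not the third (11.4 Å); the map is symmetric by construction, as the reconstruction protocol requires.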

  8. Application of the entropy generation minimization method to a solar heat exchanger: A pseudo-optimization design process based on the analysis of the local entropy generation maps

    International Nuclear Information System (INIS)

    Giangaspero, Giorgio; Sciubba, Enrico

    2013-01-01

    This paper presents an application of the entropy generation minimization method to the pseudo-optimization of the configuration of the heat exchange surfaces in a Solar Rooftile. An initial “standard” commercial configuration is gradually improved by introducing design changes aimed at the reduction of the thermodynamic losses due to heat transfer and fluid friction. Different geometries (pins, fins and others) are analysed with a commercial CFD (Computational Fluid Dynamics) code that also computes the local entropy generation rate. The design improvement process is carried out on the basis of a careful analysis of the local entropy generation maps and the rationale behind each step of the process is discussed in this perspective. The results are compared with other entropy generation minimization techniques available in the recent technical literature. It is found that the geometry with pin-fins has the best performance among the tested ones, and that the optimal pin array shape parameters (pitch and span) can be determined by a critical analysis of the integrated and local entropy maps and of the temperature contours. - Highlights: ► An entropy generation minimization method is applied to a solar heat exchanger. ► The approach is heuristic and leads to a pseudo-optimization process with CFD as main tool. ► The process is based on the evaluation of the local entropy generation maps. ► The geometry with pin-fins in general outperforms all other configurations. ► The entropy maps and temperature contours can be used to determine the optimal pin array design parameters
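    For reference, the local (volumetric) entropy generation rate that such CFD post-processing evaluates in each cell takes the standard two-term form for a Newtonian fluid with Fourier conduction (symbols assumed here: k thermal conductivity, μ dynamic viscosity, T absolute temperature, Φ the viscous dissipation function):

```latex
\dot{S}'''_{\mathrm{gen}}
  = \underbrace{\frac{k}{T^{2}}\,\left(\nabla T\right)^{2}}_{\text{heat-transfer irreversibility}}
  + \underbrace{\frac{\mu}{T}\,\Phi}_{\text{fluid-friction irreversibility}}
```

The pseudo-optimization described above amounts to mapping this field over the heat exchanger, then reshaping the geometry (e.g. the pin pitch and span) to shrink whichever term dominates locally.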

  9. Application of ecological mapping

    International Nuclear Information System (INIS)

    Sherk, J.A.

    1982-01-01

    The US Fish and Wildlife Service has initiated the production of a comprehensive ecological inventory map series for use as a major new planning tool. Important species data along with special land use designations are displayed on 1:250,000 scale topographic base maps. Sets of maps have been published for the Atlantic and Pacific coastal areas of the United States. Preparation of a map set for the Gulf of Mexico is underway at the present time. Potential application of ecological inventory map series information to a typical land disposal facility could occur during the narrowing of the number of possible disposal sites, the design of potential disposal site studies of ecological resources, the preparation of the environmental report, and the regulatory review of license applications. 3 figures, 3 tables

  10. Development of optimized segmentation map in dual energy computed tomography

    Science.gov (United States)

    Yamakawa, Keisuke; Ueki, Hironori

    2012-03-01

    Dual energy computed tomography (DECT) has been widely used in clinical practice and has been particularly effective for tissue diagnosis. In DECT the difference of two attenuation coefficients acquired at two X-ray energies enables tissue segmentation. One problem in conventional DECT is that the segmentation deteriorates in some cases, such as bone removal. This is due to two reasons. Firstly, the segmentation map is optimized without considering the X-ray condition (tube voltage and current). If we consider the tube voltage, it is possible to create an optimized map, but unfortunately we cannot consider the tube current. Secondly, the X-ray condition is not optimized. The condition can be set empirically, but this means that the optimized condition is not used correctly. To solve these problems, we have developed methods for optimizing the map (Method-1) and the condition (Method-2). In Method-1, the map is optimized to minimize segmentation errors. The distribution of the attenuation coefficient is modeled by considering the tube current. In Method-2, the optimized condition is decided to minimize segmentation errors depending on tube voltage-current combinations while keeping the total exposure constant. We evaluated the effectiveness of Method-1 by performing a phantom experiment under a fixed condition, and of Method-2 by performing a phantom experiment under different combinations calculated from the total exposure constant. When Method-1 was followed with Method-2, the segmentation error was reduced from 37.8 to 13.5%. These results demonstrate that our developed methods can achieve highly accurate segmentation while keeping the total exposure constant.

  11. Hysteresis compensation of the Prandtl-Ishlinskii model for piezoelectric actuators using modified particle swarm optimization with chaotic map.

    Science.gov (United States)

    Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua

    2017-07-01

    Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under the open-loop condition and could cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, in which the parameters are formulated based on the original model with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initial method of adaptive inertia weight based on a chaotic map is proposed. To compare and prove that the swarm's convergence occurs before stochastic initialization and to attain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched using self-tuning, and the simulated results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
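The chaotic-map inertia weight at the core of the record above can be sketched in a generic particle swarm optimizer: a logistic map drives the inertia weight instead of a fixed or linearly decreasing schedule. This is a minimal illustrative PSO on a sphere function, not the authors' Prandtl-Ishlinskii identification; the weight range [0.4, 0.9] and coefficients are common defaults, assumed here.

```python
import random

def logistic_map(z, r=4.0):
    """Chaotic logistic map on (0, 1)."""
    return r * z * (1.0 - z)

def chaotic_pso(f, bounds, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    z = 0.7  # chaotic state driving the inertia weight
    for _ in range(iters):
        z = logistic_map(z)
        w = 0.4 + 0.5 * z  # chaotic inertia weight in (0.4, 0.9)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda x: sum(t * t for t in x)
best, val = chaotic_pso(sphere, [(-5, 5)] * 3)
```

In the paper the objective would instead be the model-fitting error of the Prandtl-Ishlinskii parameters against measured hysteresis data.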

  12. Addressing the Influence of Hidden State on Wireless Network Optimizations using Performance Maps

    DEFF Research Database (Denmark)

    Højgaard-Hansen, Kim; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2015-01-01

    Performance of wireless connectivity for network client devices is location dependent. It has been shown that it can be beneficial to collect network performance metrics along with location information to generate maps of the location dependent network performance. These performance maps can be used to optimize the use of the wireless network by predicting future network performance and scheduling the network communication for certain applications on mobile devices. However, other important factors influence the performance of the wireless communication such as changes in the propagation environment and resource sharing. In this work we extend the framework of performance maps for wireless networks by introducing network state as an abstraction for all other factors than location that influence the performance. Since network state might not always be directly observable the framework...

  13. Robust and flexible mapping for real-time distributed applications during the early design phases

    DEFF Research Database (Denmark)

    Gan, Junhe; Pop, Paul; Gruian, Flavius

    2012-01-01

    We are interested in mapping hard real-time applications on distributed heterogeneous architectures. An application is modeled as a set of tasks, and we consider a fixed-priority preemptive scheduling policy. We target the early design phases, when decisions have a high impact on the subsequent ... in the functionality requirements are captured using “future scenarios”, which are task sets that model functionality likely to be added in the future. In this context, we derive a mapping of tasks in the application, such that the resulting implementation is both robust and flexible. Robust means that the application has a high chance of being schedulable, considering the wcet uncertainties, whereas a flexible mapping has a high chance to successfully accommodate the future scenarios. We propose a Genetic Algorithm-based approach to solve this optimization problem. Extensive experiments show the importance...

  14. NEW DEVELOPMENTS ON INVERSE POLYGON MAPPING TO CALCULATE GRAVITATIONAL LENSING MAGNIFICATION MAPS: OPTIMIZED COMPUTATIONS

    International Nuclear Information System (INIS)

    Mediavilla, E.; Lopez, P.; Mediavilla, T.; Ariza, O.; Muñoz, J. A.; Gonzalez-Morcillo, C.; Jimenez-Vicente, J.

    2011-01-01

    We derive an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps. It is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells (inverse polygon mapping, IPM), not including critical points (except perhaps at the cell boundaries). The zeroth-order term of the series expansion leads to the method described by Mediavilla et al. The first-order term is used to study the error induced by the truncation of the series at zeroth order, explaining the high accuracy of the IPM even at this low order of approximation. Interpreting the Inverse Ray Shooting (IRS) method in terms of IPM, we explain the previously reported N^(-3/4) dependence of the IRS error with the number of collected rays per pixel. Cells intersected by critical curves (critical cells) transform to non-simply connected regions with topological pathologies like auto-overlapping or non-preservation of the boundary under the transformation. To define a non-critical partition, we use a linear approximation of the critical curve to divide each critical cell into two non-critical subcells. The optimal choice of the cell size depends basically on the curvature of the critical curves. For typical applications in which the pixel of the magnification map is a small fraction of the Einstein radius, a one-to-one relationship between the cell and pixel sizes in the absence of lensing guarantees both the consistence of the method and a very high accuracy. This prescription is simple but very conservative. We show that substantially larger cells can be used to obtain magnification maps with huge savings in computation time.

  15. Applicability of vulnerability maps

    International Nuclear Information System (INIS)

    Andersen, L.J.; Gosk, E.

    1989-01-01

    A number of aspects of vulnerability maps are discussed: the vulnerability concept, mapping purposes, possible users, and applicability of vulnerability maps. Problems associated with general-type vulnerability mapping, including large-scale maps, universal pollutant, and universal pollution scenario are also discussed. An alternative approach to vulnerability assessment - specific vulnerability mapping for limited areas, specific pollutant, and predefined pollution scenario - is suggested. A simplification of the vulnerability concept is proposed in order to make vulnerability mapping more objective and by this means more comparable. An extension of the vulnerability concept to the rest of the hydrogeological cycle (lakes, rivers, and the sea) is proposed. Some recommendations regarding future activities are given

  16. Efficient algorithms for multidimensional global optimization in genetic mapping of complex traits

    Directory of Open Access Journals (Sweden)

    Kajsa Ljungberg

    2010-10-01

    Full Text Available Kajsa Ljungberg1, Kateryna Mishchenko2, Sverker Holmgren1; 1Division of Scientific Computing, Department of Information Technology, Uppsala University, Uppsala, Sweden; 2Department of Mathematics and Physics, Mälardalen University College, Västerås, Sweden. Abstract: We present a two-phase strategy for optimizing a multidimensional, nonconvex function arising during genetic mapping of quantitative traits. Such traits are believed to be affected by multiple so-called QTL, and searching for d QTL results in a d-dimensional optimization problem with a large number of local optima. We combine the global algorithm DIRECT with a number of local optimization methods that accelerate the final convergence, and adapt the algorithms to problem-specific features. We also improve the evaluation of the QTL mapping objective function to enable exploitation of the smoothness properties of the optimization landscape. Our best two-phase method is demonstrated to be accurate in at least six dimensions and up to ten times faster than currently used QTL mapping algorithms. Keywords: global optimization, QTL mapping, DIRECT
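The two-phase structure described above (a global exploration phase handing its best point to a fast local phase) can be sketched generically. For brevity this uses a coarse grid scan in place of DIRECT and a shrinking-step coordinate search in place of the paper's local solvers; the function `two_phase_minimize` and the test objective are illustrative assumptions, not the QTL objective.

```python
import itertools

def two_phase_minimize(f, bounds, grid=5, steps=60):
    """Phase 1: coarse grid scan (global). Phase 2: shrinking-step
    coordinate search started from the best grid point (local)."""
    axes = [[lo + (hi - lo) * k / (grid - 1) for k in range(grid)]
            for lo, hi in bounds]
    best = min(itertools.product(*axes), key=f)   # global phase
    x, fx = list(best), f(best)
    step = max(hi - lo for lo, hi in bounds) / grid
    for _ in range(steps):                        # local phase
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[d] = min(max(y[d] + s, bounds[d][0]), bounds[d][1])
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                           # refine resolution
    return x, fx

# multimodal test function with two global minima at (±1, 0.5)
f = lambda p: (p[0] ** 2 - 1) ** 2 + (p[1] - 0.5) ** 2
x, fx = two_phase_minimize(f, [(-3, 3), (-3, 3)])
```

The point of the split is the same as in the record: the global phase only needs to land in the right basin, after which the local phase converges quickly.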

  17. Applications of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2013-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying at the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area. These topics also deal with fundamental notions and approaches as well as with several classical applications of combinatorial optimization. "Applications of Combinatorial Optimization" presents a number of the most common and well-known applications of combinatorial optimization.

  18. Regularity of optimal transport maps on multiple products of spheres

    OpenAIRE

    Figalli, Alessio; Kim, Young-Heon; McCann, Robert J.

    2010-01-01

    This article addresses regularity of optimal transport maps for cost = "squared distance" on Riemannian manifolds that are products of arbitrarily many round spheres with arbitrary sizes and dimensions. Such manifolds are known to be non-negatively cross-curved [KM2]. Under boundedness and non-vanishing assumptions on the transferred source and target densities we show that optimal maps stay away from the cut-locus (where the cost exhibits singularity), and obtain injectivity and continuity of o...

  19. Realization of universal optimal quantum machines by projective operators and stochastic maps

    International Nuclear Information System (INIS)

    Sciarrino, F.; Sias, C.; Ricci, M.; De Martini, F.

    2004-01-01

    Optimal quantum machines can be implemented by linear projective operations. In the present work a general qubit symmetrization theory is presented by investigating the close links to the qubit purification process and to the programmable teleportation of any generic optimal antiunitary map. In addition, the contextual realization of the N→M cloning map and of the teleportation of the N→(M-N) universal-NOT (UNOT) gate is analyzed by a very general angular momentum theory. An extended set of experimental realizations by state symmetrization linear optical procedures is reported. These include the 1→2 cloning process, the UNOT gate and the quantum tomographic characterization of the optimal partial transpose map of polarization encoded qubits

  20. A Space-Mapping Framework for Engineering Optimization: Theory and Implementation

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    a region of interest. Output space mapping ensures the matching of responses and first-order derivatives between the mapped coarse model and the fine model at the current iteration point in the optimization process. We provide theoretical results that show the importance of the explicit use of sensitivity information to the convergence properties of our family of algorithms. Our algorithm is demonstrated on the optimization of a microstrip band-pass filter, a band-pass filter with double-coupled resonators and a seven-section impedance transformer. We describe the novel user-oriented software package SMF...

  1. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    International Nuclear Information System (INIS)

    Rao, Nageswara S; Carter, Steven M; Wu Qishi; Wing, William R; Zhu Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configuration and protocols that provide multiple Gbps flows from Cray X1 to external hosts

  2. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Carter, Steven M [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wu Qishi [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wing, William R [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Zhu Mengxia [Department of Computer Science, Louisiana State University, Baton Rouge, LA 70803 (United States); Mezzacappa, Anthony [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Veeraraghavan, Malathi [Department of Computer Science, University of Virginia, Charlottesville, VA 22904 (United States); Blondin, John M [Department of Physics, North Carolina State University, Raleigh, NC 27695 (United States)

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configuration and protocols that provide multiple Gbps flows from Cray X1 to external hosts.

  3. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    Science.gov (United States)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization method named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, including 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to solving economic dispatch.
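For context, the economic dispatch problem the record targets has a classical closed-form-style baseline when costs are quadratic and only generation limits bind: equal incremental cost, found here by bisection on the system lambda. This is the textbook baseline that metaheuristics like MVMOS are compared against, not MVMOS itself; the unit coefficients below are illustrative numbers.

```python
def economic_dispatch(units, demand, tol=1e-9):
    """Lambda iteration for quadratic costs C_i(P) = a + b*P + c*P^2
    with limits Pmin..Pmax; bisection on the incremental cost lambda."""
    def total_output(lam):
        total = 0.0
        for a, b, c, pmin, pmax in units:
            p = (lam - b) / (2 * c)          # from dC/dP = b + 2cP = lambda
            total += min(max(p, pmin), pmax)
        return total

    lo, hi = 0.0, 1000.0                     # bracket assumed wide enough
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_output(mid) < demand:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    dispatch = [min(max((lam - b) / (2 * c), pmin), pmax)
                for a, b, c, pmin, pmax in units]
    return dispatch, lam

# three thermal units: (a, b, c, Pmin, Pmax) -- illustrative coefficients
units = [(500, 5.3, 0.004, 200, 450),
         (400, 5.5, 0.006, 150, 350),
         (200, 5.8, 0.009, 100, 225)]
dispatch, lam = economic_dispatch(units, 800.0)
print(round(sum(dispatch), 3))  # → 800.0
```

Metaheuristics become attractive precisely when the cost functions stop being smooth quadratics (valve-point effects, prohibited zones), where this baseline no longer applies.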

  4. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For treatment planning during intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work develops an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-square cost function and non-negative fluence constraints, and solve it via ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM based FMO solver was benchmarked against the quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested the ADMM solver had similar plan quality with a slightly smaller total objective function value than IP. A simple-to-implement ADMM based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
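The problem structure the record describes, least-squares fit with non-negative fluences, has a compact ADMM sketch: minimize ||Ax − b||² subject to x ≥ 0, splitting the non-negativity into a projection step. The dose-influence matrix and prescription below are random stand-ins, and the fixed penalty rho replaces the paper's empirical parameter tuning.

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=500):
    """ADMM for min ||Ax - b||^2 subject to x >= 0."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z = np.maximum(x + u, 0.0)                         # projection onto x >= 0
        u = u + x - z                                      # dual update
    return z  # z is exactly non-negative

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 15))          # stand-in dose-influence matrix
x_true = np.maximum(rng.normal(size=15), 0.0)
b = A @ x_true                         # stand-in prescription
x_hat = admm_nnls(A, b)
```

The Cholesky factorization is computed once and reused across iterations, which is a large part of why ADMM is cheap per iteration compared with a full interior-point solve.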

  5. Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.

    Science.gov (United States)

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain is outlined. The current density maps obtained utilising the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where the adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.

  6. Constellation and Mapping Optimization of APSK Modulations used in DVB-S2

    Directory of Open Access Journals (Sweden)

    L. Jordanova

    2014-10-01

    Full Text Available This article presents algorithms for APSK constellation and mapping optimization. The dependencies of the symbol error probability Ps on the parameters of the 16APSK and 32APSK constellations are examined and several options that satisfy the requirements for the minimum value of Ps are selected. Mapping optimization is carried out for the selected APSK constellations. BER characteristics of the satellite DVB-S2 channels are presented for optimized and standard 16APSK and 32APSK constellations, and a comparative analysis of the results achieved is made.

  7. Prederivatives of gamma paraconvex set-valued maps and Pareto optimality conditions for set optimization problems.

    Science.gov (United States)

    Huang, Hui; Ning, Jixian

    2017-01-01

    Prederivatives play an important role in the research of set optimization problems. First, we establish several existence theorems of prederivatives for γ -paraconvex set-valued mappings in Banach spaces with [Formula: see text]. Then, in terms of prederivatives, we establish both necessary and sufficient conditions for the existence of Pareto minimal solution of set optimization problems.

  8. Proficient brain for optimal performance: the MAP model perspective.

    Science.gov (United States)

    Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.

  9. Optimization : insights and applications

    CERN Document Server

    Brinkhuis, Jan

    2005-01-01

    This self-contained textbook is an informal introduction to optimization through the use of numerous illustrations and applications. The focus is on analytically solving optimization problems with a finite number of continuous variables. In addition, the authors provide introductions to classical and modern numerical methods of optimization and to dynamic optimization. The book's overarching point is that most problems may be solved by the direct application of the theorems of Fermat, Lagrange, and Weierstrass. The authors show how the intuition for each of the theoretical results can be s

  10. WE-AB-209-10: Optimizing the Delivery of Sequential Fluence Maps for Efficient VMAT Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Craft, D [Massachusetts General Hospital, Cambridge, MA (United States); Balvert, M [Tilburg University, Tilburg (Netherlands)

    2016-06-15

    Purpose: To develop an optimization model and solution approach for computing MLC leaf trajectories and dose rates for high quality matching of a set of optimized fluence maps to be delivered sequentially around a patient in a VMAT treatment. Methods: We formulate the fluence map matching problem as a nonlinear optimization problem where time is discretized but dose rates and leaf positions are continuous variables. For a given allotted time, which is allocated across the fluence maps based on the complexity of each fluence map, the optimization problem searches for the best leaf trajectories and dose rates such that the original fluence maps are closely recreated. Constraints include maximum leaf speed, maximum dose rate, and leaf collision avoidance, as well as the constraint that the ending leaf positions for one map are the starting leaf positions for the next map. The resulting model is non-convex but smooth, and therefore we solve it by local searches from a variety of starting positions. We improve solution time by a custom decomposition approach which allows us to decouple the rows of the fluence maps and solve each leaf pair individually. This decomposition also makes the problem easily parallelized. Results: We demonstrate the method on a prostate case and a head-and-neck case and show that one can recreate fluence maps to a high degree of fidelity in a modest total delivery time (minutes). Conclusion: We present a VMAT sequencing method that reproduces optimal fluence maps by searching over a vast number of possible leaf trajectories. By varying the total allotted time, this approach is the first of its kind to allow users to produce VMAT solutions that span the range of wide-field coarse VMAT deliveries to narrow-field high-MU sliding window-like approaches.

  11. Proficient brain for optimal performance: the MAP model perspective

    Directory of Open Access Journals (Sweden)

    Maurizio Bertollo

    2016-05-01

    Full Text Available Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.

  12. Near constant-time optimal piecewise LDR to HDR inverse tone mapping

    Science.gov (United States)

    Chen, Qian; Su, Guan-Ming; Yin, Peng

    2015-02-01

    In backward compatible HDR image/video compression, it is a general approach to reconstruct HDR from compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd order polynomial has better mapping accuracy than a 1-piece high order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method because finding the optimal pivot point that splits the LDR range into 2 pieces requires exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least square solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared to the traditional exhaustive search in 2-piecewise 2nd order polynomial inverse tone mapping with a continuous constraint.
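The look-up-table trick in the record, expressing every normal-equation entry as a sum of basic terms precomputed as prefix sums so that each candidate pivot costs O(1), can be sketched for the simpler 2-piecewise *linear* case (the paper uses 2nd order pieces, which adds more prefix tables of the same kind). The function name and test data are illustrative.

```python
import numpy as np

def best_pivot_piecewise(x, y):
    """Split index minimizing total SSE of two independent least-squares
    lines; prefix sums make each candidate pivot O(1) to evaluate."""
    # prefix sums of the 'basic terms' appearing in the normal equations
    Sx, Sy = np.cumsum(x), np.cumsum(y)
    Sxx, Sxy, Syy = np.cumsum(x * x), np.cumsum(x * y), np.cumsum(y * y)

    def sse(i, j):
        """SSE of the best-fit line over points i..j (inclusive), in O(1)."""
        n = j - i + 1
        sx = Sx[j] - (Sx[i - 1] if i else 0.0)
        sy = Sy[j] - (Sy[i - 1] if i else 0.0)
        sxx = Sxx[j] - (Sxx[i - 1] if i else 0.0)
        sxy = Sxy[j] - (Sxy[i - 1] if i else 0.0)
        syy = Syy[j] - (Syy[i - 1] if i else 0.0)
        det = n * sxx - sx * sx
        if det == 0:
            return syy - sy * sy / n
        slope = (n * sxy - sx * sy) / det
        inter = (sy - slope * sx) / n
        return syy - slope * sxy - inter * sy   # SSE = Syy - b*Sxy - a*Sy

    n = len(x)
    return min(range(2, n - 1), key=lambda p: sse(0, p - 1) + sse(p, n - 1))

# data with a sharp slope change at x = 0.5 (index 50)
x = np.linspace(0, 1, 101)
y = np.where(x < 0.5, 2 * x, 1.0 + 5 * (x - 0.5))
p = best_pivot_piecewise(x, y)
```

The exhaustive pivot scan is still present, but each pivot now costs a handful of table lookups instead of a fresh least-squares solve, which is the source of the near constant-time behavior reported.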

  13. Assessing and Optimizing Argo profile mapping : An example in the Equatorial Pacific

    Science.gov (United States)

    Gasparin, Florent; Roemmich, Dean; Gilson, John; Sprintall, Janet

    2014-05-01

    Estimation of subsurface temperature, salinity and velocity has been revolutionized over the last decade as a result of the development and deployment of the Argo Program. Argo products have become one of the major observational tools in Oceanography, used in a wide range of basic research, operational models, and education applications. To assess the skill of Argo in estimating oceanic conditions at different scales of variability in the Equatorial Pacific, we optimize Argo profile mapping by focusing on the covariance function. Decorrelation scales are discussed as well as impacts of several different interpolation schemes. The optimization is based on three points: 1) functional representation of the Argo sampled covariance, 2) realism/accuracy of the mapping errors, and 3) comparison with independent data such as TAO moorings and sea surface height. The last point shows that Argo can represent more than 90% of the total TAO variance and around 80% of the intraseasonal TAO variability (between 10 and 100 days) at the Equator. As an illustration of the improvement, we show how Argo profiles can reveal the vertical structure and vertical phase propagation corresponding to the steric height annual cycle. We also discuss how this unique equatorial wave phenomenon is modified during El Nino/La Nina events. This work anticipates a field experiment beginning in early 2014 and can be used for assessing and adapting the equatorial observational network.

  14. Dynamic Feedforward Control of a Diesel Engine Based on Optimal Transient Compensation Maps

    Directory of Open Access Journals (Sweden)

    Giorgio Mancini

    2014-08-01

    To satisfy increasingly stringent emission regulations and the demand for ever-lower fuel consumption, diesel engines have become complex systems with many interacting actuators. As a consequence, these requirements are pushing control and calibration to their limits. The calibration procedure nowadays is still based mainly on engineering experience, which results in a highly iterative process to derive a complete engine calibration. Moreover, automatic tools are available only for stationary operation, to obtain control maps that are optimal with respect to some predefined objective function. Therefore, exploiting any leftover potential during transient operation is crucial. This paper proposes an approach to derive a transient feedforward (FF) control system in an automated way. It relies on optimal control theory to solve a dynamic optimization problem for fast transients. A partially physics-based model is used in place of the engine. From the optimal solutions, the relevant information is extracted and stored in maps spanned by the engine speed and the torque gradient. These maps complement the static control maps by accounting for the dynamic behavior of the engine. The procedure is implemented on a real engine, and experimental results are presented along with the development of the methodology.
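    A minimal sketch of how such speed/torque-gradient maps are typically evaluated at runtime, namely bilinear interpolation in a 2-D calibration table. The breakpoints and table values below are invented for illustration; the paper's actual maps and axes may differ.

```python
import numpy as np

def bilinear(grid_x, grid_y, table, x, y):
    """Bilinear lookup in a 2-D calibration map, clamped at the borders."""
    i = int(np.clip(np.searchsorted(grid_x, x) - 1, 0, len(grid_x) - 2))
    j = int(np.clip(np.searchsorted(grid_y, y) - 1, 0, len(grid_y) - 2))
    tx = float(np.clip((x - grid_x[i]) / (grid_x[i + 1] - grid_x[i]), 0.0, 1.0))
    ty = float(np.clip((y - grid_y[j]) / (grid_y[j + 1] - grid_y[j]), 0.0, 1.0))
    return ((1 - tx) * (1 - ty) * table[i, j] + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1] + tx * ty * table[i + 1, j + 1])

# Hypothetical breakpoints: the transient map is spanned by engine speed
# and torque gradient, as in the abstract; all numbers are illustrative.
speed = np.array([800.0, 1600.0, 2400.0, 3200.0])      # rpm
dtorque = np.array([-200.0, 0.0, 200.0, 400.0])        # Nm/s
ff_map = np.random.default_rng(1).normal(0.0, 1.0, (4, 4))

# Total feedforward command = static map value + transient compensation.
u_corr = bilinear(speed, dtorque, ff_map, 2000.0, 100.0)
```

    On a linear table the interpolation is exact, which makes the lookup easy to validate against a closed form.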

  15. An optimization approach for extracting and encoding consistent maps in a shape collection

    KAUST Repository

    Huang, Qi-Xing

    2012-11-01

    We introduce a novel approach for computing high-quality point-to-point maps among a collection of related shapes. The proposed approach takes as input a sparse set of imperfect initial maps between pairs of shapes and builds a compact data structure which implicitly encodes an improved set of maps between all pairs of shapes. These maps align well with point correspondences selected from the initial maps; they map neighboring points to neighboring points; and they provide cycle-consistency, so that map compositions along cycles approximate the identity map. The proposed approach is motivated by the fact that a complete set of maps between all pairs of shapes that admits nearly perfect cycle-consistency is highly redundant and can be represented by compositions of maps through a single base shape. In general, multiple base shapes are needed to adequately cover a diverse collection. Our algorithm sequentially extracts such a small collection of base shapes and creates correspondences from each of these base shapes to all other shapes. These correspondences are found by global optimization on candidate correspondences obtained by diffusing the initial maps. These are then used to create a compact graphical data structure from which globally optimal cycle-consistent maps can be extracted using simple graph algorithms. Experimental results on benchmark datasets show that the proposed approach yields significantly better results than state-of-the-art data-driven shape matching methods. © 2012 ACM.
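    The composition-through-a-base-shape idea can be illustrated with toy index maps: if every pairwise map is a composition of maps through one base shape, then every 3-cycle composes to the identity by construction. Maps are represented here as permutation index arrays; this illustrates only the representation and the cycle-consistency property, not the paper's optimization.

```python
import numpy as np

# Toy setup: each shape has n points; a map from shape a to shape b is
# an index array m with m[i] = corresponding point on b. The base-shape
# maps here are invented permutations.
rng = np.random.default_rng(0)
n = 6
base_to = {s: rng.permutation(n) for s in "ABC"}   # base shape -> A, B, C

def compose(m1, m2):
    """Apply m1 then m2: (m2 o m1)[i] = m2[m1[i]]."""
    return m2[m1]

def inverse(m):
    inv = np.empty_like(m)
    inv[m] = np.arange(len(m))
    return inv

# Every pairwise map is a composition through the base shape, so any
# cycle A -> B -> C -> A composes to the identity by construction.
m_ab = compose(inverse(base_to["A"]), base_to["B"])
m_bc = compose(inverse(base_to["B"]), base_to["C"])
m_ca = compose(inverse(base_to["C"]), base_to["A"])
cycle = compose(compose(m_ab, m_bc), m_ca)
consistency_error = int(np.count_nonzero(cycle - np.arange(n)))
```

    This is exactly the redundancy the abstract exploits: all O(k^2) pairwise maps are recoverable from k maps through the base shape.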

  16. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    Science.gov (United States)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected by maximum linearization index analysis; the classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  17. Topology Optimization of Passive Micromixers Based on Lagrangian Mapping Method

    Directory of Open Access Journals (Sweden)

    Yuchen Guo

    2018-03-01

    This paper presents an optimization-based design method for passive micromixers for immiscible fluids, i.e., for the case where the Peclet number is infinitely large. Based on the topology optimization method, an optimization model is constructed to find the optimal layout of the passive micromixers. Differing from topology optimization methods that use a Eulerian description of the convection-diffusion dynamics, the proposed method considers the extreme case where mixing is dominated completely by convection, with negligible diffusion. In this method, the mixing dynamics is modeled by the mapping method, a Lagrangian description that can deal with the convection-dominated case. Several numerical examples are presented to demonstrate the validity of the proposed method.

  18. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim

    2011-11-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration, which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine, allowing fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches
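    The asynchronous-iteration idea, letting iteration i+1 consume iteration i's output as partial results become available instead of waiting for a barrier, can be mimicked in miniature with threads and queues. Python threads and queues stand in for MapReduce tasks and the distributed file system; the per-chunk work and chunk count are synthetic.

```python
import queue
import threading
import time

def run_iteration(upstream, downstream, chunks):
    for _ in range(chunks):
        part = upstream.get()        # block until a partial result exists
        time.sleep(0.01)             # pretend to do map/reduce work
        downstream.put(part + 1)     # feed the next iteration early

def pipelined(iters=3, chunks=4):
    # One queue between consecutive iterations; all iterations run
    # concurrently, overlapping producer and consumer work.
    qs = [queue.Queue() for _ in range(iters + 1)]
    for _ in range(chunks):
        qs[0].put(0)                 # initial input partitions
    threads = [threading.Thread(target=run_iteration,
                                args=(qs[i], qs[i + 1], chunks))
               for i in range(iters)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [qs[-1].get() for _ in range(chunks)]

out = pipelined()                    # each chunk passes through 3 iterations
```

    With the pipeline, the critical path is roughly (iterations + chunks - 1) units of work rather than iterations x chunks, which is the source of the latency savings the abstract describes.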

  19. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim; El Sayed, Tamer S.; Ramadan, Hany E.

    2011-01-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration, which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine, allowing fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  20. The EnMAP-Box—A Toolbox and Application Programming Interface for EnMAP Data Processing

    Directory of Open Access Journals (Sweden)

    Sebastian van der Linden

    2015-09-01

    The EnMAP-Box is a toolbox developed for the processing and analysis of data acquired by the German spaceborne imaging spectrometer EnMAP (Environmental Mapping and Analysis Program). It is developed with two aims in mind in order to guarantee full usage of future EnMAP data: (1) extending the EnMAP user community and (2) providing access to recent approaches for imaging spectroscopy data processing. The software is freely available and offers a range of tools and applications for the processing of spectral imagery, including classical processing tools for imaging spectroscopy data as well as powerful machine learning approaches and interfaces for the integration of methods available in scripting languages. A special developer version includes the full open source code, an application programming interface and an application wizard for easy integration and documentation of new developments. This paper gives an overview of the EnMAP-Box for users and developers, explains typical workflows using an application example, and exemplifies the concept for making it a frequently used and constantly extended platform for imaging spectroscopy applications.

  1. Optimal task mapping in safety-critical real-time parallel systems

    International Nuclear Information System (INIS)

    Aussagues, Ch.

    1998-01-01

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command and control systems that can be found in the nuclear domain and, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution lies mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator, the synchronized product of state-machine task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimal task mapping on a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling used to schedule tasks within one processor. This leads us to define the synchronized product, from which a system of linear constraints is automatically generated, allowing us to calculate the maximum load of a group of tasks and thus verify their timeliness constraints. The communications, the verification of their timeliness, and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author)
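    A standard off-line feasibility test for deadline-driven (EDF) scheduling on a single processor is the processor-demand criterion. The sketch below is one plausible form of such an analysis, shown for context; it is not the thesis's synchronized-product method, and the task sets are invented.

```python
from functools import reduce
from math import gcd

def edf_feasible(tasks):
    """Processor-demand feasibility test for EDF on one processor.

    tasks: list of (C, D, T) = worst-case execution time, relative
    deadline, period (integers). After the utilization check, verifies
    demand h(t) <= t at every absolute deadline up to the hyperperiod,
    where h(t) = sum over tasks of max(0, floor((t - D)/T) + 1) * C.
    """
    if sum(c / t for c, d, t in tasks) > 1:
        return False                          # over-utilized: infeasible
    H = reduce(lambda a, b: a * b // gcd(a, b), (t for _, _, t in tasks))
    deadlines = sorted({d + k * t for c, d, t in tasks for k in range(H // t)})
    for t in deadlines:
        demand = sum(max(0, (t - d) // T + 1) * c for c, d, T in tasks)
        if demand > t:
            return False
    return True

ok = edf_feasible([(1, 4, 4), (2, 6, 6)])     # low utilization: feasible
bad = edf_feasible([(2, 2, 4), (2, 3, 6)])    # tight deadlines: infeasible
```

    Such a check is what an optimal task-mapping search would invoke per candidate processor assignment to guarantee timeliness globally.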

  2. Optimized packings with applications

    CERN Document Server

    Pintér, János

    2015-01-01

    This volume presents a selection of case studies that address a substantial range of optimized object packings (OOP) and their applications. The contributing authors are well-recognized researchers and practitioners. The mathematical modelling and numerical solution aspects of each application case study are presented in sufficient detail. A broad range of OOP problems are discussed: these include various specific and non-standard container loading and object packing problems, as well as the stowing of hazardous and other materials on container ships, data centre resource management, automotive engineering design, space station logistic support, cutting and packing problems with placement constraints, the optimal design of LED street lighting, robust sensor deployment strategies, spatial scheduling problems, and graph coloring models and metaheuristics for packing applications. Novel points of view related to model development and to computational nonlinear, global, mixed integer optimization and heuristic st...

  3. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  4. Colon flattening by landmark-driven optimal quasiconformal mapping.

    Science.gov (United States)

    Zeng, Wei; Yang, Yi-Jun

    2014-01-01

    In virtual colonoscopy, colon conformal flattening plays an important role: it unfolds the colon wall surface onto a rectangular planar image while preserving local shapes by conformal mapping, so that cancerous polyps and other abnormalities can be easily and thoroughly recognized and visualized without missing hidden areas. In such maps, the anatomical landmarks (taeniae coli, flexures, and haustral folds) are naturally mapped to convoluted curves on the 2D domain, which makes it difficult to compare shapes by their geometric feature details. Understanding the relation of landmark curves to the whole surface structure is meaningful, but it remains challenging and open. In this work, we present a novel and effective colon flattening method based on quasiconformal mapping, which straightens the main anatomical landmark curves with the least conformality (angle) distortion. It provides a canonical and straightforward view of the long, convoluted and folded tubular colon surface. The computation is based on the holomorphic 1-form method with landmark straightening constraints and quasiconformal optimization, and has linear time complexity due to the linearity of 1-forms in each iteration. Experiments on various colon data demonstrate the efficiency and efficacy of our algorithm and its practicability for polyp detection and findings visualization; furthermore, the results reveal the geometric characteristics of anatomical landmarks on colon surfaces.

  5. Parameter Optimization for Quantitative Signal-Concentration Mapping Using Spoiled Gradient Echo MRI

    Directory of Open Access Journals (Sweden)

    Gasser Hathout

    2012-01-01

    Rationale and Objectives. Accurate signal-to-tracer-concentration maps are critical to quantitative MRI. The purpose of this study was to evaluate and optimize spoiled gradient echo (SPGR) MR sequences for the use of gadolinium (Gd-DTPA) as a kinetic tracer. Methods. Water-gadolinium phantoms were constructed for a physiologic range of gadolinium concentrations. Observed and calculated SPGR signal-to-concentration curves were generated. Using a percentage error determination, optimal pulse parameters for signal-to-concentration mapping were obtained. Results. The accuracy of the SPGR equation is a function of the chosen MR pulse parameters, particularly the repetition time (TR) and the flip angle (FA). At all experimental values of TR, increasing FA decreases the ratio between observed and calculated signals. Conversely, for a constant FA, increasing TR increases this ratio. Using optimized pulse parameter sets, it is possible to achieve excellent accuracy (approximately 5%) over a physiologic range of tracer concentrations. Conclusion. Optimal pulse parameter sets exist and their use is essential for deriving accurate signal-to-concentration curves in quantitative MRI.
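    The underlying SPGR signal equation, S = M0 sin(FA)(1 - E1)/(1 - cos(FA) E1) with E1 = exp(-TR/T1), can be inverted in closed form and combined with the relaxivity relation 1/T1 = 1/T10 + r1*C to map signal to tracer concentration. A sketch with illustrative values; r1, T10 and M0 below are assumptions, not the study's measured parameters.

```python
import numpy as np

def spgr_signal(T1, TR, fa_deg, M0=1.0):
    """Forward SPGR signal model."""
    E1 = np.exp(-TR / T1)
    a = np.deg2rad(fa_deg)
    return M0 * np.sin(a) * (1 - E1) / (1 - np.cos(a) * E1)

def signal_to_conc(S, TR, fa_deg, T10, r1=4.5, M0=1.0):
    """Invert SPGR signal to [Gd] via 1/T1 = 1/T10 + r1*C.

    Solving the signal equation for E1 gives
    E1 = (M0 sin(a) - S) / (M0 sin(a) - S cos(a)).
    """
    a = np.deg2rad(fa_deg)
    E1 = (M0 * np.sin(a) - S) / (M0 * np.sin(a) - S * np.cos(a))
    T1 = -TR / np.log(E1)
    return (1.0 / T1 - 1.0 / T10) / r1

# Round trip: concentration -> T1 -> signal -> concentration.
TR, fa, T10, r1 = 0.005, 30.0, 1.4, 4.5   # s, degrees, s, 1/(mM*s)
C = 0.8                                   # mM
T1 = 1.0 / (1.0 / T10 + r1 * C)
S = spgr_signal(T1, TR, fa)
```

    Sweeping TR and FA in this forward model and comparing against measured phantom signals is one way to reproduce the kind of parameter-optimization study the abstract describes.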

  6. Mapping sequences by parts

    Directory of Open Access Journals (Sweden)

    Guziolowski Carito

    2007-09-01

    Background: We present the N-map method, a pairwise and asymmetrical approach which allows us to compare sequences by taking into account evolutionary events that produce shuffled, reversed or repeated elements. Basically, the optimal N-map of a sequence s over a sequence t is the best way of partitioning the first sequence into N parts and placing them, possibly reverse-complemented, over the second sequence in order to maximize the sum of their gapless alignment scores. Results: We introduce an algorithm computing an optimal N-map with time complexity O(|s| × |t| × N) using O(|s| × |t| × N) memory space. Among all the numbers of parts taken in a reasonable range, we select the value N for which the optimal N-map has the most significant score. To evaluate this significance, we study the empirical distributions of the scores of optimal N-maps and show that they can be approximated by normal distributions with reasonable accuracy. We test the functionality of the approach on random sequences to which we apply artificial evolutionary events. Practical Application: The method is illustrated with four case studies of pairs of sequences involving non-standard evolutionary events.

  7. Journal of Computer Science and Its Application: Site Map

    African Journals Online (AJOL)

    Journal of Computer Science and Its Application: Site Map. Journal Home > About the Journal > Journal of Computer Science and Its Application: Site Map.

  8. iHadoop: Asynchronous Iterations Support for MapReduce

    KAUST Repository

    Elnikety, Eslam

    2011-08-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration, which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine, allowing fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This thesis also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  9. Riemannian metric optimization on surfaces (RMOS) for intrinsic brain mapping in the Laplace-Beltrami embedding space.

    Science.gov (United States)

    Gahm, Jin Kyu; Shi, Yonggang

    2018-05-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.
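    A discrete analogue of the Laplace-Beltrami embedding can be sketched with a plain graph Laplacian: embed each vertex by its coordinates in the first nonzero eigenvectors. RMOS actually optimizes the Riemannian metric (edge weights) defining the operator on triangle meshes; this toy uses unit weights purely to illustrate the embedding space.

```python
import numpy as np

def laplacian_embedding(edges, n, k=2):
    """Embed a graph by its first k nonzero Laplacian eigenvectors.

    A discrete stand-in for the Laplace-Beltrami embedding; the plain
    graph Laplacian with unit weights is used here, whereas RMOS
    optimizes the metric (edge weights) that defines the operator.
    """
    W = np.zeros((n, n))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return vecs[:, 1:k + 1]           # skip the constant eigenvector

# A 6-cycle embeds as a regular hexagon (up to rotation/reflection),
# so all vertices land at the same radius.
n = 6
emb = laplacian_embedding([(i, (i + 1) % n) for i in range(n)], n)
radii = np.linalg.norm(emb, axis=1)
```

    Two isometric surfaces produce identical such embeddings, which is the property the pullback-metric argument in the abstract builds on.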

  10. Colliding bodies optimization extensions and applications

    CERN Document Server

    Kaveh, A

    2015-01-01

    This book presents and applies a novel efficient meta-heuristic optimization algorithm called Colliding Bodies Optimization (CBO) for various optimization problems. The first part of the book introduces the concepts and methods involved, while the second is devoted to the applications. Though optimal design of structures is the main topic, two chapters on optimal analysis and applications in constructional management are also included.  This algorithm is based on one-dimensional collisions between bodies, with each agent solution being considered as an object or body with mass. After a collision of two moving bodies with specified masses and velocities, these bodies again separate, with new velocities. This collision causes the agents to move toward better positions in the search space.  The main algorithm (CBO) is internally parameter independent, setting it apart from previously developed meta-heuristics. This algorithm is enhanced (ECBO) for more efficient applications in the optimal design of structures...
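    The collision update described above (masses from objective values, stationary and moving halves, a coefficient of restitution decreasing to zero) can be sketched compactly. The random factors and exact update follow common descriptions of CBO and may differ in detail from the book's formulation; the test problem and all constants are illustrative.

```python
import numpy as np

def cbo_minimize(f, bounds, n_agents=40, iters=100, seed=0):
    """Minimal Colliding Bodies Optimization (CBO) sketch.

    Assumes a nonnegative objective (mass = 1/objective). Sorted
    population: better half stationary, worse half moving; each moving
    body collides with its stationary partner.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_agents, dim))
    best_x, best_f = None, np.inf
    for it in range(iters):
        fit = np.apply_along_axis(f, 1, x)
        order = np.argsort(fit)
        x, fit = x[order], fit[order]
        if fit[0] < best_f:
            best_f, best_x = float(fit[0]), x[0].copy()
        m = 1.0 / (fit + 1e-12)                 # heavier body = better agent
        eps = 1.0 - it / iters                  # restitution decays to zero
        half = n_agents // 2
        new_x = x.copy()
        for k in range(half):
            i, j = k + half, k                  # moving body, stationary pair
            v = x[j] - x[i]                     # pre-collision velocity
            v_mov = (m[i] - eps * m[j]) / (m[i] + m[j]) * v
            v_sta = (m[i] + eps * m[i]) / (m[i] + m[j]) * v
            new_x[i] = x[j] + rng.uniform(-1, 1, dim) * v_mov
            new_x[j] = x[j] + rng.uniform(-1, 1, dim) * v_sta
        x = np.clip(new_x, lo, hi)
    return best_x, best_f

best_x, best_f = cbo_minimize(lambda z: float(np.sum(z * z)),
                              (np.full(2, -5.0), np.full(2, 5.0)))
```

    Note the parameter independence the abstract highlights: apart from population size and iteration budget, no tuning constants appear in the update itself.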

  11. Frontiers in Optimization : Theory and Applications

    CERN Document Server

    Maulik, Ujjwal; Li, Xiang; FOTA 2016; Operations Research and Optimization

    2018-01-01

    This book discusses recent developments in the vast domain of optimization. Featuring papers presented at the 1st International Conference on Frontiers in Optimization: Theory and Applications (FOTA 2016), held at the Heritage Institute of Technology, Kolkata, on 24–26 December 2016, it opens new avenues of research in all topics related to optimization, such as linear and nonlinear optimization; combinatorial-, stochastic-, dynamic-, fuzzy-, and uncertain optimization; optimal control theory; as well as multi-objective, evolutionary and convex optimization and their applications in intelligent information and technology, systems science, knowledge management, information and communication, supply chain and inventory control, scheduling, networks, transportation and logistics and finance. The book is a valuable resource for researchers, scientists and engineers from both academia and industry.

  12. Optimal margin and edge-enhanced intensity maps in the presence of motion and uncertainty

    International Nuclear Information System (INIS)

    Chan, Timothy C Y; Tsitsiklis, John N; Bortfeld, Thomas

    2010-01-01

    In radiation therapy, intensity maps involving margins have long been used to counteract the effects of dose blurring arising from motion. More recently, intensity maps with increased intensity near the edge of the tumour (edge enhancements) have been studied to evaluate their ability to offset similar effects that affect tumour coverage. In this paper, we present a mathematical methodology to derive margin and edge-enhanced intensity maps that aim to provide tumour coverage while delivering minimum total dose. We show that if the tumour is at most about twice as large as the standard deviation of the blurring distribution, the optimal intensity map is a pure scaling increase of the static intensity map without any margins or edge enhancements. Otherwise, if the tumour size is roughly twice (or more) the standard deviation of motion, then margins and edge enhancements are preferred, and we present formulae to calculate the exact dimensions of these intensity maps. Furthermore, we extend our analysis to include scenarios where the parameters of the motion distribution are not known with certainty, but rather can take any value in some range. In these cases, we derive a similar threshold to determine the structure of an optimal margin intensity map.
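    The blurring model behind this analysis, delivered dose as the convolution of a static intensity map with the motion distribution, can be illustrated in 1-D. The grid, blur width and margin size below are illustrative choices, not the paper's parameters.

```python
import numpy as np

# 1-D illustration: motion blurs a static intensity map by convolution
# with a Gaussian; the tumour edge is then under-dosed, and a margin
# restores most of the coverage at the cost of extra integral dose.
dx = 0.1
xs = np.arange(-10.0, 10.0, dx)
sigma = 1.0                                   # std. dev. of the motion blur
tumour = np.abs(xs) <= 3.0                    # tumour wider than ~2*sigma
kernel = np.exp(-0.5 * (xs / sigma) ** 2)
kernel /= kernel.sum()

def delivered(intensity):
    """Motion-averaged (blurred) dose from a static intensity map."""
    return np.convolve(intensity, kernel, mode="same")

static = tumour.astype(float)                 # unit intensity on tumour only
margin = (np.abs(xs) <= 3.0 + 1.5 * sigma).astype(float)

cov_static = delivered(static)[tumour].min()  # tumour edge under-dosed
cov_margin = delivered(margin)[tumour].min()  # coverage largely restored
```

    Comparing the minimum in-tumour dose of a margin map against a purely scaled static map, as a function of tumour size relative to sigma, reproduces the kind of threshold analysis the abstract derives.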

  13. Map-Based Power-Split Strategy Design with Predictive Performance Optimization for Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jixiang Fan

    2015-09-01

    In this paper, a map-based optimal energy management strategy is proposed to improve the energy consumption economy of a plug-in parallel hybrid electric vehicle. In the design of the maps, which provide both the torque split between engine and motor and the gear shift, not only the current vehicle speed and power demand, but also the optimality based on the predicted trajectory of the vehicle dynamics, are considered. To seek this optimality, the equivalent consumption, which trades off fuel and electricity usage, is chosen as the cost function. Moreover, in order to decrease model errors in the optimization conducted in the discrete time domain, a variational integrator is employed to calculate the evolution of the vehicle dynamics. To evaluate the proposed energy management strategy, simulation results obtained on a professional GT-Suit simulator are presented, and a comparison with a real-time optimization method is also given to show the advantage of the proposed off-line optimization approach.

  14. Nonlinear optimization of the modern synchrotron radiation storage ring based on frequency map analysis

    International Nuclear Information System (INIS)

    Tian Shunqiang; Liu Guimin; Hou Jie; Chen Guangling; Wan Chenglan; Li Haohu

    2009-01-01

    In this paper, we present a rule to improve the nonlinear solution with frequency map analysis (FMA), without frequently revisiting the optimization algorithm. Two aspects of FMA are emphasized. The first is the tune shift with amplitude, which can be used to improve the solution of the harmonic sextupoles and thus obtain a large dynamic aperture. The second is the tune diffusion rate, which can be used to select a quiet tune. These ideas are applied to the storage ring of the Shanghai Synchrotron Radiation Facility (SSRF), and the detailed procedures, as well as improved solutions, are presented in this paper. The nonlinear behavior of off-momentum particles is also discussed. (authors)
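    The tune-diffusion-rate idea can be sketched on a toy area-preserving (Henon-like) map: estimate the tune from the first and second halves of a tracking run and take the logarithm of their difference. A plain FFT peak is used here for the tune; real FMA uses refined NAFF-type frequency estimates, and the map and parameters are illustrative.

```python
import numpy as np

def tune(x, p):
    """Betatron tune estimate: peak of the FFT spectrum of x + i*p."""
    z = x + 1j * p
    spec = np.abs(np.fft.fft(z - z.mean()))
    return abs(np.fft.fftfreq(len(z))[np.argmax(spec)])

def diffusion_rate(x0, p0, mu=2 * np.pi * 0.205, n=512):
    """log10 of the tune change between the two halves of a tracking run
    through the quadratic area-preserving map
        x' = x cos(mu) - (p - x^2) sin(mu)
        p' = x sin(mu) + (p - x^2) cos(mu).
    """
    xs, ps = np.empty(2 * n), np.empty(2 * n)
    x, p = x0, p0
    for i in range(2 * n):
        xs[i], ps[i] = x, p
        x, p = (x * np.cos(mu) - (p - x * x) * np.sin(mu),
                x * np.sin(mu) + (p - x * x) * np.cos(mu))
    nu1 = tune(xs[:n], ps[:n])
    nu2 = tune(xs[n:], ps[n:])
    return np.log10(abs(nu1 - nu2) + 1e-16)

# Small-amplitude orbits are regular, so the tune barely drifts and the
# diffusion rate is very low; chaotic orbits would show a higher rate.
d_core = diffusion_rate(0.01, 0.0)
```

    Painting such rates over a grid of initial amplitudes gives the frequency map from which quiet working points are selected.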

  15. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    Science.gov (United States)

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T1 determination using TAPIR, a Look-Locker-based fast T1 mapping technique. The relations between the various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T1 determination with TAPIR. An effective remedy is demonstrated in which the measurement protocol is extended to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.

  16. Optimization of Thermal Aspects of Friction Stir Welding – Initial Studies Using a Space Mapping Technique

    DEFF Research Database (Denmark)

    Larsen, Anders Astrup; Bendsøe, Martin P.; Schmidt, Henrik Nikolaj Blicher

    2007-01-01

    The aim of this paper is to optimize a thermal model of a friction stir welding process. The optimization is performed using a space mapping technique in which an analytical model is used along with the FEM model to be optimized. The results are compared to traditional gradient-based optimization...
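    A space-mapping iteration can be sketched in one dimension: extract the coarse-model parameter that reproduces the fine-model response at the current iterate, then correct the iterate using the coarse optimum. The quadratic models below are illustrative stand-ins for the paper's FEM (fine) and analytical (coarse) thermal models.

```python
import numpy as np

def fine(x):                     # pretend this is one costly FEM run
    return (x - 2.2) ** 2 + 0.05

def coarse(z):                   # cheap analytical surrogate
    return (z - 2.0) ** 2

def dnum(f, x, h=1e-5):          # central-difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

zgrid = np.linspace(0.0, 5.0, 5001)
z_star = zgrid[np.argmin(coarse(zgrid))]       # coarse-model optimum

x = float(z_star)                              # initial fine-model guess
for _ in range(10):
    # Parameter extraction: find the coarse parameter whose response AND
    # slope match the fine model at the current iterate (slope matching
    # picks the right branch of the coarse model).
    resid = ((coarse(zgrid) - fine(x)) ** 2
             + (dnum(coarse, zgrid) - dnum(fine, x)) ** 2)
    z_of_x = zgrid[np.argmin(resid)]
    x += z_star - z_of_x                       # space-mapping correction step
```

    Each iteration costs one fine-model evaluation plus cheap coarse-model sweeps, which is the appeal when the fine model is an expensive FEM simulation; here the iterate settles at the fine optimum near 2.2.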

  17. Variational analysis of regular mappings theory and applications

    CERN Document Server

    Ioffe, Alexander D

    2017-01-01

    This monograph offers the first systematic account of (metric) regularity theory in variational analysis. It presents new developments alongside classical results and demonstrates the power of the theory through applications to various problems in analysis and optimization theory. The origins of metric regularity theory can be traced back to a series of fundamental ideas and results of nonlinear functional analysis and global analysis centered around problems of existence and stability of solutions of nonlinear equations. In variational analysis, regularity theory goes far beyond the classical setting and is also concerned with non-differentiable and multi-valued operators. The present volume explores all basic aspects of the theory, from the most general problems for mappings between metric spaces to those connected with fairly concrete and important classes of operators acting in Banach and finite dimensional spaces. Written by a leading expert in the field, the book covers new and powerful techniques, whic...

  18. Application of Google Maps API service for creating web map of information retrieved from CORINE land cover databases

    Directory of Open Access Journals (Sweden)

    Kilibarda Milan

    2010-01-01

    Full Text Available Today, the Google Maps API, an Ajax-based standard web service, enables users to publish interactive web maps, opening new possibilities compared with classical analogue maps. CORINE land cover databases are recognized as fundamental reference data sets for numerous spatial analyses. The theoretical and practical aspects of the Google Maps API cartographic service are considered in a case study: creating a web map of changes in urban areas in Belgrade and its surroundings from 2000 to 2006, obtained from CORINE databases.

  19. Global optimization of cyclic Kannan nonexpansive mappings in ...

    African Journals Online (AJOL)

    As an application of the existence theorem, we conclude an old fixed point problem in Banach spaces which are not reflexive necessarily. Examples are given to support the usability of our main conclusions. Keywords: Best proximity point, fixed point, cyclic Kannan nonexpansive mapping, T-uniformly semi-normal structure, ...

  20. Optimization strategies for complex engineering applications

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, M.S.

    1998-02-01

    LDRD research activities have focused on increasing the robustness and efficiency of optimization studies for computationally complex engineering problems. Engineering applications can be characterized by extreme computational expense, lack of gradient information, discrete parameters, non-converging simulations, and nonsmooth, multimodal, and discontinuous response variations. Guided by these challenges, the LDRD research activities have developed application-specific techniques, fundamental optimization algorithms, multilevel hybrid and sequential approximate optimization strategies, parallel processing approaches, and automatic differentiation and adjoint augmentation methods. This report surveys these activities and summarizes the key findings and recommendations.

  1. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2017-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  2. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  3. Application of artificial neural network to predict the optimal start time for heating system in building

    International Nuclear Information System (INIS)

    Yang, In-Ho; Yeo, Myoung-Souk; Kim, Kwang-Woo

    2003-01-01

    The artificial neural network (ANN) approach is a generic technique for mapping non-linear relationships between inputs and outputs without knowing the details of these relationships. This paper presents an application of the ANN in a building control system. The objective of this study is to develop an optimized ANN model to determine the optimal start time for a heating system in a building. For this, programs for predicting the room air temperature and for training the ANN model based on back-propagation learning were developed, and learning data for various building conditions were collected through simulations predicting the room air temperature using systems of experimental design. The optimized ANN model was then obtained through training, and its performance in determining the optimal start time was evaluated.

  4. An optimal strategy for functional mapping of dynamic trait loci.

    Science.gov (United States)

    Jin, Tianbo; Li, Jiahan; Guo, Ying; Zhou, Xiaojing; Yang, Runqing; Wu, Rongling

    2010-02-01

    As an emerging powerful approach for mapping quantitative trait loci (QTLs) responsible for dynamic traits, functional mapping models the time-dependent mean vector with biologically meaningful equations and is likely to generate biologically relevant and interpretable results. Given the autocorrelated nature of a dynamic trait, functional mapping requires models for the structure of the covariance matrix. In this article, we provide a comprehensive set of approaches for modelling the covariance structure and incorporate each of these approaches into the framework of functional mapping. Bayesian information criterion (BIC) values are used as a model selection criterion to choose the optimal combination of submodels for the mean vector and covariance structure. In an example of leaf age growth from a rice molecular genetic project, the best submodel combination was found to be the Gaussian model for the correlation structure, a power equation of order 1 for the variance, and the power curve for the mean vector. Under this combination, several significant QTLs for leaf age growth trajectories were detected on different chromosomes. Our model is well suited to studying the genetic architecture of dynamic traits of agricultural value.

  5. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, the practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs)—each with multiple Total Maximum Daily Load (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met at the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach a near-optimal solution. The best solution obtained among all the GA executions had a minimized cost of 67.7 million—marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility of this approach for decision making in large-scale watershed simulation-optimization formulations.

  6. Applications of polynomial optimization in financial risk investment

    Science.gov (United States)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Recently, polynomial optimization has many important applications in optimization, financial economics and eigenvalues of tensor, etc. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve the specific cases. The results show that polynomial optimization is effective for some financial optimization problems.

  7. Usability analysis of indoor map application in a shopping centre

    Science.gov (United States)

    Dewi, R. S.; Hadi, R. K.

    2018-04-01

    Although indoor navigation is still new in Indonesia, its future development is very promising. Like its outdoor counterpart, indoor navigation technology provides several important functions to support route and landmark finding. Furthermore, indoor navigation can also support public safety, especially during disaster evacuation in a building. Indoor navigation technologies are commonly built as applications that users access via smartphones, tablets, or personal computers. Therefore, a usability analysis is important to ensure that indoor navigation applications can be operated by users with full functionality. Among the indoor map applications available on the market, this study chose to analyse indoor Google Maps due to its availability and popularity in Indonesia. The experiments to test indoor Google Maps were conducted in one of the biggest shopping centre buildings in Surabaya, Indonesia. Usability was measured using the System Usability Scale (SUS) questionnaire. The results showed that the SUS score of indoor Google Maps was below the average score of other mobile applications, indicating that users still had considerable difficulty operating and learning the features of indoor Google Maps.

  8. Possibilities of contactless control of web map applications by sight

    Directory of Open Access Journals (Sweden)

    Rostislav Netek

    2012-03-01

    Full Text Available This paper assesses the possibilities of a new approach to controlling map applications on screen without a locomotive system. There is a project on the usability of eye tracking systems in the geoinformatics and cartographic fields at the Department of Geoinformatics at Palacky University. An eye tracking system is a device for measuring eye/gaze position and movement ("where we are looking"). There are a number of methods and outputs, but the most common are "heat maps" of intensity and/or time. This method was used in the first part, in which a number of common web map portals were analyzed, especially the distribution of their tools and functions on the screen. The aim of the research is to localize, via heat maps, the best distribution of control tools for map movement (the "pan" function). This can reveal how sensitive people are to the placement of control tools across different web pages and platforms. It is instructive to compare accurate survey data with personal interpretation and knowledge. Based on these results, the next step is the design of control tools commanded by the eye tracking device. Rectangular areas of interest (AOI) were placed on the edges of the map, each with a function that has a defined time delay. When the user fixates on one of these areas, the map automatically moves toward the edge on which the area is located, and the time delay prevents accidental movement. The technology for recording eye movements on the screen offers this option: if the layout and function controls of the map are properly defined, the two systems need only be connected. At this moment there is a technical constraint. The movement control solution is based on real-time data transmission between the eye-tracking device output and a converter, and real-time transfer is not supported by every device from SMI (SensoMotoric Instruments). More precisely it is the problem of money, because eye-tracking device and every

  9. Analysis and Optimization of Mixed-Criticality Applications on Partitioned Distributed Architectures

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Marinescu, S. O.; Pop, Paul

    2012-01-01

    ...within predefined time slots, allocated on each processor. At the communication level, TTEthernet uses the concept of virtual links for the separation of mixed-criticality messages. TTEthernet integrates three types of traffic: Time-Triggered (TT) messages, transmitted based on schedule tables, Rate Constrained (RC) messages, transmitted if there are no TT messages, and Best Effort (BE) messages. We assume that applications are scheduled using Static Cyclic Scheduling (SCS) or Fixed-Priority Preemptive Scheduling (FPS). We are interested in analysis and optimization methods and tools which decide the mapping of tasks to PEs, the sequence and length of the time partitions on each PE, and the schedule tables of the SCS tasks and TT messages, such that the applications are schedulable and the response times of FPS tasks and RC messages are minimized. We have proposed a Tabu Search-based meta...

  10. Electromagnetic Problems Solving by Conformal Mapping: A Mathematical Operator for Optimization

    Directory of Open Access Journals (Sweden)

    Wesley Pacheco Calixto

    2010-01-01

    Full Text Available Because it modifies only the geometry of a polygonal structure while preserving its physical magnitudes, conformal mapping is an exceptional tool for solving electromagnetism problems with known boundary conditions. This work introduces a newly developed mathematical operator based on polynomial extrapolation. This operator can accelerate an optimization method applied in conformal mappings to determine the equipotential lines, the field lines, the capacitance, and the permeance of some polygonal-geometry electrical devices with an inner dielectric of permittivity ε. The results obtained in this work are compared with other simulations performed by the finite element method software Flux 2D.

  11. Nonreference Medical Image Edge Map Measure

    Directory of Open Access Journals (Sweden)

    Karen Panetta

    2014-01-01

    Full Text Available Edge detection is a key step in medical image processing. It is widely used to extract features, perform segmentation, and further assist in diagnosis. A poor quality edge map can result in false alarms and misses in cancer detection algorithms. Therefore, it is necessary to have a reliable edge measure to assist in selecting the optimal edge map. Existing reference based edge measures require a ground truth edge map to evaluate the similarity between the generated edge map and the ground truth. However, the ground truth images are not available for medical images. Therefore, a nonreference edge measure is ideal for medical image processing applications. In this paper, a nonreference reconstruction based edge map evaluation (NREM is proposed. The theoretical basis is that a good edge map keeps the structure and details of the original image thus would yield a good reconstructed image. The NREM is based on comparing the similarity between the reconstructed image with the original image using this concept. The edge measure is used for selecting the optimal edge detection algorithm and optimal parameters for the algorithm. Experimental results show that the quantitative evaluations given by the edge measure have good correlations with human visual analysis.

  12. Space-mapping techniques applied to the optimization of a safety isolating transformer

    NARCIS (Netherlands)

    T.V. Tran; S. Brisset; D. Echeverria (David); D.J.P. Lahaye (Domenico); P. Brochet

    2007-01-01

    Space-mapping optimization techniques align low-fidelity and high-fidelity models in order to reduce computational time and increase the accuracy of the solution. The main idea is to build an approximate model from the difference in response between the two models. Therefore...

  13. Local search for optimal global map generation using mid-decadal landsat images

    Science.gov (United States)

    Khatib, L.; Gasch, J.; Morris, Robert; Covington, S.

    2007-01-01

    NASA and the US Geological Survey (USGS) are seeking to generate a map of the entire globe using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor data from the "mid-decadal" period of 2004 through 2006. The global map comprises thousands of scene locations and, for each location, tens of different images of varying quality to choose from. Furthermore, it is desirable for images of adjacent scenes to be close together in time of acquisition, to avoid obvious discontinuities due to seasonal changes. These characteristics make it desirable to formulate an automated solution to the problem of generating the complete map. This paper formulates the Global Map Generator problem as a Constraint Optimization Problem (GMG-COP) and describes an approach to solving it using local search. Preliminary results of running the algorithm on image data sets are summarized. The results suggest a significant improvement in map quality using constraint-based solutions. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
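A minimal version of such a local search can be sketched as follows. This is an illustrative hill-climbing sketch only, not the GMG-COP system: the cost model (per-image quality penalty plus acquisition-date mismatch between adjacent scenes) and all names are assumptions for the example.

```python
# Hedged sketch of local search over per-scene image choices.
# Each scene has candidate images given as (quality penalty, day-of-year);
# the cost rewards good images and similar acquisition dates for neighbours.

def cost(choice, candidates):
    """Quality penalty per scene plus seasonal mismatch between neighbours."""
    quality = sum(candidates[s][choice[s]][0] for s in range(len(choice)))
    mismatch = sum(abs(candidates[s][choice[s]][1] -
                       candidates[s + 1][choice[s + 1]][1])
                   for s in range(len(choice) - 1))
    return quality + mismatch

def local_search(candidates):
    """Greedy hill climbing: accept any single-scene swap that lowers cost."""
    choice = [0] * len(candidates)        # start from first candidate everywhere
    improved = True
    while improved:
        improved = False
        for s in range(len(candidates)):
            for i in range(len(candidates[s])):
                trial = choice[:]
                trial[s] = i
                if cost(trial, candidates) < cost(choice, candidates):
                    choice, improved = trial, True
    return choice

# Hypothetical data: three adjacent scenes, two candidate images each.
scenes = [
    [(5, 100), (1, 200)],
    [(1, 210), (4, 110)],
    [(2, 205), (0, 20)],
]
best = local_search(scenes)               # picks images with close dates
```

The search converges quickly on small instances; the real problem adds constraints such as cloud cover thresholds and a 2D grid of scene adjacencies.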

  14. Calculating Water Thermodynamics in the Binding Site of Proteins - Applications of WaterMap to Drug Discovery.

    Science.gov (United States)

    Cappel, Daniel; Sherman, Woody; Beuming, Thijs

    2017-01-01

    The ability to accurately characterize the solvation properties (water locations and thermodynamics) of biomolecules is of great importance to drug discovery. While crystallography, NMR, and other experimental techniques can assist in determining the structure of water networks in proteins and protein-ligand complexes, most water molecules are not fully resolved and accurately placed. Furthermore, understanding the energetic effects of solvation and desolvation on binding requires an analysis of the thermodynamic properties of solvent involved in the interaction between ligands and proteins. WaterMap is a molecular dynamics-based computational method that uses statistical mechanics to describe the thermodynamic properties (entropy, enthalpy, and free energy) of water molecules at the surface of proteins. This method can be used to assess the solvent contributions to ligand binding affinity and to guide lead optimization. In this review, we provide a comprehensive summary of published uses of WaterMap, including applications to lead optimization, virtual screening, selectivity analysis, ligand pose prediction, and druggability assessment. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  15. Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map

    Directory of Open Access Journals (Sweden)

    Sen Han

    2016-06-01

    Full Text Available Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and objects in the scene. Using the depth map combined with existing multiview images, it is efficient to synthesize images at any virtual viewpoint position, enabling more realistic 3D scenes to be displayed. Unlike a conventional 2D texture image, the depth map contains a lot of spatially redundant information that is not necessary for view synthesis but may waste compressed bits, especially when MD coding is used for robust transmission. In this paper, we focus on redundancy removal for MD coding in the DCT (discrete cosine transform) domain. In view of the characteristics of DCT coefficients, at the encoder a Lagrangian optimization approach is designed to determine how many high-frequency coefficients in the DCT domain to remove. To keep computational complexity low, entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. The experimental results show that, compared to the corresponding scheme, the proposed method demonstrates better central and side rate-distortion performance.
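The Lagrangian selection of how many high-frequency coefficients to drop can be sketched in a few lines. This is an illustrative reconstruction of the idea only: the coefficient values, quantization step, and lambda are made up, and the real method operates on full 2D DCT blocks of the depth map.

```python
import math
from collections import Counter

# Hedged sketch (not the paper's codec): choose a cutoff for trailing
# high-frequency DCT coefficients by minimizing D + lambda * R, where the
# rate R is estimated from the entropy of the kept quantized coefficients.

def entropy_bits(symbols):
    """Estimated total bits: empirical entropy per symbol times symbol count."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) * n

def best_cutoff(coeffs, lam=1.0, q=4):
    """Return how many leading coefficients to keep (rest are zeroed)."""
    best = None
    for keep in range(1, len(coeffs) + 1):
        dropped = coeffs[keep:]
        distortion = sum(c * c for c in dropped)     # squared error of zeroing
        kept_q = [round(c / q) for c in coeffs[:keep]]
        rate = entropy_bits(kept_q)
        cost = distortion + lam * rate
        if best is None or cost < best[1]:
            best = (keep, cost)
    return best[0]

# Hypothetical 1D DCT coefficients, largest (lowest frequency) first.
coeffs = [120.0, 35.0, -18.0, 6.0, 2.0, 1.0, -0.5, 0.2]
keep = best_cutoff(coeffs)
```

Raising `lam` makes rate more expensive and pushes the cutoff earlier, mirroring the trade-off the encoder tunes.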

  16. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT

    Directory of Open Access Journals (Sweden)

    Xiaohua Nie

    2017-01-01

    Full Text Available The Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed than the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of "premature convergence", that is, the possibility of becoming trapped in a local optimum when dealing with nonlinear optimization problems with a large number of local extreme values. To surmount the shortcomings of CSO, the Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, the Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of the CSO algorithm, which tends to fall into local optima in its later stages. The CQCSO algorithm is then obtained by introducing a tent map for jumping out of local optima. Secondly, CQCSO has been applied in simulations of five different test functions, showing higher accuracy and lower time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform are established, and a global maximum power point tracking control strategy is achieved with the CQCSO algorithm, whose effectiveness and efficiency have been verified by both simulation and experiment.
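The tent map ingredient is easy to sketch. The following is a hedged illustration, not the authors' implementation: only the tent map recurrence comes from the abstract, while the function names and the restart scheme are assumptions.

```python
# Illustrative sketch only (not the paper's code): a tent map chaotic
# sequence, as used by CQCSO-style methods to perturb stagnated candidates
# out of local optima. Note: in floating point, long tent-map orbits
# eventually collapse to 0, so practical implementations re-seed the iterate.

def tent_map_sequence(x0, n):
    """Return n iterates of the tent map x -> 2x (x < 0.5), 2(1 - x) otherwise."""
    xs, x = [], x0
    for _ in range(n):
        x = 2 * x if x < 0.5 else 2 * (1 - x)
        xs.append(x)
    return xs

def chaotic_restart(dim, lo, hi, chaos):
    """Map one chaotic value per dimension into the search interval [lo, hi]."""
    return [lo + c * (hi - lo) for c in chaos[:dim]]

seq = tent_map_sequence(0.37, 25)              # short orbit over (0, 1)
candidate = chaotic_restart(3, -5.0, 5.0, seq)  # chaotic point in [-5, 5]^3
```

In the full algorithm, a restart like this replaces a cat whose fitness has stagnated, which is what lets the swarm escape local optima.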

  18. Drinking Water Mapping Application (DWMA) - Public Version

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Drinking Water Mapping Application (DWMA) is a web-based geographic information system (GIS) that enhances the capabilities to identify major contaminant risks...

  19. Mapping PetaSHA Applications to TeraGrid Architectures

    Science.gov (United States)

    Cui, Y.; Moore, R.; Olsen, K.; Zhu, J.; Dalguer, L. A.; Day, S.; Cruz-Atienza, V.; Maechling, P.; Jordan, T.

    2007-12-01

    ...accomplishments using the optimized code include the M7.8 ShakeOut rupture scenario, as part of the southern San Andreas Fault evaluation SoSAFE. The ShakeOut simulation domain is the same as that used for the SCEC TeraShake simulations (600 km by 300 km by 80 km). However, the higher resolution of 100 m, with frequency content up to 1 Hz, required 14.4 billion grid points, eight times more than the TeraShake scenarios. The simulation used 2000 TACC Dell Linux Lonestar processors and took 56 hours to compute 240 seconds of wave propagation. The pre-processing input partition, as well as the post-processing analysis, was performed on the SDSC IBM DataStar p655 and p690. In addition, as part of the SCEC DynaShake computational platform, the SGSN capability was used to model dynamic rupture propagation for the ShakeOut scenario matching the proposed surface slip and size of the event. Mapping applications to different architectures requires coordination of many areas of expertise at the hardware and application levels, an outstanding challenge in the current petascale computing effort. We believe our techniques, as well as distributed data management through data grids, have provided a practical example of how to use multiple compute resources effectively, and our results will benefit other geoscience disciplines as well.

  20. Minimal invasive epicardial lead implantation: optimizing cardiac resynchronization with a new mapping device for epicardial lead placement.

    Science.gov (United States)

    Maessen, J G; Phelps, B; Dekker, A L A J; Dijkman, B

    2004-05-01

    To optimize resynchronization in biventricular pacing with epicardial leads, mapping to determine the best pacing site is a prerequisite. A port-access surgical mapping technique was developed that allowed multiple pacing site selection and reproducible lead evaluation and implantation. Pressure-volume loop analysis was used for real-time guidance in targeting epicardial lead placement. Even the smallest changes in lead position revealed significantly different functional results. Optimizing the pacing site with this technique allowed functional improvement of up to 40% versus random pacing site selection.

  1. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    Science.gov (United States)

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K; Schad, Lothar R; Zöllner, Frank Gerrit

    2015-01-01

    Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.

  2. Optimization and analysis of large chemical kinetic mechanisms using the solution mapping method - Combustion of methane

    Science.gov (United States)

    Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.

    1992-01-01

    A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
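The core mechanics of solution mapping can be sketched in a few lines: run the expensive model at a handful of designed points, fit a simple algebraic surrogate to the responses, and optimize the surrogate instead. This is a minimal one-parameter illustration under assumed names and a made-up stand-in response, not the paper's kinetics code.

```python
import math

# Hedged sketch of the solution-mapping idea. The "expensive model" is a
# hypothetical stand-in for a costly combustion simulation; a quadratic
# surrogate is fitted through three design points and minimized in closed form.

def expensive_model(k):
    """Hypothetical response as a function of one rate parameter k."""
    return math.exp((k - 1.3) ** 2)

def fit_quadratic(points):
    """Fit y = a*k^2 + b*k + c exactly through three (k, y) design points."""
    (k0, y0), (k1, y1), (k2, y2) = points
    d1 = (y1 - y0) / (k1 - k0)            # divided differences
    d2 = (y2 - y1) / (k2 - k1)
    a = (d2 - d1) / (k2 - k0)
    b = d1 - a * (k0 + k1)
    c = y0 - a * k0 ** 2 - b * k0
    return a, b, c

design = [0.0, 1.0, 2.0]                  # designed "computer experiments"
runs = [(k, expensive_model(k)) for k in design]
a, b, c = fit_quadratic(runs)
k_star = -b / (2 * a)                     # minimizer of the surrogate
```

With a truly quadratic response the surrogate minimizer would be exact; here it lands near the true optimum at 1.3, and in practice the fit/optimize cycle is repeated over many parameters and data sets jointly.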

  3. Common Fixed Points of Mappings and Set-Valued Mappings in Symmetric Spaces with Application to Probabilistic Spaces

    OpenAIRE

    M. Aamri; A. Bassou; S. Bennani; D. El Moutawakil

    2007-01-01

    The main purpose of this paper is to give some common fixed point theorems of mappings and set-valued mappings of a symmetric space with some applications to probabilistic spaces. In order to get these results, we define the concept of E-weak compatibility between set-valued and single-valued mappings of a symmetric space.

  4. Tuning of PID controller for an automatic regulator voltage system using chaotic optimization approach

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos

    2009-01-01

    Despite their popularity, the tuning of proportional-integral-derivative (PID) controllers remains a challenge for researchers and plant operators. Various controller tuning methodologies have been proposed in the literature, such as auto-tuning, self-tuning, pattern recognition, artificial intelligence, and optimization methods. Chaotic optimization algorithms, an emergent class of global optimization methods, have attracted much attention in engineering applications. With easy implementation, short execution times, and robust mechanisms for escaping local optima, chaotic optimization is a promising tool for engineering applications. In this paper, a tuning method for determining the parameters of PID control for an automatic regulator voltage (AVR) system using a chaotic optimization approach based on the Lozi map is proposed. Since chaotic mapping enjoys certainty, ergodicity, and the stochastic property, the proposed chaotic optimization uses Lozi map chaotic sequences, which increase its convergence rate and resulting precision. Simulation results are promising and show the effectiveness of the proposed approach. Numerical simulations of the proposed PID control of an AVR system for nominal system parameters and a step reference voltage input demonstrate the good performance of chaotic optimization.
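As a sketch of the ingredient only (not the paper's implementation), a chaotic search sequence can be generated from the Lozi map and rescaled into a controller gain range; a = 1.7, b = 0.5 is the classical chaotic parameter choice, and the gain bounds below are hypothetical.

```python
# Hedged sketch: Lozi map chaotic sequence rescaled into a PID gain interval.
# a = 1.7, b = 0.5 gives the well-known Lozi strange attractor; the gain
# bounds and variable names are illustrative assumptions.

def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    """Iterate x_{k+1} = 1 - a*|x_k| + y_k, y_{k+1} = b*x_k; return the x's."""
    xs, x, y = [], x0, y0
    for _ in range(n):
        x, y = 1 - a * abs(x) + y, b * x
        xs.append(x)
    return xs

def rescale(xs, lo, hi):
    """Min-max normalize the chaotic iterates into a search interval [lo, hi]."""
    x_min, x_max = min(xs), max(xs)
    return [lo + (x - x_min) / (x_max - x_min) * (hi - lo) for x in xs]

kp_candidates = rescale(lozi_sequence(50), 0.0, 2.0)   # candidate Kp gains
```

In a chaotic optimization loop, each candidate gain set is evaluated against a closed-loop performance index of the AVR model, and the ergodic coverage of the sequence is what replaces random sampling.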

  5. Optimizing Crawler4j using MapReduce Programming Model

    Science.gov (United States)

    Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.

    2017-06-01

    The World Wide Web is a decentralized system consisting of a repository of information in the form of web pages, which act as a source of data for today's analytics. Web crawlers extract useful information from web pages for different purposes. First, they are used in web search engines, where pages are indexed to form a corpus of information that users can query. Second, they are used for web archiving, where pages are stored for later analysis. Third, they can be used for web mining, where pages are monitored for copyright purposes. The amount of information a web crawler can process needs to be improved by exploiting modern parallel processing technologies. To address the parallelism and throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model, parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages it visits. Coupling Crawler4j with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements in performance and throughput, carving out a new methodology for optimizing web crawling.
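    The map/reduce decomposition the record describes can be sketched in miniature. The real system distributes Crawler4j fetch tasks over Hadoop; here the "web" is a hypothetical in-memory dict of pages and their outgoing links, and the reducer computes an in-link count per page.

    ```python
    from collections import defaultdict

    # Toy page graph standing in for fetched pages (invented for illustration).
    WEB = {
        "a.html": ["b.html", "c.html"],
        "b.html": ["c.html"],
        "c.html": ["a.html", "b.html"],
    }

    def map_phase(url):
        """Mapper: 'fetch' one page and emit (outlink, 1) pairs."""
        return [(link, 1) for link in WEB.get(url, [])]

    def reduce_phase(pairs):
        """Reducer: sum counts per link, giving an in-link count per page."""
        counts = defaultdict(int)
        for link, n in pairs:
            counts[link] += n
        return dict(counts)

    pairs = [p for url in WEB for p in map_phase(url)]
    inlinks = reduce_phase(pairs)
    ```

    In the Hadoop version, each mapper invocation runs on a different node, which is where the throughput gain comes from; the shuffle/sort between the two phases is handled by the framework.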

  6. Comparison of different chaotic maps in particle swarm optimization algorithm for long-term cascaded hydroelectric system scheduling

    International Nuclear Information System (INIS)

    He Yaoyao; Zhou Jianzhong; Xiang Xiuqiao; Chen Heng; Qin Hui

    2009-01-01

    The goal of this paper is to present a novel chaotic particle swarm optimization (CPSO) algorithm and to compare the efficiency of three one-dimensional chaotic maps within a symmetrical region for long-term cascaded hydroelectric system scheduling. The introduced chaotic maps improve the global search capability of the CPSO algorithm. Moreover, a piecewise linear interpolation function is employed to transform all constraints into restrictions on upriver water level so that the objective function can be maximized. Numerical results and comparisons demonstrate the effectiveness and speed of the different algorithms on a practical hydro system.

  7. Constraining Influence Diagram Structure by Generative Planning: An Application to the Optimization of Oil Spill Response

    OpenAIRE

    Agosta, John Mark

    2013-01-01

    This paper works through the optimization of a real world planning problem, with a combination of a generative planning tool and an influence diagram solver. The problem is taken from an existing application in the domain of oil spill emergency response. The planning agent manages constraints that order sets of feasible equipment employment actions. This is mapped at an intermediate level of abstraction onto an influence diagram. In addition, the planner can apply a surveillance operator that...

  8. Optimized multiple linear mappings for single image super-resolution

    Science.gov (United States)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized as an effective approach to example-learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions using the reconstruction error as the metric. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m-nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method yields high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
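    The core alternation described above can be sketched in one dimension: maintain several linear regressors, assign each sample to the regressor that reconstructs it best (E-step), and refit each regressor by least squares on its cluster (M-step). The data (y = |x|) and the two-regressor setup are illustrative inventions, not the paper's image patches.

    ```python
    def fit_line(pts):
        """Least-squares line (slope, intercept) through (x, y) points."""
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        return a, (sy - a * sx) / n

    # y = |x| is exactly piecewise linear, so two regressors suffice.
    data = [(x / 10.0, abs(x / 10.0)) for x in range(-10, 11) if x != 0]
    models = [(0.9, 0.05), (-0.9, 0.05)]      # rough initial (slope, intercept)

    for _ in range(10):                        # EM-style alternation
        clusters = [[], []]
        for x, y in data:                      # E-step: assign by reconstruction error
            errs = [abs((a * x + b) - y) for a, b in models]
            clusters[errs.index(min(errs))].append((x, y))
        models = [fit_line(c) if c else m      # M-step: refit each non-empty cluster
                  for c, m in zip(clusters, models)]
    ```

    At test time the paper picks the regressor whose accumulated error over the m nearest training neighbors is smallest; in this sketch that reduces to the same per-sample error comparison used in the E-step.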

  9. Remote sensing sensors and applications in environmental resources mapping and modeling

    Science.gov (United States)

    Melesse, Assefa M.; Weng, Qihao; Thenkabail, Prasad S.; Senay, Gabriel B.

    2007-01-01

    The history of remote sensing and the development of different sensors for environmental and natural resources mapping and data acquisition are reviewed and reported. Application examples in urban studies and in hydrological modeling, such as land-cover and floodplain mapping, fractional vegetation cover and impervious surface area mapping, and surface energy flux and micro-topography correlation studies, are discussed. The review also discusses the use of remotely sensed rainfall and potential evapotranspiration for estimating a crop water requirement satisfaction index, thereby providing early warning information for growers. The review is not an exhaustive treatment of remote sensing techniques but rather a summary of some important applications in environmental studies and modeling.

  10. Optimal task mapping in safety-critical real-time parallel systems; Placement optimal de taches pour les systemes paralleles temps-reel critiques

    Energy Technology Data Exchange (ETDEWEB)

    Aussagues, Ch

    1998-12-11

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command-and-control systems found in the nuclear domain and, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution consists mainly of the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new synchronized-product operator for state-machine task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the problem of optimal task mapping on a parallel architecture such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements; these criteria are connected with operational constraints of the application domain. Our approach is based on an off-line feasibility analysis of the deadline-driven dynamic scheduling used to schedule tasks within one processor. From the synchronized product, a system of linear constraints is automatically generated, which allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The communications, the verification of their timeliness, and their incorporation into the mapping problem form the second main contribution of this thesis. Finally, the global solving technique, dealing with both task and communication aspects, has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author) 96 refs.

  11. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  12. Acoustic methods for cavitation mapping in biomedical applications

    Science.gov (United States)

    Wan, M.; Xu, S.; Ding, T.; Hu, H.; Liu, R.; Bai, C.; Lu, S.

    2015-12-01

    In recent years, cavitation has been increasingly utilized in a wide range of applications in the biomedical field. Monitoring the spatial-temporal evolution of cavitation bubbles is of great significance for efficiency and safety in biomedical applications. In this paper, several acoustic methods for cavitation mapping, proposed or modified on the basis of existing work, are presented. The proposed novel ultrasound line-by-line/plane-by-plane method can depict the distribution of cavitation bubbles with high spatial and temporal resolution and may be developed into a standard 2D/3D cavitation field mapping method. The modified ultrafast active cavitation mapping, based upon plane-wave transmission and reception together with bubble-wavelet and pulse-inversion techniques, can appreciably enhance the cavitation-to-tissue ratio and further assist in monitoring cavitation-mediated therapy with good spatial and temporal resolution. The methods presented in this paper provide a foundation for promoting the research and development of cavitation imaging in non-transparent media.

  13. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    Science.gov (United States)

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  14. An Iteration Scheme for Contraction Mappings with an Application to Synchronization of Discrete Logistic Maps

    Directory of Open Access Journals (Sweden)

    Ke Ding

    2017-01-01

    This paper deals with designing a new iteration scheme associated with a given scheme for contraction mappings. The new scheme has a structure similar to that of the given scheme, and the two iterative schemes converge to the same fixed point of the given contraction mapping. The positive influence of feedback parameters on the convergence rate of the new scheme is investigated. Moreover, the derived convergence and comparison results can be extended to nonexpansive mappings. As an application, the derived results are used to study the synchronization of logistic maps. Two illustrative examples demonstrate the effectiveness of our results.
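    The idea that a feedback-weighted variant of a base scheme reaches the same fixed point can be sketched numerically. Here the base scheme is plain Picard iteration and the variant is a Krasnoselskii-Mann-type scheme x' = (1-t)x + t*f(x), chosen for illustration (the paper's exact scheme may differ); f = cos is a contraction near its unique fixed point.

    ```python
    import math

    def picard(f, x, n):
        """Base scheme: x_{k+1} = f(x_k)."""
        for _ in range(n):
            x = f(x)
        return x

    def feedback(f, x, n, t=0.5):
        """Feedback-weighted scheme: x_{k+1} = (1-t)*x_k + t*f(x_k)."""
        for _ in range(n):
            x = (1.0 - t) * x + t * f(x)
        return x

    f = math.cos                    # fixed point x = cos(x) ~ 0.739085
    p = picard(f, 1.0, 100)
    q = feedback(f, 1.0, 100)
    ```

    Varying t changes the contraction factor of the composite map (here |g'| = |0.5 - 0.5*sin x| near the fixed point, smaller than |cos'| there), which is the kind of convergence-rate effect of feedback parameters the paper analyzes.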

  15. Optimizing Aspect-Oriented Mechanisms for Embedded Applications

    Science.gov (United States)

    Hundt, Christine; Stöhr, Daniel; Glesner, Sabine

    As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add a substantial overhead in both execution time and code size which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction of the code size. Thus, our optimizations establish the base for using advanced aspect-oriented modularization techniques for developing Java applications on small embedded devices.

  16. Application of ASTER SWIR bands in mapping anomaly pixels for Antarctic geological mapping

    International Nuclear Information System (INIS)

    Beiranvand Pour, Amin; Hashim, Mazlan; Park, Yongcheol

    2017-01-01

    Independent component analysis (ICA) was applied to the shortwave infrared (SWIR) bands of ASTER satellite data for detailed mapping of alteration mineral zones in polar environments, where little prior information is available. The Oscar II coast area of north-eastern Graham Land, Antarctic Peninsula (AP), was selected for a satellite-based remote sensing mapping approach to detect alteration mineral assemblages. Anomaly pixels in the ICA image maps related to spectral features of the Al-O-H, Fe, Mg-O-H and CO3 groups were detected using ASTER SWIR datasets. The ICA method provided image maps of alteration mineral assemblages and discriminated lithological units in regions with little available geological data (poorly mapped) or no prior geological information (unmapped) in the northern and southern sectors of the Oscar II coast area, Graham Land. The results of this investigation demonstrate the applicability of ASTER spectral data to lithological and alteration mineral mapping in poorly exposed lithologies and inaccessible regions, particularly using image processing algorithms capable of detecting anomaly-pixel targets in remotely sensed images where no prior information is available. (paper)

  17. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    Directory of Open Access Journals (Sweden)

    Jakob Nikolas Kather

    Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize the visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.

  18. Poster - 52: Smoothing constraints in Modulated Photon Radiotherapy (XMRT) fluence map optimization

    International Nuclear Information System (INIS)

    McGeachy, Philip; Villarreal-Barajas, Jose Eduardo; Zinchenko, Yuriy; Khan, Rao

    2016-01-01

    Purpose: Modulated photon radiotherapy (XMRT), which simultaneously optimizes photon beamlet energy (6 and 18 MV) and fluence, has recently shown dosimetric improvement over conventional IMRT. However, the smoothness of the resulting fluence maps (FMs), which can affect the deliverability of XMRT, has not yet been investigated. This study investigates FM smoothness and imposes smoothing constraints in the fluence map optimization. Methods: Smoothing constraints were modeled in the XMRT algorithm with the sum of positive gradients (SPG) technique. XMRT solutions, with and without SPG constraints, were generated for a clinical prostate scan using standard dosimetric prescriptions, constraints, and a seven-beam coplanar arrangement. Smoothness, with and without SPG constraints, was assessed from the absolute and relative maximum SPG scores for each fluence map. Dose-volume histograms were used to evaluate the impact on the dose distribution. Results: Imposing SPG constraints reduced the absolute and relative maximum SPG values by factors of up to 5 and 2, respectively, compared with their unconstrained counterparts, leading to a more seamless conversion of FMs to their respective MLC sequences. The improved smoothness increased organ-at-risk (OAR) dose, but the increase is not clinically significant. Conclusions: For a clinical prostate case, there was a noticeable improvement in the smoothness of the XMRT FMs when SPG constraints were applied, with a minor, clinically insignificant increase in dose to OARs.
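    The SPG score used as the smoothness metric can be sketched per fluence-map row: sum the positive steps between adjacent beamlets, counting the rise from zero at the row's edge. This per-row form is a common formulation of SPG, assumed here rather than taken from the poster.

    ```python
    def row_spg(row):
        """Sum of positive gradients along one row of beamlet fluences.
        The row is assumed to start (and end) at zero fluence."""
        spg, prev = 0.0, 0.0
        for f in row:
            spg += max(0.0, f - prev)
            prev = f
        return spg

    def map_spg(fluence_map):
        """Maximum row SPG over the map -- smoother maps score lower."""
        return max(row_spg(r) for r in fluence_map)

    smooth = [[1, 2, 3, 2, 1]]   # one rise to the peak: SPG = 3
    jagged = [[1, 3, 1, 3, 1]]   # repeated rises: SPG = 5
    ```

    Constraining this score during optimization penalizes back-and-forth fluence peaks, which is what makes the resulting maps easier to convert to MLC leaf sequences.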

  19. Poster - 52: Smoothing constraints in Modulated Photon Radiotherapy (XMRT) fluence map optimization

    Energy Technology Data Exchange (ETDEWEB)

    McGeachy, Philip; Villarreal-Barajas, Jose Eduardo; Zinchenko, Yuriy; Khan, Rao [Department of Medical Physics, CancerCare Manitoba, Winnipeg, MB, CAN, Department of Physics and Astronomy, University of Calgary, Calgary, AB, CAN, Department of Mathematics and Statistics, University of Calgary, Calgary, AB, CAN, Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO (United States)

    2016-08-15

    Purpose: Modulated photon radiotherapy (XMRT), which simultaneously optimizes photon beamlet energy (6 and 18 MV) and fluence, has recently shown dosimetric improvement over conventional IMRT. However, the smoothness of the resulting fluence maps (FMs), which can affect the deliverability of XMRT, has not yet been investigated. This study investigates FM smoothness and imposes smoothing constraints in the fluence map optimization. Methods: Smoothing constraints were modeled in the XMRT algorithm with the sum of positive gradients (SPG) technique. XMRT solutions, with and without SPG constraints, were generated for a clinical prostate scan using standard dosimetric prescriptions, constraints, and a seven-beam coplanar arrangement. Smoothness, with and without SPG constraints, was assessed from the absolute and relative maximum SPG scores for each fluence map. Dose-volume histograms were used to evaluate the impact on the dose distribution. Results: Imposing SPG constraints reduced the absolute and relative maximum SPG values by factors of up to 5 and 2, respectively, compared with their unconstrained counterparts, leading to a more seamless conversion of FMs to their respective MLC sequences. The improved smoothness increased organ-at-risk (OAR) dose, but the increase is not clinically significant. Conclusions: For a clinical prostate case, there was a noticeable improvement in the smoothness of the XMRT FMs when SPG constraints were applied, with a minor, clinically insignificant increase in dose to OARs.

  20. Applications of metaheuristic optimization algorithms in civil engineering

    CERN Document Server

    Kaveh, A

    2017-01-01

    The book presents recently developed efficient metaheuristic optimization algorithms and their applications to solving various optimization problems in civil engineering. The concepts can also be applied to optimization problems in mechanical and electrical engineering.

  1. Using Concept Map Technique in Accounting Education: Uludag University Application

    OpenAIRE

    Ertan, Yasemin; Yücel, Elif; Saraç, Mehlika

    2014-01-01

    In recent years, accounting applications have become more complicated because of growing markets and developing technology. The requirements of accounting education have therefore increased, and trying new learning techniques has become necessary. This study was prepared to measure the contribution of the concept-map technique, used in accounting lessons, to students' learning levels. In the first part of the study, the concept map technique and its applications were explaine...

  2. On the efficiency of chaos optimization algorithms for global optimization

    International Nuclear Information System (INIS)

    Yang Dixiong; Li Gang; Cheng Gengdong

    2007-01-01

    Chaos optimization algorithms, a novel approach to global optimization, have attracted much attention; until now they have all been based on the logistic map. However, we have noticed that the probability density function of chaotic sequences derived from the logistic map is of Chebyshev type, which may considerably affect the global searching capacity and computational efficiency of chaos optimization algorithms. Considering the statistical properties of the chaotic sequences of the logistic and Kent maps, an improved hybrid chaos-BFGS optimization algorithm and a Kent-map-based hybrid chaos-BFGS algorithm are proposed. Five typical nonlinear multimodal functions are used to compare the performance of five hybrid optimization algorithms: the conventional logistic-map-based chaos-BFGS algorithm, the improved logistic-map-based chaos-BFGS algorithm, the Kent-map-based chaos-BFGS algorithm, a Monte Carlo-BFGS algorithm, and a mesh-BFGS algorithm. The computational performance of the five algorithms is compared, and the numerical results lead us to question the high efficiency of chaos optimization algorithms claimed in some references. It is concluded that the efficiency of the hybrid optimization algorithms is influenced by the statistical properties of the chaotic/stochastic sequences generated by the chaotic/stochastic algorithms, and by the location of the global optimum of the nonlinear functions. In addition, it is inappropriate to advocate the high efficiency of global optimization algorithms based only on a few numerical examples of low-dimensional functions.
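    The statistical claim above is easy to check numerically: iterates of the logistic map x' = 4x(1-x) pile up near 0 and 1 (a Chebyshev-type density proportional to 1/sqrt(x(1-x))), while the Kent (skew tent) map x' = x/m for x < m, (1-x)/(1-m) otherwise, has a near-uniform density. The choices m = 0.7 and x0 = 0.123 are arbitrary illustrative values.

    ```python
    def orbit(step, x0, n, burn=100):
        """Iterate a 1-D map, discarding a burn-in transient."""
        x, out = x0, []
        for i in range(n + burn):
            x = step(x)
            if i >= burn:
                out.append(x)
        return out

    logistic = lambda x: 4.0 * x * (1.0 - x)

    def kent(x, m=0.7):
        return x / m if x < m else (1.0 - x) / (1.0 - m)

    lo = orbit(logistic, 0.123, 20000)
    ke = orbit(kent, 0.123, 20000)

    def edge(xs):
        """Fraction of iterates landing near the interval's ends."""
        return sum(1 for x in xs if x < 0.1 or x > 0.9) / len(xs)
    ```

    For the logistic map the arcsine-type invariant density puts roughly 40% of iterates in those edge bins, versus about 20% for a uniform-like Kent map, which is exactly the non-uniform sampling of the search space the paper argues can hurt chaos-based search.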

  3. An optimized formulation for Deprit-type Lie transformations of Taylor maps for symplectic systems

    International Nuclear Information System (INIS)

    Shi, Jicong

    1993-01-01

    An optimized iterative formulation is presented for directly transforming a Taylor map of a symplectic system into a Deprit-type Lie transformation, which is a composition of a linear transfer matrix and a single Lie transformation, to an arbitrary order

  4. Optimizing memory use in Java applications, garbage collectors

    Directory of Open Access Journals (Sweden)

    Ştefan PREDA

    2016-05-01

    Java applications are diverse: depending on the use case, there are applications that use small amounts of memory and applications that use huge amounts, tens or hundreds of gigabytes. The Java Virtual Machine is designed to manage memory for applications automatically. Even so, because of the diversity of hardware, of the software coexisting on the same system, and of the applications themselves, these automatic decisions need to be complemented by a developer or system administrator to reach optimal memory use. Beyond the developer's central role in writing code that allocates memory efficiently, optimizing memory use at the Java Virtual Machine and application levels has in recent years become one of the most important tasks. This is explained in particular by the increased demand for application scalability.

  5. Short-term cascaded hydroelectric system scheduling based on chaotic particle swarm optimization using improved logistic map

    Science.gov (United States)

    He, Yaoyao; Yang, Shanlin; Xu, Qifa

    2013-07-01

    To solve the model of short-term cascaded hydroelectric system scheduling, a novel chaotic particle swarm optimization (CPSO) algorithm using an improved logistic map is introduced, which uses water discharge as the decision variable combined with a death penalty function. Following the principle of maximum power generation, the proposed approach exploits the ergodicity, symmetry and stochastic properties of the improved logistic chaotic map to enhance the performance of the particle swarm optimization (PSO) algorithm. The new hybrid method has been examined and tested on two test functions and a practical cascaded hydroelectric system. The experimental results show the effectiveness and robustness of the proposed CPSO algorithm in comparison with other traditional algorithms.

  6. Mapping carbon flux uncertainty and selecting optimal locations for future flux towers in the Great Plains

    Science.gov (United States)

    Gu, Yingxin; Howard, Daniel M.; Wylie, Bruce K.; Zhang, Li

    2012-01-01

    Flux tower networks (e.g., AmeriFlux, Agriflux) provide continuous observations of ecosystem exchanges of carbon (e.g., net ecosystem exchange), water vapor (e.g., evapotranspiration), and energy between terrestrial ecosystems and the atmosphere. The long-term time series of flux tower data are essential for studying and understanding terrestrial carbon cycles, ecosystem services, and climate change. Currently, there are 13 flux towers located within the Great Plains (GP). The towers are sparsely distributed and do not adequately represent the variety of vegetation cover types, climate conditions, and geophysical and biophysical conditions in the GP. This study assessed how well the available flux towers represent the environmental conditions or "ecological envelopes" across the GP and identified optimal locations for future flux towers in the GP. Regression-based remote sensing and weather-driven net ecosystem production (NEP) models derived from different extrapolation ranges (10 and 50%) were used to identify areas where ecological conditions were poorly represented by the flux tower sites and years previously used for mapping grassland fluxes. The optimal lands suitable for future flux towers within the GP were mapped. Results from this study provide information to optimize the usefulness of future flux towers in the GP and serve as a proxy for the uncertainty of the NEP map.

  7. Criteria for the optimal selection of remote sensing optical images to map event landslides

    Science.gov (United States)

    Fiorucci, Federica; Giordan, Daniele; Santangelo, Michele; Dutto, Furio; Rossi, Mauro; Guzzetti, Fausto

    2018-01-01

    Landslides leave discernible signs on the land surface, most of which can be captured in remote sensing images. Trained geomorphologists analyse remote sensing images and map landslides through heuristic interpretation of photographic and morphological characteristics. Despite the wide use of remote sensing images for landslide mapping, no attempt has been made to evaluate how image characteristics influence landslide identification and mapping. This paper presents an experiment to determine the effects of optical image characteristics, such as spatial resolution, spectral content and image type (monoscopic or stereoscopic), on landslide mapping. We considered eight maps of the same landslide in central Italy: (i) six maps obtained through expert heuristic visual interpretation of remote sensing images, (ii) one map obtained through a reconnaissance field survey, and (iii) one map obtained through a real-time kinematic (RTK) differential global positioning system (dGPS) survey, which served as a benchmark. The eight maps were compared pairwise and against the benchmark, and the mismatch between each map pair was quantified by the error index E. Results show that the map closest to the benchmark delineation of the landslide was obtained using the highest-resolution image where the landslide signature was primarily photographic (in the landslide source and transport area). Conversely, where the landslide signature was mainly morphological (in the landslide deposit), the best mapping result was obtained using the stereoscopic images. Albeit conducted on a single landslide, the experimental results are general and provide useful information for deciding on the optimal imagery for the production of event, seasonal and multi-temporal landslide inventory maps.
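    A mismatch score of the kind used to compare map pairs can be sketched on toy boolean grids. The formula E = (union - intersection) / union of the two mapped areas (0 for identical maps, approaching 1 for disjoint ones) is a common landslide-mapping error index, assumed here rather than taken verbatim from the paper.

    ```python
    def error_index(map_a, map_b):
        """Mismatch between two binary landslide maps given as row-lists of 0/1."""
        inter = union = 0
        for row_a, row_b in zip(map_a, map_b):
            for a, b in zip(row_a, row_b):
                inter += 1 if (a and b) else 0
                union += 1 if (a or b) else 0
        return (union - inter) / union

    # Two invented 2x3 maps overlapping in the middle column.
    a = [[1, 1, 0],
         [1, 1, 0]]
    b = [[0, 1, 1],
         [0, 1, 1]]
    ```

    Here the intersection covers 2 cells and the union 6, so E = 4/6; comparing each interpreted map to the dGPS benchmark this way gives the pairwise scores the experiment is built on.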

  8. Application of Ifsar Technology in Topographic Mapping: JUPEM's Experience

    Science.gov (United States)

    Zakaria, Ahamad

    2018-05-01

    The application of Interferometric Synthetic Aperture Radar (IFSAR) to topographic mapping has increased during the past decades, owing to the advantages IFSAR technology offers in solving data acquisition problems in tropical regions. Unlike aerial photography, radar offers wave penetration through cloud cover, fog and haze; as a consequence, images can be made free of defects from such natural phenomena. In Malaysia, the Department of Survey and Mapping Malaysia (JUPEM) has been utilizing IFSAR products since 2009 to update topographic maps at the 1:50,000 scale. Orthorectified radar imagery (ORI), Digital Surface Models (DSM) and Digital Terrain Models (DTM) procured under the project have been further processed before being ingested into a revamped mapping workflow consisting of stereo and mono digitizing processes. The paper highlights the experience of the Department of Survey and Mapping Malaysia (DSMM/JUPEM) in using this technology to speed up map production.

  9. Submonotone mappings in Banach spaces and applications

    International Nuclear Information System (INIS)

    Georgiev, P.G.

    1995-11-01

    The notions of 'submonotone' and 'strictly submonotone' mapping, introduced by J. Spingarn in R^n, are extended in a natural way to arbitrary Banach spaces. Several results about monotone operators are proved for submonotone and strictly submonotone ones: Rockafellar's result about local boundedness of monotone operators; Kenderov's result about single-valuedness and upper semicontinuity almost everywhere of monotone operators in Asplund spaces; minimality (as w*-cusco mappings) of maximal strictly submonotone mappings, etc. It is shown that subdifferentials of various classes of non-convex functions, defined as pointwise suprema of quasi-differentiable functions, possess submonotone properties. Results about generic differentiability of such functions are obtained (among them new generalizations of a theorem of Ekeland and Lebourg). Applications are given to the properties of the distance function in a Banach space with uniformly Gateaux differentiable norm. (author). 29 refs

  10. Applications of automatic differentiation in topology optimization

    DEFF Research Database (Denmark)

    Nørgaard, Sebastian A.; Sagebaum, Max; Gauger, Nicolas R.

    2017-01-01

    The goal of this article is to demonstrate the applicability and to discuss the advantages and disadvantages of automatic differentiation in topology optimization. The technique makes it possible to wholly or partially automate the evaluation of derivatives for optimization problems and is demons...

  11. Generalized logistic map and its application in chaos based cryptography

    Science.gov (United States)

    Lawnik, M.

    2017-12-01

    The logistic map is commonly used in, for example, chaos-based cryptography. However, its properties do not permit a safe construction of encryption algorithms. The scope of the paper is therefore a proposal for generalizing the logistic map by means of a well-recognized family of chaotic maps. The Lyapunov exponent and the distribution of the iterated variable are then analyzed. The obtained results confirm that the analyzed model can safely and effectively replace the classic logistic map in applications involving chaotic cryptography.
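    For reference, the classical logistic map that the paper generalizes, and the Lyapunov-exponent diagnostic it analyzes, can be sketched as follows (the generalized family itself is not reproduced here):

```python
import math

def logistic(r, x):
    """One iteration of the classical logistic map x_{n+1} = r*x_n*(1 - x_n)."""
    return r * x * (1.0 - x)

def lyapunov_exponent(r, x0=0.1, burn_in=1000, n=100_000):
    """Estimate the Lyapunov exponent as the orbit average of ln|f'(x_n)|,
    where f'(x) = r*(1 - 2x) for the logistic map."""
    x = x0
    for _ in range(burn_in):   # discard the transient
        x = logistic(r, x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = logistic(r, x)
    return acc / n

# At r = 4 the map is fully chaotic; the exact exponent is ln 2 ≈ 0.693.
print(lyapunov_exponent(4.0))
```

A positive exponent indicates the sensitive dependence that chaos-based ciphers rely on; in the periodic window (e.g. r = 3.2) the estimate comes out negative.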

  12. 3D-mapping optimization of embodied energy of transportation

    Energy Technology Data Exchange (ETDEWEB)

    Pearce, Joshua M.; Johnson, Sara J. [Clarion University of Pennsylvania, Physics Department, Clarion, PA 16214 (United States); Grant, Gabriel B. [Purdue University, West Lafayette, IN (United States)

    2007-08-15

    The recent development of Google Earth, an information service that provides imagery and three-dimensional data depicting the entire Earth, provides an opportunity to use a new method of navigating information to save energy in the real world. Google Earth uses Keyhole Markup Language (KML) for modeling and storing geographic features and information for display in the Google Earth Client. This paper analyzes the potential of this novel and free geographic mapping service to reduce the embodied energy of transportation in two ways. First, at the consumer level, Google Earth is studied as a means to map the automobile route that uses the least fuel and maintains vehicle velocities at their individual maximum fuel efficiency. The same analysis for single destination trips could be used to optimize fleet vehicle routes such as garbage or recycling collection trucks. The secondary benefit of ecological education is also explored: fuel used could be converted into monetary units based on the current price of gas, pollution/greenhouse gas emissions, or ecological footprints to improve driving habits. Secondly, KML overlays are analyzed for use in determining: (1) raw material and product availability as a function of location, and (2) modes of transportation as a function of emissions. These overlays would give manufacturers access to an easily navigable method to optimize the life cycle of their products by minimizing the embodied energy of transportation. The most efficient transportation methods and travel routes could be calculated. This same tool would be useful for architects seeking Leadership in Energy and Environmental Design rating points for the green design of buildings. Overall, the analysis finds that the flexibility and visual display of quantitative information made available by Google Earth could have a significant impact in conserving fuel resources by reducing the embodied energy of transportation on a global scale. (author)

  13. Application of colony complex algorithm to nuclear component optimization design

    International Nuclear Information System (INIS)

    Yan Changqi; Li Guijing; Wang Jianjun

    2014-01-01

    The complex algorithm (CA) has been widely applied in nuclear engineering. To address the limitations of the traditional complex algorithm (TCA) in the optimization design of engineering structures, an improved method, the colony complex algorithm (CCA), was developed based on the optimal combination of many complexes, overcoming the disadvantages of TCA. Results on benchmark functions show that CCA has better optimizing performance than TCA. CCA was applied to high-pressure heater optimization design, with an obvious optimization effect. (authors)

  14. 9th International Conference on Optimization : Techniques and Applications

    CERN Document Server

    Wang, Song; Wu, Soon-Yi

    2015-01-01

    This book presents the latest research findings and state-of-the-art solutions in optimization techniques and points to new research directions and developments. Both the theoretical and practical aspects of the book will be of great benefit to experts and students in the optimization and operations research community. It collects high-quality papers from the International Conference on Optimization: Techniques and Applications (ICOTA 2013), an official conference series of POP (the Pacific Optimization Research Activity Group, which has over 500 active members). These state-of-the-art works, authored by recognized experts, will contribute to the development of optimization and its applications.

  15. Research on the Application of Rapid Surveying and Mapping for Large Scale Topographic Maps by UAV Aerial Photography System

    Science.gov (United States)

    Gao, Z.; Song, Y.; Li, C.; Zeng, F.; Wang, F.

    2017-08-01

    A rapid acquisition and processing method for large scale topographic map data, relying on an Unmanned Aerial Vehicle (UAV) low-altitude aerial photogrammetry system, is studied in this paper, and the main workflow is elaborated. Key technologies of UAV photogrammetric mapping are also studied, and a rapid mapping system based on an electronic plane-table mapping system is developed, changing the traditional mapping mode and greatly improving mapping efficiency. Production tests and accuracy evaluation of Digital Orthophoto Map (DOM), Digital Line Graphic (DLG) and other digital products were carried out in combination with a city basic topographic map update project, which provides a new technique for large scale rapid surveying and shows obvious technical advantages and good application prospects.

  16. Fluence map optimization (FMO) with dose–volume constraints in IMRT using the geometric distance sorting method

    International Nuclear Information System (INIS)

    Lan Yihua; Li Cunhua; Ren Haozheng; Zhang Yong; Min Zhifang

    2012-01-01

    A new heuristic algorithm based on a so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose–volume constraints, one of the most essential tasks of inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model that ignores the dose–volume constraints; dose constraints for the voxels violating the dose–volume constraints are then gradually added into the quadratic optimization model step by step until all the dose–volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve the new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The geometric distance sorting technique largely reduces the unexpected increase of the objective function value that constraint adding inevitably causes, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is also given and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results show that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. It is a more efficient optimization technique to some extent for choosing constraints than the dose sorting technique.

  17. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    Science.gov (United States)

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on a so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose-volume constraints, one of the most essential tasks of inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model that ignores the dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve the new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The geometric distance sorting technique largely reduces the unexpected increase of the objective function value that constraint adding inevitably causes, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is also given and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results show that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. It is a more efficient optimization technique to some extent for choosing constraints than the dose sorting technique.
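    The constraint-adding loop described in this record can be illustrated on a deliberately tiny, hypothetical two-beamlet problem. All numbers are invented, and a quadratic penalty stands in for the paper's interior-point QP solves, so capped doses land slightly above the limit rather than exactly on it:

```python
# Toy fluence map optimization with one dose-volume constraint (DVC).
# Dose model d = A x: voxel 0 is a target (prescription 1.0),
# voxel 1 is an organ-at-risk (OAR, prescription 0.0, low weight).
A = [[0.0, 1.0],      # target voxel dose = x2
     [1.0, 1.0]]      # OAR voxel dose = x1 + x2
p = [1.0, 0.0]        # per-voxel prescriptions (hypothetical)
w = [1.0, 0.1]        # per-voxel weights: low OAR weight causes overdosing
limit, allowed = 0.5, 0   # DVC: no OAR voxel may exceed 0.5 (toy values)
oar = [1]             # voxel indices governed by the DVC
mu = 100.0            # penalty weight standing in for a hard dose cap

def dose(x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def solve(capped, steps=20000, lr=1e-3):
    """Projected-gradient solve of the weighted least-squares dose objective,
    with a quadratic penalty on capped voxels exceeding the limit
    (the paper uses an exact interior point QP solve here instead)."""
    x = [0.0, 0.0]
    for _ in range(steps):
        d = dose(x)
        g = [0.0, 0.0]
        for i in range(2):
            r = 2.0 * w[i] * (d[i] - p[i])
            if i in capped and d[i] > limit:
                r += 2.0 * mu * (d[i] - limit)
            for j in range(2):
                g[j] += A[i][j] * r
        x = [max(0.0, x[j] - lr * g[j]) for j in range(2)]  # keep fluences >= 0
    return x

capped = set()
while True:
    x = solve(capped)
    d = dose(x)
    over = [i for i in oar if d[i] > limit + 1e-3]   # DVC violators
    new = [i for i in over if i not in capped]
    if len(over) <= allowed or not new:
        break
    new.sort(key=lambda i: d[i], reverse=True)       # sorting stand-in
    capped.add(new[0])                               # cap the worst violator
print(x, d)
```

The first solve overdoses the OAR voxel (dose ≈ 0.91), the loop caps it, and the re-solve pulls its dose down to just above the 0.5 limit, mirroring the add-constraint-and-resolve structure of the paper's method.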

  18. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    Science.gov (United States)

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems and can therefore be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy and classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we propose a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform. The PSO algorithm is used to optimize the BP neural network's initial weights and thresholds and so improve the accuracy of the classification algorithm. The MapReduce parallel programming model is utilized to parallelize the BP algorithm, thereby addressing the hardware and communication overhead that arises when the BP neural network handles big data. Datasets at 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The proposed algorithm demonstrates both higher classification accuracy and improved time efficiency, a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
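    The PSO stage described above can be sketched as follows. The BP network's training error is replaced by a hypothetical quadratic stand-in, and the MapReduce distribution of fitness evaluations across Hadoop mappers is omitted; only the core velocity/position update that searches for good initial weights is shown:

```python
import random

random.seed(0)

def loss(weights):
    """Hypothetical stand-in for the BP network's training error."""
    return sum((wi - 0.5) ** 2 for wi in weights)

def pso(dim=4, particles=20, iters=200, inertia=0.7, c1=1.4, c2=1.4):
    """Plain global-best PSO: each particle's velocity blends inertia,
    attraction to its personal best, and attraction to the swarm best."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [loss(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = loss(pos[i])
            if f < pbest_f[i]:               # update personal best
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:              # update swarm best
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best_w, best_f = pso()
print(best_f)  # best_w would seed the BP network's initial weights
```

In the paper's setting, each mapper would evaluate a share of the particle fitnesses on its data partition and a reducer would aggregate them into the global best.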

  19. Simulating Photon Mapping for Real-time Applications

    DEFF Research Database (Denmark)

    Larsen, Bent Dalgaard; Christensen, Niels Jørgen

    2004-01-01

    This paper introduces a novel method for simulating photon mapping for real-time applications. First we introduce a new method for selectively redistributing photons. Then we describe a method for selectively updating the indirect illumination. The indirect illumination is calculated using a new GPU-accelerated final gathering method and the illumination is then stored in light maps. Caustic photons are traced on the CPU, drawn using points in the framebuffer, and finally filtered using the GPU. Both diffuse and non-diffuse surfaces can be handled by calculating the direct illumination on the GPU and the photon tracing on the CPU. We achieve real-time frame rates for dynamic scenes.

  20. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui

    Science.gov (United States)

    2012-01-01

    Background The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics

  1. 5th International Conference on Optimization and Control with Applications

    CERN Document Server

    Teo, Kok; Zhang, Yi

    2014-01-01

    This book presents advances in state-of-the-art solution methods and their applications to real life practical problems in optimization, control and operations research. Contributions from world-class experts in the field are collated here in two parts, dealing first with optimization and control theory and then with techniques and applications. Topics covered in the first part include control theory on infinite dimensional Banach spaces, history-dependent inclusion and linear programming complexity theory. Chapters also explore the use of approximations of Hamilton-Jacobi-Bellman inequality for solving periodic optimization problems and look at multi-objective semi-infinite optimization problems, and production planning problems.  In the second part, the authors address techniques and applications of optimization and control in a variety of disciplines, such as chaos synchronization, facial expression recognition and dynamic input-output economic models. Other applications considered here include image retr...

  2. Redrawing the solar map of South Africa for photovoltaic applications

    Energy Technology Data Exchange (ETDEWEB)

    Munzhedzi, R.; Sebitosi, A.B. [Electrical Engineering, University of Cape Town, Private Bag, Rm 522.2 Menzies Building, Rondebosch 7701, Cape Town (South Africa)

    2009-01-15

    The South African solar map has been redrawn to make it applicable to photovoltaic installations, with the aim of reducing the cost of solar PV installations in South Africa through accurate energy resource assessment and competent system design. Climate data software as well as solar design software were used to aid this process. The new map provides an alternative to the map in current use, which considers only radiation, whereas many more factors affect the output of a panel, such as wind, cloud cover and humidity; all of these are taken into account in drawing the new map. (author)

  3. Selecting Map Projections in Minimizing Area Distortions in GIS Applications

    Directory of Open Access Journals (Sweden)

    Ahmet Kaya

    2008-12-01

    Various software packages for Geographical Information Systems (GIS) have been developed and used in many different engineering projects. In GIS applications, map coverage is important for performing reliable and meaningful queries. Map projections can be conformal, equal-area or equidistant, and the goal of an application plays an important role in choosing among them. Choosing an equal-area projection for an application in which area information is used (forestry, agriculture, ecosystems, etc.) reduces area distortion, but many GIS users ignore this fact and continue to work with existing map sheets regardless of their map projection. For example, area information extracted from data whose national map sheet system uses a conformal projection is more distorted than that from an equal-area projection. The goal of this study is to choose the most appropriate equal-area projection among those provided by ArcGIS 9.0, a popular GIS software package, and to compare area errors when a conformal projection is used instead. In this study, the areas of parcels chosen in three different regions, with sizes varying between 0.01 and 1,000,000 ha, are calculated according to the Transverse Mercator (TM, 3°), Universal Transverse Mercator (UTM, 6°) and 14 different equal-area projections available in the ArcGIS 9.0 software package. The parcel areas calculated with geographical coordinates are accepted as the reference values, and the differences between the areas calculated from projection coordinates and the reference areas are determined. Consequently, appropriate projections are recommended for areas up to 1,000 ha and for areas greater than 1,000 ha in the GIS software package.
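    The area inflation of conformal projections that motivates the study can be quantified for the textbook case of spherical Mercator (a general fact, not the paper's TM/UTM computation): the linear scale factor is sec(lat), so areas are inflated by sec²(lat).

```python
import math

def mercator_area_scale(lat_deg):
    """Area inflation factor of spherical Mercator at a given latitude:
    the linear scale is sec(lat), so areas scale by sec(lat)**2."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# At 45° areas are doubled; at 60° they are quadrupled.
for lat in (0, 30, 45, 60):
    print(lat, round(mercator_area_scale(lat), 3))
```

This is why area queries on conformal map sheets can be badly biased at mid and high latitudes, while an equal-area projection keeps the factor at 1 by construction.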

  4. Closed-Loop Optimal Control Implementations for Space Applications

    Science.gov (United States)

    2016-12-01

    Master's thesis, Jan-Dec 2016, by Colin S. Monk. Thesis Advisor: Mark Karpenko; Second Reader: I. M...

  5. Continuous nonlinear optimization for engineering applications in GAMS technology

    CERN Document Server

    Andrei, Neculai

    2017-01-01

    This book presents the theoretical details and computational performances of algorithms used for solving continuous nonlinear optimization applications imbedded in GAMS. Aimed toward scientists and graduate students who utilize optimization methods to model and solve problems in mathematical programming, operations research, business, engineering, and industry, this book enables readers with a background in nonlinear optimization and linear algebra to use GAMS technology to understand and utilize its important capabilities to optimize algorithms for modeling and solving complex, large-scale, continuous nonlinear optimization problems or applications. Beginning with an overview of constrained nonlinear optimization methods, this book moves on to illustrate key aspects of mathematical modeling through modeling technologies based on algebraically oriented modeling languages. Next, the main feature of GAMS, an algebraically oriented language that allows for high-level algebraic representation of mathematical opti...

  6. Introduction: Special issue on advances in topobathymetric mapping, models, and applications

    Science.gov (United States)

    Gesch, Dean B.; Brock, John C.; Parrish, Christopher E.; Rogers, Jeffrey N.; Wright, C. Wayne

    2016-01-01

    Detailed knowledge of near-shore topography and bathymetry is required for many geospatial data applications in the coastal environment. New data sources and processing methods are facilitating development of seamless, regional-scale topobathymetric digital elevation models. These elevation models integrate disparate multi-sensor, multi-temporal topographic and bathymetric datasets to provide a coherent base layer for coastal science applications such as wetlands mapping and monitoring, sea-level rise assessment, benthic habitat mapping, erosion monitoring, and storm impact assessment. The focus of this special issue is on recent advances in the source data, data processing and integration methods, and applications of topobathymetric datasets.

  7. Nonlinear Maps and their Applications 2011 International Workshop

    CERN Document Server

    Fournier-Prunaret, Daniele; Ueta, Tetsushi; Nishio, Yoshifumi

    2014-01-01

    In the field of dynamical systems, nonlinear iterative processes play an important role. Nonlinear mappings arise as immediate models for many systems from different scientific areas, such as engineering, economics and biology, and can also be obtained via numerical methods for solving nonlinear differential equations. In both cases, understanding specific dynamical behaviors and phenomena is of the greatest interest to scientists. This volume contains papers that were presented at the International Workshop on Nonlinear Maps and their Applications (NOMA 2011) held in Évora, Portugal, on September 15-16, 2011. This kind of collaborative effort is of paramount importance for promoting communication among the various groups that work on dynamical systems and networks, in theoretical research as well as in applications. This volume is suitable for graduate students as well as researchers in the field.

  8. Integrating pipeline data management application and Google Maps dataset on web based GIS application using open source technologies Sharp Map and Open Layers

    Energy Technology Data Exchange (ETDEWEB)

    Wisianto, Arie; Sania, Hidayatus [PT PERTAMINA GAS, Bontang (Indonesia); Gumilar, Oki [PT PERTAMINA GAS, Jakarta (Indonesia)

    2010-07-01

    PT Pertamina Gas operates 3 pipe segments carrying natural gas from producers to PT Pupuk Kaltim in the Kalimantan area. The company wants to build a pipeline data management system, covering pipeline facilities, inspections and risk assessments, that runs on a Geographic Information System (GIS) platform. The aim of this paper is to present the integration of the pipeline data management system with GIS. A web based GIS application is developed by combining Google Maps datasets with local spatial datasets, and Open Layers is used to integrate the pipeline data model and the Google Maps dataset into a single map display in Sharp Map. The GIS based pipeline data management system developed herein constitutes a low cost, powerful and efficient web based GIS solution.

  9. Engineering applications of heuristic multilevel optimization methods

    Science.gov (United States)

    Barthelemy, Jean-Francois M.

    1989-01-01

    Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.

  10. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps

    Energy Technology Data Exchange (ETDEWEB)

    Ureba, A. [Dpto. Fisiología Médica y Biofísica. Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Salguero, F. J. [Nederlands Kanker Instituut, Antoni van Leeuwenhoek Ziekenhuis, 1066 CX Ámsterdam, The Nederlands (Netherlands); Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A., E-mail: alplaza@us.es [Dpto. Fisiología Médica y Biofísica, Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Miras, H. [Servicio de Radiofísica, Hospital Universitario Virgen Macarena, E-41009 Sevilla (Spain); Linares, R.; Perucha, M. [Servicio de Radiofísica, Hospital Infanta Luisa, E-41010 Sevilla (Spain)

    2014-08-15

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data, to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in clinically practical times. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called the “biophysical” map, generated from enhanced image data of patients to achieve a set of segments that are actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code, a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine beamlets with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast

  11. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps

    International Nuclear Information System (INIS)

    Ureba, A.; Salguero, F. J.; Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A.; Miras, H.; Linares, R.; Perucha, M.

    2014-01-01

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data, to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in clinically practical times. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called the “biophysical” map, generated from enhanced image data of patients to achieve a set of segments that are actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code, a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine beamlets with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast

  12. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    Science.gov (United States)

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

    The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data, to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in clinically practical times. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called the "biophysical" map, generated from enhanced image data of patients to achieve a set of segments that are actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code, a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine beamlets with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved

  13. Improved sliced velocity map imaging apparatus optimized for H photofragments.

    Science.gov (United States)

    Ryazanov, Mikhail; Reisler, Hanna

    2013-04-14

    Time-sliced velocity map imaging (SVMI), a high-resolution method for measuring kinetic energy distributions of products in scattering and photodissociation reactions, is challenging to implement for atomic hydrogen products. We describe an ion optics design aimed at achieving SVMI of H fragments in a broad range of kinetic energies (KE), from a fraction of an electronvolt to a few electronvolts. In order to enable consistently thin slicing for any imaged KE range, an additional electrostatic lens is introduced in the drift region for radial magnification control without affecting temporal stretching of the ion cloud. Time slices of ∼5 ns out of a cloud stretched to ⩾50 ns are used. An accelerator region with variable dimensions (using multiple electrodes) is employed for better optimization of radial and temporal space focusing characteristics at each magnification level. The implemented system was successfully tested by recording images of H fragments from the photodissociation of HBr, H2S, and the CH2OH radical, with kinetic energies ranging from 3 eV. It demonstrated KE resolution ≲1%-2%, similar to that obtained in traditional velocity map imaging followed by reconstruction, and to KE resolution achieved previously in SVMI of heavier products. We expect it to perform just as well up to at least 6 eV of kinetic energy. The tests showed that numerical simulations of the electric fields and ion trajectories in the system, used for optimization of the design and operating parameters, provide an accurate and reliable description of all aspects of system performance. This offers the advantage of selecting the best operating conditions in each measurement without the need for additional calibration experiments.

  14. The methods and applications of optimization of radiation protection

    International Nuclear Information System (INIS)

    Liu Hua

    2007-01-01

    Optimization is the most important principle in radiation protection. This article outlines the concept of, and recent progress in, optimization of protection, introduces some methods used in current optimization analyses, and presents various applications of optimization of protection. The author emphasizes that optimization of protection is a forward-looking iterative process aimed at preventing exposures before they occur. (author)

  15. Nonlinear analysis approximation theory, optimization and applications

    CERN Document Server

    2014-01-01

    Many of our daily-life problems can be written in the form of an optimization problem. Therefore, solution methods are needed to solve such problems. Due to the complexity of the problems, it is not always easy to find the exact solution. However, approximate solutions can be found. The theory of the best approximation is applicable in a variety of problems arising in nonlinear functional analysis and optimization. This book highlights interesting aspects of nonlinear analysis and optimization together with many applications in the areas of physical and social sciences including engineering. It is immensely helpful for young graduates and researchers who are pursuing research in this field, as it provides abundant research resources for researchers and post-doctoral fellows. This will be a valuable addition to the library of anyone who works in the field of applied mathematics, economics and engineering.

  16. Genetic Algorithm and its Application in Optimal Sensor Layout

    Directory of Open Access Journals (Sweden)

    Xiang-Yang Chen

    2015-05-01

    Full Text Available This paper addresses the problem of multi-sensor station placement, taking multi-sensor systems of different types as the research object. After analyzing the application backgrounds and performance requirements of the various sensor types, station-placement models are formulated under their respective constraints, and a genetic algorithm is applied to optimize the objective functions of these models. The resulting optimal station-placement plans for each type of multi-sensor system improve overall system performance and achieved good military effect. For specific problems in the application fields of radar, trajectory-measuring instruments, satellites and passive positioning equipment of various types, mathematical models relating the performance indicators of interest to the station geometry are built, and the genetic algorithm yields optimized placement results that help solve a variety of practical problems. The results also demonstrate the applicability and effectiveness of the improved genetic algorithm for optimizing multi-sensor station placement in electronic weapon systems. Finally, the genetic algorithm for integrated optimization of multi-sensor station placement was applied to actual training-exercise tasks and achieved good military effect.
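
    A genetic algorithm for station placement of the kind described above can be sketched as a toy example. The grid of candidate sites, target points, coverage radius, and GA parameters below are all hypothetical, and the fitness (number of targets covered) stands in for the paper's application-specific objective functions.

    ```python
    import random

    def coverage(stations, targets, radius=3.0):
        """Count targets within `radius` of at least one station."""
        return sum(
            any((sx - tx) ** 2 + (sy - ty) ** 2 <= radius ** 2 for sx, sy in stations)
            for tx, ty in targets
        )

    def ga_station_layout(candidates, targets, k=3, pop_size=30, gens=60,
                          p_mut=0.2, seed=0):
        """Toy GA: each chromosome is a k-subset of candidate station sites."""
        rng = random.Random(seed)
        fitness = lambda ch: coverage(ch, targets)
        pop = [rng.sample(candidates, k) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            nxt = pop[:pop_size // 2]                 # elitist selection
            while len(nxt) < pop_size:
                a, b = rng.sample(nxt[:10], 2)        # crossover: merge two parents
                child = list(dict.fromkeys(a + b))[:k]
                if rng.random() < p_mut:              # mutation: replace one site
                    child[rng.randrange(k)] = rng.choice(candidates)
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    grid = [(x, y) for x in range(10) for y in range(10)]       # candidate sites
    targets = [(1, 1), (2, 2), (8, 8), (9, 7), (5, 1), (1, 8)]  # points to observe
    best = ga_station_layout(grid, targets)
    print(coverage(best, targets))  # number of the 6 targets covered
    ```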

  17. Optimal control novel directions and applications

    CERN Document Server

    Aronna, Maria; Kalise, Dante

    2017-01-01

    Focusing on applications to science and engineering, this book presents the results of the ITN-FP7 SADCO network’s innovative research in optimization and control in the following interconnected topics: optimality conditions in optimal control, dynamic programming approaches to optimal feedback synthesis and reachability analysis, and computational developments in model predictive control. The novelty of the book resides in the fact that it has been developed by early career researchers, providing a good balance between clarity and scientific rigor. Each chapter features an introduction addressed to PhD students and some original contributions aimed at specialist researchers. Requiring only a graduate mathematical background, the book is self-contained. It will be of particular interest to graduate and advanced undergraduate students, industrial practitioners and to senior scientists wishing to update their knowledge.

  18. Nature-inspired computing and optimization theory and applications

    CERN Document Server

    Yang, Xin-She; Nakamatsu, Kazumi

    2017-01-01

    The book provides readers with a snapshot of the state of the art in the field of nature-inspired computing and its application in optimization. The approach is mainly practice-oriented: each bio-inspired technique or algorithm is introduced together with one of its possible applications. Applications cover a wide range of real-world optimization problems: from feature selection and image enhancement to scheduling and dynamic resource management, from wireless sensor networks and wiring network diagnosis to sports training planning and gene expression, from topology control and morphological filters to nutritional meal design and antenna array design. There are a few theoretical chapters comparing different existing techniques, exploring the advantages of nature-inspired computing over other methods, and investigating the mixing time of genetic algorithms. The book also introduces a wide range of algorithms, including the ant colony optimization, the bat algorithm, genetic algorithms, the collision-based opti...

  19. A diagnostic algorithm to optimize data collection and interpretation of Ripple Maps in atrial tachycardias.

    Science.gov (United States)

    Koa-Wing, Michael; Nakagawa, Hiroshi; Luther, Vishal; Jamil-Copley, Shahnaz; Linton, Nick; Sandler, Belinda; Qureshi, Norman; Peters, Nicholas S; Davies, D Wyn; Francis, Darrel P; Jackman, Warren; Kanagaratnam, Prapa

    2015-11-15

    Ripple Mapping (RM) is designed to overcome the limitations of existing isochronal 3D mapping systems by representing the intracardiac electrogram as a dynamic bar on a surface bipolar voltage map that changes in height according to the electrogram voltage-time relationship, relative to a fiduciary point. We tested the hypothesis that standard approaches to atrial tachycardia CARTO™ activation maps were inadequate for RM creation and interpretation. From the results, we aimed to develop an algorithm to optimize RMs for future prospective testing on a clinical RM platform. CARTO-XP™ activation maps from atrial tachycardia ablations were reviewed by two blinded assessors on an off-line RM workstation. Ripple Maps were graded according to a diagnostic confidence scale (Grade I - high confidence with clear pattern of activation through to Grade IV - non-diagnostic). The RM-based diagnoses were corroborated against the clinical diagnoses. 43 RMs from 14 patients were classified as Grade I (5 [11.5%]); Grade II (17 [39.5%]); Grade III (9 [21%]) and Grade IV (12 [28%]). Causes of low gradings/errors included the following: insufficient chamber point density; window-of-interest [...] Ripple Maps in atrial tachycardias. This algorithm requires prospective testing on a real-time clinical platform. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. A new automatic synthetic aperture radar-based flood mapping application hosted on the European Space Agency's Grid Processing on Demand Fast Access to Imagery environment

    Science.gov (United States)

    Matgen, Patrick; Giustarini, Laura; Hostache, Renaud

    2012-10-01

    This paper introduces an automatic flood mapping application that is hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to operationally deliver maps of flooded areas using both recent and historical acquisitions of SAR data. Having as a short-term target the flooding-related exploitation of data generated by the upcoming ESA SENTINEL-1 SAR mission, the flood mapping application consists of two building blocks: i) a set of query tools for selecting the "crisis image" and the optimal corresponding "reference image" from the G-POD archive and ii) an algorithm for extracting flooded areas via change detection using the previously selected "crisis image" and "reference image". Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate reference image. Potential users will also be able to apply the implemented flood delineation algorithm. The latter combines histogram thresholding, region growing and change detection as an approach enabling automatic, objective and reliable flood extent extraction from SAR images. Both algorithms are computationally efficient and operate with minimum data requirements. The case study of the high-magnitude flooding event that occurred in July 2007 on the Severn River, UK, which was observed with a moderate-resolution SAR sensor as well as airborne photography, highlights the performance of the proposed online application. The flood mapping application on G-POD can be used sporadically, i.e. whenever a major flood event occurs and there is a demand for SAR-based flood extent maps. In the long term, a potential extension of the application could consist in systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis.
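
    The flood-delineation idea above (histogram thresholding combined with change detection against a reference image) can be sketched on synthetic data. The Otsu-style threshold, the decibel values, and the 5 dB darkening criterion are illustrative assumptions, not the application's actual algorithm, and the region-growing step is omitted for brevity.

    ```python
    import numpy as np

    def otsu_threshold(img, bins=256):
        """Otsu's method: pick the threshold maximizing between-class variance."""
        hist, edges = np.histogram(img, bins=bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        total, sum_all = hist.sum(), (hist * centers).sum()
        best_t, best_var, w0, sum0 = centers[0], -1.0, 0.0, 0.0
        for h, c in zip(hist, centers):
            w0 += h
            sum0 += h * c
            w1 = total - w0
            if w0 == 0 or w1 == 0:
                continue
            mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, c
        return best_t

    # Synthetic "reference" and "crisis" SAR backscatter images (in dB).
    rng = np.random.default_rng(1)
    reference = rng.normal(-8.0, 1.0, (64, 64))               # land clutter
    crisis = reference.copy()
    crisis[20:40, 20:40] = rng.normal(-20.0, 1.0, (20, 20))   # smooth water: low return

    # Flood candidates: dark in the crisis image AND strongly darkened vs reference.
    t = otsu_threshold(crisis)
    flood = (crisis < t) & ((reference - crisis) > 5.0)
    print(flood.sum())  # the 20x20 flooded block -> 400 pixels
    ```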

  1. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    Science.gov (United States)

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements/behaviors usually results in a heavy mental thinking-mapping load, or even disasters during product operation. In today's mentally demanding work environments, it is important to help people avoid confusion and difficulty in human-machine interaction by improving the usability of a product and minimizing the user's thinking-mapping and interpreting load. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking-mapping process between users' intentions and the affordances of the product interface states. By analyzing the users' thinking-mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking load is uniquely determined first. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface state datasets. Finally, using cluster analysis, an optimum solution is picked from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking-mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to the mental-load minimization problem in human-machine interaction design.
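
    The final selection step (picking the alternative nearest to the ideal design) can be sketched as follows. The feature vectors and alternative names are invented for illustration, and a plain Euclidean distance stands in for the paper's cluster-analysis machinery.

    ```python
    import numpy as np

    # Hypothetical scores for each interface alternative on three factors that
    # drive the user's thinking-mapping load (lower = less load), plus the
    # ideal design derived from natural instincts and acquired knowledge.
    ideal = np.array([0.0, 0.0, 0.0])
    alternatives = {
        "dial":        np.array([0.9, 0.4, 0.7]),
        "touchscreen": np.array([0.3, 0.2, 0.4]),
        "voice":       np.array([0.5, 0.8, 0.2]),
    }

    # Pick the alternative minimizing distance to the ideal design.
    best = min(alternatives, key=lambda k: np.linalg.norm(alternatives[k] - ideal))
    print(best)  # -> touchscreen (distance 0.54 vs 1.21 and 0.96)
    ```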

  2. Application of surrogate-based global optimization to aerodynamic design

    CERN Document Server

    Pérez, Esther

    2016-01-01

    Aerodynamic design, like many other engineering applications, is increasingly relying on computational power. The growing need for multi-disciplinarity and high fidelity in design optimization for industrial applications requires a huge number of repeated simulations in order to find an optimal design candidate. The main drawback is that each simulation can be computationally expensive – this becomes an even bigger issue when used within parametric studies, automated search or optimization loops, which typically may require thousands of analysis evaluations. The core issue of a design-optimization problem is the search process involved. However, when facing complex problems, the high-dimensionality of the design space and the high-multi-modality of the target functions cannot be tackled with standard techniques. In recent years, global optimization using meta-models has been widely applied to design exploration in order to rapidly investigate the design space and find sub-optimal solutions. Indeed, surrogat...

  3. Comparison of Genetic Algorithm and Hill Climbing for Shortest Path Optimization Mapping

    Directory of Open Access Journals (Sweden)

    Fronita Mona

    2018-01-01

    Full Text Available The Traveling Salesman Problem (TSP) is an optimization problem of finding the shortest path that reaches several destinations in one trip, without passing through the same city twice, and returns to the departure city; the process is applied to delivery systems. This comparison uses two methods: a genetic algorithm and hill climbing. Hill climbing works by directly selecting a new path in which two cities are exchanged with a neighbour's to get a tour distance smaller than the previous one, without further testing. Genetic algorithms depend on input parameters: the population size, the crossover probability, the mutation probability and the number of generations. To simplify the process of determining the shortest path, supporting software was developed using the Google Maps API. Tests were carried out 20 times with 8, 16, 24 and 32 cities to see which method is optimal in terms of distance and computation time. Experiments conducted with 3, 4, 5 and 6 cities produced the same optimal distance for the genetic algorithm and hill climbing; the distances begin to differ at 7 cities. The overall results show that hill climbing is more optimal for small numbers of cities, while instances with more than 30 cities are better optimized using genetic algorithms.
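
    The hill-climbing variant described above (swap two cities, keep the swap only if the tour shortens, stop at a local optimum) can be sketched as follows; the 5-city distance matrix is made up for illustration.

    ```python
    import itertools
    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def hill_climb(dist, seed=0):
        """Keep swapping two cities whenever the swap shortens the tour;
        stop once no single swap improves it (a local optimum)."""
        rng = random.Random(seed)
        tour = list(range(len(dist)))
        rng.shuffle(tour)
        best = tour_length(tour, dist)
        improved = True
        while improved:
            improved = False
            for i, j in itertools.combinations(range(len(dist)), 2):
                tour[i], tour[j] = tour[j], tour[i]       # try a neighbour
                length = tour_length(tour, dist)
                if length < best:
                    best, improved = length, True         # keep the improvement
                else:
                    tour[i], tour[j] = tour[j], tour[i]   # undo the swap
        return tour, best

    # Made-up symmetric distances between 5 cities
    dist = [
        [0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0],
    ]
    tour, length = hill_climb(dist)
    print(tour, length)
    ```

    Each accepted swap strictly decreases the tour length, so the loop terminates; as the record notes, the result is only a local optimum, which is why it loses to a genetic algorithm on larger instances.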

  4. Tight Temporal Bounds for Dataflow Applications Mapped onto Shared Resources

    NARCIS (Netherlands)

    Alizadeh Ara, H.; Geilen, M.; Basten, T.; Behrouzian, A.R.B.; Hendriks, M.; Goswami, D.

    2016-01-01

    We present an analysis method that provides tight temporal bounds for applications modeled by Synchronous Dataflow Graphs and mapped to shared resources. We consider the resource sharing effects on the temporal behaviour of the application by embedding worst case resource availability curves in the

  5. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    Science.gov (United States)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.

  6. Quantitative Susceptibility Mapping: Contrast Mechanisms and Clinical Applications

    Science.gov (United States)

    Liu, Chunlei; Wei, Hongjiang; Gong, Nan-Jie; Cronin, Matthew; Dibb, Russel; Decker, Kyle

    2016-01-01

    Quantitative susceptibility mapping (QSM) is a recently developed MRI technique for quantifying the spatial distribution of magnetic susceptibility within biological tissues. It first uses the frequency shift in the MRI signal to map the magnetic field profile within the tissue. The resulting field map is then used to determine the spatial distribution of the underlying magnetic susceptibility by solving an inverse problem. The solution is achieved by deconvolving the field map with a dipole field, under the assumption that the magnetic field is a result of the superposition of the dipole fields generated by all voxels and that each voxel has its unique magnetic susceptibility. QSM provides improved contrast to noise ratio for certain tissues and structures compared to its magnitude counterpart. More importantly, magnetic susceptibility is a direct reflection of the molecular composition and cellular architecture of the tissue. Consequently, by quantifying magnetic susceptibility, QSM is becoming a quantitative imaging approach for characterizing normal and pathological tissue properties. This article reviews the mechanism generating susceptibility contrast within tissues and some associated applications. PMID:26844301
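
    The dipole deconvolution at the heart of QSM can be illustrated with the simplest inversion scheme, thresholded k-space division (TKD); the kernel threshold and the synthetic susceptibility source below are illustrative choices, not a clinical pipeline.

    ```python
    import numpy as np

    def dipole_kernel(shape, b0_dir=(0, 0, 1)):
        """Unit dipole response in k-space: D = 1/3 - (k.B0)^2 / |k|^2."""
        kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0  # avoid division by zero at the k-space origin
        kb = kx * b0_dir[0] + ky * b0_dir[1] + kz * b0_dir[2]
        return 1.0 / 3.0 - kb**2 / k2

    def tkd_qsm(field_map, threshold=0.2):
        """Thresholded k-space division: invert the dipole convolution,
        zeroing the ill-conditioned cone region where |D| is small."""
        D = dipole_kernel(field_map.shape)
        D_inv = np.where(np.abs(D) > threshold, 1.0 / np.where(D == 0, 1, D), 0.0)
        return np.real(np.fft.ifftn(np.fft.fftn(field_map) * D_inv))

    # Forward-simulate a field map from a synthetic susceptibility block, then invert.
    shape = (32, 32, 32)
    chi = np.zeros(shape)
    chi[12:20, 12:20, 12:20] = 1.0
    field = np.real(np.fft.ifftn(np.fft.fftn(chi) * dipole_kernel(shape)))
    chi_rec = tkd_qsm(field)
    print(np.abs(chi_rec - chi).mean())  # residual from the zeroed cone region
    ```

    In this idealized round trip the only error comes from the cone of k-space that the threshold discards, which is exactly the ill-posedness that more sophisticated QSM inverse solvers regularize.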

  7. Conference on Optimization and Its Applications in Control and Data Science

    CERN Document Server

    2016-01-01

    This book focuses on recent research in modern optimization and its implications in control and data analysis. This book is a collection of papers from the conference “Optimization and Its Applications in Control and Data Science” dedicated to Professor Boris T. Polyak, which was held in Moscow, Russia on May 13-15, 2015. This book reflects developments in theory and applications rooted by Professor Polyak’s fundamental contributions to constrained and unconstrained optimization, differentiable and nonsmooth functions, control theory and approximation. Each paper focuses on techniques for solving complex optimization problems in different application areas and recent developments in optimization theory and methods. Open problems in optimization, game theory and control theory are included in this collection which will interest engineers and researchers working with efficient algorithms and software for solving optimization problems in market and data analysis. Theoreticians in operations research, appli...

  8. Chromhome: a rich internet application for accessing comparative chromosome homology maps.

    Science.gov (United States)

    Nagarajan, Sridevi; Rens, Willem; Stalker, James; Cox, Tony; Ferguson-Smith, Malcolm A

    2008-03-26

    Comparative genomics has become a significant research area in recent years, following the availability of a number of sequenced genomes. The comparison of genomes is of great importance in the analysis of functionally important genome regions. It can also be used to understand the phylogenetic relationships of species and the mechanisms leading to rearrangement of karyotypes during evolution. Many species have been studied at the cytogenetic level by cross species chromosome painting. With the large amount of such information, it has become vital to computerize the data and make them accessible worldwide. Chromhome http://www.chromhome.org is a comprehensive web application that is designed to provide cytogenetic comparisons among species and to fulfil this need. The Chromhome application architecture is multi-tiered with an interactive client layer, business logic and database layers. Enterprise java platform with open source framework OpenLaszlo is used to implement the Rich Internet Chromhome Application. Cross species comparative mapping raw data are collected and the processed information is stored into MySQL Chromhome database. Chromhome Release 1.0 contains 109 homology maps from 51 species. The data cover species from 14 orders and 30 families. The homology map displays all the chromosomes of the compared species as one image, making comparisons among species easier. Inferred data also provides maps of homologous regions that could serve as a guideline for researchers involved in phylogenetic or evolution based studies. Chromhome provides a useful resource for comparative genomics, holding graphical homology maps of a wide range of species. It brings together cytogenetic data of many genomes under one roof. Inferred painting can often determine the chromosomal homologous regions between two species, if each has been compared with a common third species. Inferred painting greatly reduces the need to map entire genomes and helps focus only on relevant

  9. Chromhome: A rich internet application for accessing comparative chromosome homology maps

    Directory of Open Access Journals (Sweden)

    Cox Tony

    2008-03-01

    Full Text Available Abstract Background Comparative genomics has become a significant research area in recent years, following the availability of a number of sequenced genomes. The comparison of genomes is of great importance in the analysis of functionally important genome regions. It can also be used to understand the phylogenetic relationships of species and the mechanisms leading to rearrangement of karyotypes during evolution. Many species have been studied at the cytogenetic level by cross species chromosome painting. With the large amount of such information, it has become vital to computerize the data and make them accessible worldwide. Chromhome http://www.chromhome.org is a comprehensive web application that is designed to provide cytogenetic comparisons among species and to fulfil this need. Results The Chromhome application architecture is multi-tiered with an interactive client layer, business logic and database layers. Enterprise java platform with open source framework OpenLaszlo is used to implement the Rich Internet Chromhome Application. Cross species comparative mapping raw data are collected and the processed information is stored into MySQL Chromhome database. Chromhome Release 1.0 contains 109 homology maps from 51 species. The data cover species from 14 orders and 30 families. The homology map displays all the chromosomes of the compared species as one image, making comparisons among species easier. Inferred data also provides maps of homologous regions that could serve as a guideline for researchers involved in phylogenetic or evolution based studies. Conclusion Chromhome provides a useful resource for comparative genomics, holding graphical homology maps of a wide range of species. It brings together cytogenetic data of many genomes under one roof. Inferred painting can often determine the chromosomal homologous regions between two species, if each has been compared with a common third species. 
Inferred painting greatly reduces the need to

  10. Application of a concept development process to evaluate process layout designs using value stream mapping and simulation

    Directory of Open Access Journals (Sweden)

    Ki-Young Jeong

    2011-07-01

    Full Text Available Purpose: We propose and demonstrate a concept development process (CDP) as a framework to solve a value stream mapping (VSM) related process layout design optimization problem. Design/methodology/approach: A case study approach was used to demonstrate the effectiveness of the CDP framework in a portable fire extinguisher manufacturing company. To facilitate the CDP application, we proposed the system coupling level index (SCLI) and simulation to evaluate the process layout design concepts. Findings: As part of the CDP framework application, three process layout design concepts - current layout (CL), express lane layout (ELL) and independent zone layout (IZL) - were generated. Then, the SCLI excluded CL and simulation selected IZL as the best concept. The simulation was also applied to optimize the performance of IZL in terms of the number of pallets. Based on this case study, we concluded that the CDP framework worked well. Research limitations/implications: The process layout design optimization issue has not been well addressed in the VSM literature. We believe that this paper initiated the relevant discussion by showing the feasibility of CDP as a framework for this issue. Practical implications: The CDP and SCLI are very practice-oriented approaches in the sense that they do not require any complex analytical knowledge. Originality/value: We discussed a not-well-addressed issue within a systematic framework. In addition, the SCLI presented was also unique.

  11. Engineering applications of discrete-time optimal control

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui; Ravn, Hans V.

    1990-01-01

    Many problems of design and operation of engineering systems can be formulated as optimal control problems where time has been discretized. This is true even if 'time' is not involved in the formulation of the problem, but rather another one-dimensional parameter. This paper gives a review of some well-known and new results in discrete-time optimal control methods applicable to practical problem solving within engineering. Emphasis is placed on dynamic programming, the classical maximum principle and generalized versions of the maximum principle for optimal control of discrete-time systems...

  12. Mapping the Pareto optimal design space for a functionally deimmunized biotherapeutic candidate.

    Science.gov (United States)

    Salvat, Regina S; Parker, Andrew S; Choi, Yoonjoo; Bailey-Kellogg, Chris; Griswold, Karl E

    2015-01-01

    The immunogenicity of biotherapeutics can bottleneck development pipelines and poses a barrier to widespread clinical application. As a result, there is a growing need for improved deimmunization technologies. We have recently described algorithms that simultaneously optimize proteins for both reduced T cell epitope content and high-level function. In silico analysis of this dual objective design space reveals that there is no single global optimum with respect to protein deimmunization. Instead, mutagenic epitope deletion yields a spectrum of designs that exhibit tradeoffs between immunogenic potential and molecular function. The leading edge of this design space is the Pareto frontier, i.e. the undominated variants for which no other single design exhibits better performance in both criteria. Here, the Pareto frontier of a therapeutic enzyme has been designed, constructed, and evaluated experimentally. Various measures of protein performance were found to map a functional sequence space that correlated well with computational predictions. These results represent the first systematic and rigorous assessment of the functional penalty that must be paid for pursuing progressively more deimmunized biotherapeutic candidates. Given this capacity to rapidly assess and design for tradeoffs between protein immunogenicity and functionality, these algorithms may prove useful in augmenting, accelerating, and de-risking experimental deimmunization efforts.
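
    The notion of an undominated (Pareto-optimal) set of designs can be made concrete with a small sketch; the variant names and their (epitope score, relative activity) pairs are invented for illustration.

    ```python
    def pareto_frontier(designs):
        """Return the names of designs not dominated in (epitope score, activity):
        lower epitope content and higher activity are both better; a design is
        dominated if another is at least as good in both criteria and strictly
        better in one."""
        front = []
        for name, epi, act in designs:
            dominated = any(
                e <= epi and a >= act and (e < epi or a > act)
                for _, e, a in designs
            )
            if not dominated:
                front.append(name)
        return front

    # Hypothetical deimmunized variants: (name, T-cell epitope score, activity)
    designs = [
        ("wild-type", 10, 1.00),
        ("v1",         7, 0.95),
        ("v2",         7, 0.80),   # dominated by v1: same epitopes, less activity
        ("v3",         4, 0.70),
        ("v4",         2, 0.40),
    ]
    print(pareto_frontier(designs))  # -> ['wild-type', 'v1', 'v3', 'v4']
    ```

    The frontier makes the record's tradeoff explicit: each step toward lower immunogenic potential (v1, v3, v4) pays a progressively larger functional penalty.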

  13. Optimizing Hash-Array Mapped Tries for Fast and Lean Immutable JVM Collections

    NARCIS (Netherlands)

    M.J. Steindorfer (Michael); J.J. Vinju (Jurgen)

    2015-01-01

    The data structures underpinning the collection APIs (e.g. lists, sets, maps) in the standard libraries of programming languages are used intensively in many applications. The standard libraries of recent Java Virtual Machine languages, such as Clojure or Scala, contain scalable and
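
    The core trick behind hash-array mapped tries, locating a child in a compressed node via a bitmap and a population count, can be sketched as follows. The 5-bit slot convention follows the usual 32-way HAMT layout; this is a conceptual illustration, not the paper's optimized JVM implementation.

    ```python
    def hamt_child_index(bitmap, hash_fragment):
        """Index into the compressed child array of a HAMT node.

        A node stores only its populated children. A 32-bit bitmap records which
        of the 32 logical slots (selected by 5 bits of the key's hash) are
        present, and the popcount of the bits BELOW the requested slot gives the
        child's position in the dense array."""
        bit = 1 << hash_fragment
        if not (bitmap & bit):
            return None                             # slot empty: key absent
        return bin(bitmap & (bit - 1)).count("1")   # popcount of lower bits

    # A node with logical slots 1, 4 and 9 populated stores just 3 children.
    bitmap = (1 << 1) | (1 << 4) | (1 << 9)
    print(hamt_child_index(bitmap, 4))   # 1: one populated slot (slot 1) below it
    print(hamt_child_index(bitmap, 9))   # 2
    print(hamt_child_index(bitmap, 7))   # None: slot 7 is empty
    ```

    Because unpopulated slots consume no memory, nodes stay lean while lookups remain O(1) per trie level.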

  14. A new power mapping method based on ordinary kriging and determination of optimal detector location strategy

    International Nuclear Information System (INIS)

    Peng, Xingjie; Wang, Kan; Li, Qing

    2014-01-01

    Highlights: • A new power mapping method based on Ordinary Kriging (OK) is proposed. • Measurements from DayaBay Unit 1 PWR are used to verify the OK method. • The OK method performs better than the CECOR method. • An optimal neutron detector location strategy based on ordinary kriging and simulated annealing is proposed. - Abstract: The Ordinary Kriging (OK) method is presented that is designed for a core power mapping calculation of pressurized water reactors (PWRs). Measurements from DayaBay Unit 1 PWR are used to verify the accuracy of the OK method. The root mean square (RMS) reconstruction errors are kept at less than 0.35%, and the maximum reconstruction relative errors (RE) are kept at less than 1.02% for the entire operating cycle. The reconstructed assembly power distribution results show that the OK method is fit for core power distribution monitoring. The quality of power distribution obtained by the OK method is partly determined by the neutron detector locations, and the OK method is also applied to solve the optimal neutron detector location problem. The spatially averaged ordinary kriging variance (AOKV) is minimized using simulated annealing, and then, the optimal in-core neutron detector locations are obtained. The result shows that the current neutron detector location of DayaBay Unit 1 reactor is near-optimal
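
    Ordinary kriging itself can be sketched in a few lines: solve the kriging system for weights that sum to one (the unbiasedness constraint), then form a weighted average of the detector readings. The 1-D geometry, Gaussian covariance model, and readings below are illustrative assumptions, not the DayaBay configuration.

    ```python
    import numpy as np

    def ordinary_kriging(xs, ys, x0, length=1.0):
        """Ordinary-kriging estimate at x0 from readings ys at locations xs,
        using a Gaussian covariance model (unit sill, given length scale).
        Solves [[K, 1], [1^T, 0]] [w; mu] = [k0; 1]."""
        cov = lambda a, b: np.exp(-((a[:, None] - b[None, :]) / length) ** 2)
        n = len(xs)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = cov(xs, xs)       # covariances between detector locations
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = cov(xs, np.array([x0]))[:, 0]  # covariances to the target point
        sol = np.linalg.solve(A, b)
        w = sol[:n]                   # kriging weights; sol[n] is the multiplier
        return w @ ys, w

    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = np.array([1.0, 1.4, 1.1, 0.9])   # hypothetical in-core detector powers
    est, w = ordinary_kriging(xs, ys, 1.5)
    print(est, w.sum())  # weights sum to 1 by the unbiasedness constraint
    ```

    The kriging variance that the record's detector-placement step minimizes comes out of this same system, which is why it can be evaluated before any measurement is taken.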

  15. A method of network topology optimization design considering application process characteristic

    Science.gov (United States)

    Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo

    2018-03-01

    Communication networks are designed to meet the usage requirements of users for various network applications. Current studies of network topology design optimization mainly consider network traffic, which is the result of network application operation rather than a design element of communication networks. A network application is a procedure by which users make use of services, with certain demanded performance requirements, and it has a clear process characteristic. In this paper, we first propose a method to optimize communication network topology design that takes this application process characteristic into account. Taking minimum network delay as the objective, and the cost of network design and network connective reliability as constraints, an optimization model of network topology design is formulated, and the optimal network topology is searched for with a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under multiple process-oriented applications, which can guide the generation of the initial population and thus improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of the proposed method. Network topology design that considers applications can improve application reliability and provide guidance for network builders in the early stages of network design, which is of great significance in engineering practice.

  16. Non-unitary boson mapping and its application to nuclear collective motions

    International Nuclear Information System (INIS)

    Takada, Kenjiro

    2001-01-01

    First, the general theory of boson mapping for even-number many-fermion systems is surveyed. In order to overcome the confusion concerning the so-called unphysical or spurious states in boson mapping, the correct concept of the unphysical states is given precisely and in a clear-cut way. Next, a method to apply the boson mapping to a truncated many-fermion Hilbert space consisting of collective phonons is proposed, with special emphasis on the Dyson-type non-unitary boson mapping. On the basis of this method, it becomes possible for the first time to apply the Dyson-type boson mapping to analyses of collective motions in realistic nuclei. This method is also extended to be applicable to odd-number-fermion systems. As is well known, the Dyson-type boson mapping is a non-unitary transformation and gives a non-Hermitian boson Hamiltonian. It is not easy (but not impossible) to solve for the eigenstates of the non-Hermitian Hamiltonian. A Hermitian treatment of this non-Hermitian eigenvalue problem is discussed, and it is shown that this treatment is a very good approximation. Using this Hermitian treatment, we can obtain the normal-ordered Holstein-Primakoff-type boson expansion in the multi-collective-phonon subspace, whereby the convergence of the boson expansion can be tested. Some examples of the application of the Dyson-type non-unitary boson mapping to simplified models and realistic nuclei are also shown, and they demonstrate that it is quite useful for analysis of the collective motions in realistic nuclei. In contrast to the above-mentioned ordinary type of boson mapping, which may be called a 'static' boson mapping, the Dyson-type non-unitary self-consistent-collective-coordinate method is also discussed. The latter is, so to speak, a 'dynamical' boson mapping, a dynamical extension of the ordinary boson mapping capable of including the coupling effects of the non-collective degrees of freedom self-consistently. Thus all of the Dyson-type non-unitary boson

  17. Optimal control applications in electric power systems

    CERN Document Server

    Christensen, G S; Soliman, S A

    1987-01-01

    Significant advances in the field of optimal control have been made over the past few decades. These advances have been well documented in numerous fine publications, and have motivated a number of innovations in electric power system engineering, but they have not yet been collected in book form. Our purpose in writing this book is to provide a description of some of the applications of optimal control techniques to practical power system problems. The book is designed for advanced undergraduate courses in electric power systems, as well as graduate courses in electrical engineering, applied mathematics, and industrial engineering. It is also intended as a self-study aid for practicing personnel involved in the planning and operation of electric power systems for utilities, manufacturers, and consulting and government regulatory agencies. The book consists of seven chapters. It begins with an introductory chapter that briefly reviews the history of optimal control and its power system applications and also p...

  18. Vector optimization theory, applications, and extensions

    CERN Document Server

    Jahn, Johannes

    2011-01-01

    This new edition of a key monograph adds fresh sections on the work of Edgeworth and Pareto to its presentation, in a general setting, of the fundamentals and important results of vector optimization. It examines background material, applications and theory.

  19. A Combined Algorithm for Optimization: Application for Optimization of the Transition Gas-Liquid in Stirred Tank Bioreactors

    Directory of Open Access Journals (Sweden)

    Mitko Petrov

    2005-12-01

    Full Text Available A combined algorithm for static optimization is developed. The algorithm combines a method for random search of an optimal initial point with a method based on fuzzy sets theory, in order to find the best solution of the optimization problem. The application of the combined algorithm eliminates the main disadvantage of the fuzzy optimization method used, namely, it decreases the number of discrete values of the control variables. In this way, the algorithm allows larger-scale problems to be solved. The combined algorithm is used for optimization of the gas-liquid transition in dependence on some constructive and regime parameters of a laboratory-scale stirred tank bioreactor. After application of the developed optimization algorithm, a significant increase in the effectiveness of the mass-transfer, aeration and mixing processes in the bioreactor is observed.

  20. Application of magnetic resonance to the mapping of cerebral cortex functions

    International Nuclear Information System (INIS)

    Carrero-Gonzalez, B.; Esteban, F.; Fernandez-Valle, M.E.; Santisteban, C.; Ruiz-Cabello, J.; Cortijo, M.

    1996-01-01

    Our aim is to utilize magnetic resonance for mapping brain function. This is a recent application of MR that takes advantage of its noninvasive character and higher spatial resolution compared with other techniques (such as PET, EEG and MEG), and its range of applications is growing rapidly. Our interest here is to show how a brain map is made, employing conventional, clinically available methods and two simple cases well characterized by this and other techniques. Rapid images were acquired with a gradient-echo pulse sequence. The analysis of these images was done off-line with home-developed IDL-based software to produce activation maps in the visual cortex (stimulated through a screen located at a fixed distance) and the motor cortex (activated by self-paced finger tapping). Our results, with motor and photic stimulation, reliably produced significant signal increases in the areas of interest. However, many issues are still open; new advances in image analysis, computation and MR techniques may help to answer these and broaden the number of clinical applications. (Author) 29 refs

  1. Multi-machine power system stabilizers design using chaotic optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Shayeghi, H., E-mail: hshayeghi@gmail.co [Technical Engineering Department, University of Mohaghegh Ardabili, Ardabil (Iran, Islamic Republic of); Shayanfar, H.A. [Center of Excellence for Power System Automation and Operation, Electrical Engineering Department, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of); Jalilzadeh, S.; Safari, A. [Technical Engineering Department, Zanjan University, Zanjan (Iran, Islamic Republic of)

    2010-07-15

    In this paper, a multiobjective design of multi-machine power system stabilizers (PSSs) using a chaotic optimization algorithm (COA) is proposed. Chaotic optimization algorithms, which have the features of easy implementation, short execution time and robust mechanisms for escaping from local optima, are a promising tool for engineering applications. The PSS parameter tuning problem is converted to an optimization problem, which is solved by a chaotic optimization algorithm based on the Lozi map. Since chaotic mappings combine determinism, ergodicity and stochastic-like behavior, the proposed approach generates candidates from Lozi map chaotic sequences, which increases its convergence rate and resulting precision. Two different objective functions are proposed in this study for the PSS design problem. The first objective function is eigenvalue-based, comprising the damping factor and the damping ratio of the lightly damped electro-mechanical modes, while the second is a time-domain-based multi-objective function. The robustness of the proposed COA-based PSSs (COAPSS) is verified on a multi-machine power system under different operating conditions and disturbances. The results of the proposed COAPSS are demonstrated through eigenvalue analysis, nonlinear time-domain simulation and some performance indices. In addition, the potential and superiority of the proposed method over the classical approach and a genetic algorithm are demonstrated.
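
    The Lozi map mentioned in the abstract is a simple piecewise-linear chaotic recurrence, x' = 1 - a|x| + y, y' = bx (commonly with a = 1.7, b = 0.5). The sketch below is an illustrative toy, not the paper's COA: it rescales the chaotic sequence into a search interval and uses the points as candidate solutions.

```python
def lozi_sequence(n, a=1.7, b=0.5, x=0.1, y=0.1):
    """Generate n x-values of the Lozi map, a piecewise-linear chaotic
    recurrence whose orbit (for these classic parameters) stays on a
    bounded strange attractor."""
    out = []
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        out.append(x)
    return out

def chaotic_search(f, lo, hi, iters=2000):
    """Minimize f on [lo, hi] by evaluating candidates drawn from the
    chaotic sequence rescaled into the interval — a toy stand-in for
    chaos-driven candidate generation in a COA."""
    seq = lozi_sequence(iters)
    smin, smax = min(seq), max(seq)
    best_x, best_f = None, float("inf")
    for s in seq:
        cand = lo + (s - smin) / (smax - smin) * (hi - lo)
        val = f(cand)
        if val < best_f:
            best_x, best_f = cand, val
    return best_x, best_f
```

The ergodicity of the chaotic orbit is what lets the rescaled sequence cover the interval densely enough to locate the minimum without gradient information.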

  2. Mathematical biodescriptors of proteomics maps: background and applications.

    Science.gov (United States)

    Basak, Subhash C; Gute, Brian D

    2008-05-01

    This article reviews recent developments in the formulation and application of biodescriptors to characterize proteomics maps. Such biodescriptors can be derived by applying techniques from discrete mathematics (graph theory, linear algebra and information theory). This review focuses on the development of biodescriptors for proteomics maps derived from 2D gel electrophoresis. Preliminary results demonstrated that such descriptors have a reasonable ability to differentiate between proteomics patterns that result from exposure to closely related individual chemicals and complex mixtures, such as the jet fuel JP-8. Further research is required to evaluate the utility of these proteomics-based biodescriptors for drug discovery and predictive toxicology.

  3. Electric power system applications of optimization

    CERN Document Server

    Momoh, James A

    2008-01-01

    Introduction Structure of a Generic Electric Power System  Power System Models  Power System Control Power System Security Assessment  Power System Optimization as a Function of Time  Review of Optimization Techniques Applicable to Power Systems Electric Power System Models  Complex Power Concepts Three-Phase Systems Per Unit Representation  Synchronous Machine Modeling Reactive Capability Limits Prime Movers and Governing Systems  Automatic Gain Control Transmission Subsystems  Y-Bus Incorporating the Transformer Effect  Load Models  Available Transfer Capability  Illustrative Examples  Power

  4. Progressive significance map and its application to error-resilient image transmission.

    Science.gov (United States)

    Hu, Yang; Pearlman, William A; Li, Xin

    2012-07-01

    Set partition coding (SPC) has shown tremendous success in image compression. Despite its popularity, the lack of error resilience remains a significant challenge to the transmission of images in error-prone environments. In this paper, we propose a novel data representation called the progressive significance map (prog-sig-map) for error-resilient SPC. It structures the significance map (sig-map) into two parts: a high-level summation sig-map and a low-level complementary sig-map (comp-sig-map). Such a structured representation of the sig-map allows us to improve its error-resilient property at the price of only a slight sacrifice in compression efficiency. For example, we have found that a fixed-length coding of the comp-sig-map in the prog-sig-map renders 64% of the coded bitstream insensitive to bit errors, compared with 40% with that of the conventional sig-map. Simulation results have shown that the prog-sig-map can achieve highly competitive rate-distortion performance for binary symmetric channels while maintaining low computational complexity. Moreover, we note that prog-sig-map is complementary to existing independent packetization and channel-coding-based error-resilient approaches and readily lends itself to other source coding applications such as distributed video coding.

  5. Advanced Process Control Application and Optimization in Industrial Facilities

    Directory of Open Access Journals (Sweden)

    Howes S.

    2015-01-01

    Full Text Available This paper describes the application of a new method and tool for system identification and PID tuning/advanced process control (APC) optimization using the new 3G (geometric, gradient, gravity) optimization method. It helps to design and implement control schemes directly inside the distributed control system (DCS) or programmable logic controller (PLC). The algorithm also helps to identify process dynamics in closed-loop mode, optimizes controller parameters, and helps to develop adaptive control and model-based control (MBC). Application of the new 3G algorithm for designing and implementing APC schemes is presented. Optimization of primary and advanced control schemes stabilizes the process and allows the plant to run closer to process, equipment and economic constraints. This increases production rates, minimizes operating costs and improves product quality.

  6. A Laser-SLAM Algorithm for Indoor Mobile Mapping

    Science.gov (United States)

    Zhang, Wenjun; Zhang, Qiao; Sun, Kai; Guo, Sheng

    2016-06-01

    A novel Laser-SLAM algorithm is presented for mobile mapping in real indoor environments. SLAM algorithms can be divided into two classes, Bayes filter-based and graph optimization-based. The former often finds it difficult to guarantee consistency and accuracy in large-scale environment mapping because of the error that accumulates during incremental mapping. Graph optimization-based SLAM methods often assume predetermined landmarks, which are difficult to obtain when mapping an unknown environment, and there is most likely a large difference between the optimized result and the real data because the constraints are too few. This paper designs a kind of sub-map method, which can map more accurately without predetermined landmarks and avoids the impact of the already-drawn map on the agent's localization. The tree structure of sub-maps can be indexed quickly and reduces the amount of memory consumed during mapping. The algorithm combines the Bayes-based and graph optimization-based SLAM approaches: it creates virtual landmarks automatically by associating data across sub-maps for graph optimization. Graph optimization then guarantees consistency and accuracy in large-scale environment mapping and improves the reasonability and reliability of the optimization results. Experimental results are presented with a laser sensor (UTM-30LX) in office buildings and shopping centres, which prove that the proposed algorithm can obtain 2D maps with better than 10 cm precision in indoor environments ranging from several hundred to 12,000 square meters.
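
    To make the graph-optimization step concrete, here is a minimal 1-D pose-graph example — a drastic simplification of the paper's 2-D laser SLAM: odometry constraints accumulate drift along a corridor, and a single loop-closure constraint redistributes the error by least squares. In 1-D the problem is linear, so one solve suffices; real SLAM iterates a nonlinear version of this.

```python
import numpy as np

def optimize_pose_graph(odom, loops):
    """Least-squares pose-graph optimization in 1-D. Each constraint is
    (i, j, dz): pose j measured dz ahead of pose i. odom holds odometry
    edges, loops holds loop-closure edges; pose 0 is anchored at zero
    to remove the gauge freedom."""
    n = max(max(i, j) for i, j, _ in odom + loops) + 1
    rows, rhs = [], []
    anchor = np.zeros(n); anchor[0] = 1.0
    rows.append(anchor); rhs.append(0.0)
    for i, j, dz in odom + loops:
        r = np.zeros(n); r[j] = 1.0; r[i] = -1.0
        rows.append(r); rhs.append(dz)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

With three unit odometry steps (total 3.0) and a loop closure saying the end pose is only 2.7 from the start, the 0.3 of drift is spread evenly over the chain.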

  7. Evolutionary global optimization, manifolds and applications

    CERN Document Server

    Aguiar e Oliveira Junior, Hime

    2016-01-01

    This book presents powerful techniques for solving global optimization problems on manifolds by means of evolutionary algorithms, and shows in practice how these techniques can be applied to solve real-world problems. It describes recent findings and well-known key facts in general and differential topology, revisiting them all in the context of application to current optimization problems. Special emphasis is put on game theory problems. Here, these problems are reformulated as constrained global optimization tasks and solved with the help of Fuzzy ASA. In addition, more abstract examples, including minimizations of well-known functions, are also included. Although the Fuzzy ASA approach has been chosen as the main optimizing paradigm, the book suggests that other metaheuristic methods could be used as well. Some of them are introduced, together with their advantages and disadvantages. Readers should possess some knowledge of linear algebra, and of basic concepts of numerical analysis and probability theory....

  8. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for difference between desired and actual fluence map; Principle of 'Step and Shoot' segmentation; Large number of solutions for given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into optimization algorithm; Data transfer to the treatment machine; Interface between R and V and accelerator; and Conclusions on fluence map segmentation (Segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS, depending on accelerator model; The segmentation phase tends to come into the optimization loop; The physical characteristics of the MLC have a large influence on final dose distribution; The IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)
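
    The 'step and shoot' decomposition mentioned above can be illustrated on a single leaf pair with the classic unidirectional sweep — a textbook sketch, not any specific TPS sequencer: rises of the integer fluence profile set left-leaf trailing edges, falls set right-leaf leading edges, and pairing them in order yields unit-intensity segments that reproduce the profile exactly.

```python
def leaf_sequence(profile):
    """Unidirectional 'sweep' sequencing of a 1-D integer fluence
    profile into unit-weight (left, right) leaf openings; segment
    (l, r) irradiates bixels l..r-1 with one monitor unit."""
    prof = [0] + list(profile) + [0]   # pad so edges count as rise/fall
    rises, falls = [], []
    for i in range(1, len(prof)):
        d = prof[i] - prof[i - 1]
        rises += [i - 1] * max(0, d)   # profile goes up: open left leaf
        falls += [i - 1] * max(0, -d)  # profile goes down: close right leaf
    return list(zip(sorted(rises), sorted(falls)))
```

The number of segments equals the sum of positive increments of the profile, which is the well-known minimum monitor-unit count for a unidirectional sweep.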

  9. Volumetric B1+ mapping of the brain at 7T using DREAM.

    Science.gov (United States)

    Nehrke, Kay; Versluis, Maarten J; Webb, Andrew; Börnert, Peter

    2014-01-01

    To tailor and optimize the Dual Refocusing Echo Acquisition Mode (DREAM) approach for volumetric B1+ mapping of the brain at 7T. A new DREAM echo timing scheme based on the virtual stimulated echo was derived to minimize potential effects of transverse relaxation. Furthermore, the DREAM B1+ mapping performance was investigated in simulations and experimentally in phantoms and volunteers for volumetric applications, studying and optimizing the accuracy of the sequence with respect to saturation effects, slice profile imperfections, and T1 and T2 relaxation. Volumetric brain protocols were compiled for different isotropic resolutions (5-2.5 mm) and SENSE factors, and were studied in vivo for different RF drive modes (circular/linear polarization) and the application of dielectric pads. Volumetric B1+ maps with good SNR at 2.5 mm isotropic resolution were acquired in about 20 s or less. The specific absorption rate was well below the safety limits for all scans. Mild flow artefacts were observed in the large vessels. Moreover, a slight contrast in the ventricle was observed in the B1+ maps, which could be attributed to T1 and T2 relaxation effects. DREAM enables safe, very fast, and robust volumetric B1+ mapping of the brain at ultrahigh fields. Copyright © 2013 Wiley Periodicals, Inc.
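
    For concreteness, the core DREAM relation maps the stimulated-echo and FID image intensities to the actual flip angle, α = arctan√(2·S_STE / S_FID). The sketch below applies it per pixel to form a relative B1+ map; it deliberately ignores the relaxation and slice-profile corrections that the paper optimizes.

```python
import math

def dream_flip_angle(ste, fid):
    """Actual flip angle (radians) from DREAM stimulated-echo and FID
    intensities via alpha = atan(sqrt(2*STE/FID)); no relaxation or
    slice-profile correction in this sketch."""
    return math.atan(math.sqrt(2.0 * ste / fid))

def b1_map(ste_img, fid_img, nominal_deg):
    """Relative B1+ per pixel: actual over nominal flip angle."""
    nominal = math.radians(nominal_deg)
    return [[dream_flip_angle(s, f) / nominal
             for s, f in zip(srow, frow)]
            for srow, frow in zip(ste_img, fid_img)]
```

At a true flip angle of 45° the intensity ratio STE/FID is 1/2, so a pixel with that ratio maps back to exactly the nominal 45° (relative B1+ of 1.0).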

  10. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urinary samples to obtain a comprehensive map of the urinary proteins of healthy subjects, and then compared this map with those obtained from patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary

  11. Can the Future EnMAP Mission Contribute to Urban Applications? A Literature Survey

    Directory of Open Access Journals (Sweden)

    Andreas Müller

    2011-08-01

    Full Text Available With urban populations and their footprints growing globally, the need to assess the dynamics of the urban environment increases. Remote sensing is one approach that can analyze these developments quantitatively with respect to spatially and temporally large scale changes. With the 2015 launch of the spaceborne EnMAP mission, a new hyperspectral sensor with high signal-to-noise ratio at medium spatial resolution, and a 21 day global revisit capability will become available. This paper presents the results of a literature survey on existing applications and image analysis techniques in the context of urban remote sensing in order to identify and outline potential contributions of the future EnMAP mission. Regarding urban applications, four frequently addressed topics have been identified: urban development and planning, urban growth assessment, risk and vulnerability assessment and urban climate. The requirements of the four application fields and the associated image processing techniques used to retrieve desired parameters and create geo-information products have been reviewed. As a result, we identified promising research directions enabling the use of EnMAP for urban studies. First and foremost, research is required to analyze the spectral information content of an EnMAP pixel used to support material-based land cover mapping approaches. This information can subsequently be used to improve urban indicators, such as imperviousness. Second, we identified the global monitoring of urban areas as a promising field of investigation taking advantage of EnMAP's spatial coverage and revisit capability. However, owing to the limitations of EnMAP's spatial resolution for urban applications, research should also focus on hyperspectral resolution enhancement to enable retrieving material information on sub-pixel level.

  12. Mapping embedded applications on MPSoCs : the MNEMEE approach

    NARCIS (Netherlands)

    Baloukas, C.; Papadopoulos, L.; Soudris, D.; Stuijk, S.; Jovanovic, O.; Schmoll, F.; Cordes, D.; Pyka, A.; Mallik, A.; Mamagkakis, S.; Capman, F.; Collet, S.; Mitas, N.; Kritharidis, D.

    2010-01-01

    As embedded systems are becoming the center of our digital life, system design becomes progressively harder. The integration of multiple features on devices with limited resources requires careful and exhaustive exploration of the design search space in order to efficiently map modern applications

  13. Optimization of Multipurpose Reservoir Operation with Application Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Elahe Fallah Mehdipour

    2012-12-01

    Full Text Available Optimal operation of multipurpose reservoirs is one of the complex and sometimes nonlinear problems in the field of multi-objective optimization. Evolutionary algorithms are optimization tools that search the decision space by simulating natural biological evolution and present a set of points as the optimum solutions of the problem. In this research, the application of multi-objective particle swarm optimization (MOPSO) to the optimal operation of the Bazoft reservoir with different objectives, including hydropower generation, supplying downstream demands (drinking, industry and agriculture), recreation, and flood control, has been considered. In this regard, solution sets of the MOPSO algorithm for pairwise combinations of objectives were first compared with compromise programming (CP) using different weighting and power coefficients; in all combinations of objectives the MOPSO algorithm was more capable than CP of finding solutions with an appropriate distribution, and these solutions dominated the CP solutions. Then, the end points of the solution set from the MOPSO algorithm were compared with nonlinear programming (NLP) results. Results showed that the MOPSO algorithm, with a 0.3 percent difference from the NLP results, has more capability to present optimum solutions at the end points of the solution set.
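
    A minimal sketch of the algorithm class used here: particle swarm search with an external archive of mutually non-dominated solutions, with leaders drawn at random from the archive. This toy omits the crowding-distance maintenance of full MOPSO and is shown on a classic two-objective test function rather than the reservoir model.

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mopso(f, lo, hi, n=30, iters=100, seed=0):
    """Tiny single-variable, two-objective MOPSO sketch returning the
    archive of non-dominated decision values."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = list(xs)
    archive = []
    def push(x):
        fx = f(x)
        if any(dominates(f(a), fx) for a in archive):
            return                      # dominated: discard
        archive[:] = [a for a in archive if not dominates(fx, f(a))] + [x]
    for x in xs:
        push(x)
    for _ in range(iters):
        for i in range(n):
            leader = rng.choice(archive)  # random archive member as guide
            vs[i] = (0.4 * vs[i]
                     + 1.5 * rng.random() * (pbest[i] - xs[i])
                     + 1.5 * rng.random() * (leader - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if dominates(f(xs[i]), f(pbest[i])):
                pbest[i] = xs[i]
            push(xs[i])
    return archive
```

On Schaffer's problem (x², (x-2)²) the Pareto-optimal set is the interval [0, 2], and the archive ends up spread along the corresponding trade-off front.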

  14. VOLCWORKS: A suite for optimization of hazards mapping

    Science.gov (United States)

    Delgado Granados, H.; Ramírez Guzmán, R.; Villareal Benítez, J. L.; García Sánchez, T.

    2012-04-01

    Making hazards maps is a process linking basic science, applied science and engineering for the benefit of society. The methodologies for constructing hazards maps have evolved enormously, together with the tools that allow forecasting of the behavior of the materials produced by different eruptive processes. However, in spite of the development of tools and the evolution of methodologies, the purpose of hazards maps has not changed: prevention and mitigation of volcanic disasters. Integrating different simulation tools for the different processes of a single volcano is a challenge to be solved with software that combines processing, simulation and visualization techniques and data structures, in order to build up a suite that supports the construction process, starting from the integration of geological data and simulations and the simplification of the output to design a hazards/scenario map. Scientific visualization is a powerful tool to explore and gain insight into complex data from instruments and simulations. The workflow from data collection, quality control and preparation for simulations through to an appropriate visual presentation is usually disconnected, in most cases relying on different applications for each of the needed steps, because it requires many tools that were not built for the solution of a specific problem, or were developed by research groups to solve particular tasks in isolation. In volcanology, due to its complexity, groups typically examine only one aspect of the phenomenon: ash dispersal, laharic flows, pyroclastic flows, lava flows, and ballistic projectile ejection, among others. However, when studying the hazards associated with the activity of a volcano, it is important to analyze all the processes comprehensively, especially for communication of results to the end users: decision makers and planners. In order to solve this problem and connect the different parts of a workflow we are developing the

  15. Generating Multi-Destination Maps.

    Science.gov (United States)

    Zhang, Junsong; Fan, Jiepeng; Luo, Zhenshan

    2017-08-01

    Multi-destination maps are a kind of navigation map aimed at guiding visitors to multiple destinations within a region, and they can be of great help to urban visitors. However, they are not available in current online map services. To address this issue, we introduce a novel layout model designed especially for generating multi-destination maps, which considers both the global and the local layout of a multi-destination map. We model the layout problem as a graph drawing problem that satisfies a set of hard and soft constraints. In the global layout phase, we balance the scale factors between ROIs. In the local layout phase, we make all edges clearly visible and optimize the map layout to preserve the relative lengths and angles of roads. We also propose a perturbation-based optimization method to find an optimal layout in the complex solution space. The multi-destination maps generated by our system are potentially feasible on modern mobile devices, and our results can show an overview and a detail view of the whole map at the same time. In addition, we perform a user study to evaluate the effectiveness of our method, and the results prove that the multi-destination maps achieve our goals well.
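
    The perturbation-based optimization mentioned above can be sketched generically: move one vertex coordinate at a time and keep moves that reduce a soft-constraint energy, occasionally accepting uphill moves while a temperature cools. This is a simulated-annealing-style toy; the paper's constraints (edge visibility, relative road lengths and angles) and moves are more elaborate.

```python
import math, random

def perturb_layout(points, energy, iters=5000, step=0.1, t0=0.01, seed=0):
    """Generic perturbation-based layout search: randomly nudge one
    vertex coordinate; keep the move if the soft-constraint energy
    improves, or with a temperature-dependent probability otherwise."""
    rng = random.Random(seed)
    pts = [list(p) for p in points]
    e = energy(pts)
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-9   # linear cooling schedule
        i = rng.randrange(len(pts))
        axis = rng.randrange(2)
        old = pts[i][axis]
        pts[i][axis] = old + rng.uniform(-step, step)
        e2 = energy(pts)
        if e2 <= e or rng.random() < math.exp((e - e2) / t):
            e = e2                        # accept the perturbation
        else:
            pts[i][axis] = old            # revert it
    return pts, e
```

With an energy that penalizes deviation of edge lengths from a target (a stand-in for the "preserve relative length" soft constraint), a distorted quadrilateral relaxes toward a unit square.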

  16. Optimization Foundations and Applications

    CERN Document Server

    Miller, H Ronald

    2011-01-01

    A thorough and highly accessible resource for analysts in a broad range of social sciences. Optimization: Foundations and Applications presents a series of approaches to the challenges faced by analysts who must find the best way to accomplish particular objectives, usually with the added complication of constraints on the available choices. Award-winning educator Ronald E. Miller provides detailed coverage of both classical, calculus-based approaches and newer, computer-based iterative methods. Dr. Miller lays a solid foundation for both linear and nonlinear models and quickly moves on to dis

  17. Application of GIS Rapid Mapping Technology in Disaster Monitoring

    Science.gov (United States)

    Wang, Z.; Tu, J.; Liu, G.; Zhao, Q.

    2018-04-01

    With the rapid development of GIS and RS technology, especially in recent years, GIS technology and its software functions have become increasingly mature and capable. The rapid development of mathematical-statistical tools for spatial modeling and simulation has promoted the widespread application and popularization of quantitative methods in the field of geology. Based on field disaster investigation and the construction of a spatial database, this paper uses remote sensing imagery, DEM and GIS technology to obtain the data needed for disaster vulnerability analysis, and makes use of an information model to carry out disaster risk assessment mapping. Using ArcGIS software and its spatial data modeling method, the basic data for the disaster risk mapping process were acquired and processed, and the spatial data simulation tool was used to map the disaster rapidly.

  18. Traffic Flow Optimization Using a Quantum Annealer

    Directory of Open Access Journals (Sweden)

    Florian Neukart

    2017-12-01

    Full Text Available Quantum annealing algorithms belong to the class of metaheuristic tools, applicable for solving binary optimization problems. Hardware implementations of quantum annealing, such as the quantum processing units (QPUs produced by D-Wave Systems, have been subject to multiple analyses in research, with the aim of characterizing the technology’s usefulness for optimization and sampling tasks. In this paper, we present a real-world application that uses quantum technologies. Specifically, we show how to map certain parts of a real-world traffic flow optimization problem to be suitable for quantum annealing. We show that time-critical optimization tasks, such as continuous redistribution of position data for cars in dense road networks, are suitable candidates for quantum computing. Due to the limited size and connectivity of current-generation D-Wave QPUs, we use a hybrid quantum and classical approach to solve the traffic flow problem.
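
    To show what "mapping a traffic-flow problem to quantum annealing" means in practice, the toy below builds a QUBO in the spirit of the abstract: each binary variable assigns one car to one candidate route, shared road segments contribute quadratic congestion terms, and a one-hot penalty (expanded from penalty·(Σq − 1)²) enforces exactly one route per car. A brute-force enumerator stands in for the D-Wave QPU; the variable names and weights here are illustrative, not the paper's.

```python
import itertools

def traffic_qubo(routes, penalty=10.0):
    """Build a QUBO dict for route assignment. `routes` maps
    (car, route_index) -> set of road-segment ids. Returns the sorted
    variable keys and the upper-triangular coefficient dict Q."""
    keys = sorted(routes)
    Q = {}
    def add(i, j, w):
        i, j = sorted((i, j))
        Q[(i, j)] = Q.get((i, j), 0.0) + w
    for i, ki in enumerate(keys):
        add(i, i, -penalty)  # linear part of the one-hot penalty
        for j in range(i + 1, len(keys)):
            kj = keys[j]
            if ki[0] == kj[0]:
                add(i, j, 2.0 * penalty)       # same car: quadratic one-hot term
            elif routes[ki] & routes[kj]:
                add(i, j, float(len(routes[ki] & routes[kj])))  # congestion
    return keys, Q

def solve_qubo(keys, Q):
    """Exhaustive minimization of the QUBO energy — a classical
    stand-in for the annealer on this tiny instance."""
    best, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=len(keys)):
        e = sum(w * bits[i] * bits[j] for (i, j), w in Q.items())
        if e < best_e:
            best, best_e = bits, e
    return {k for k, b in zip(keys, best) if b}, best_e
```

With two cars that each have one overlapping and one disjoint route, the minimum-energy assignment picks one route per car and avoids putting both cars on the shared segments.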

  19. Optimization of application execution in the GridSpace environment

    NARCIS (Netherlands)

    Malawski, M.; Kocot, J.; Ryszka, I.; Bubak, M.; Wieczorek, M.; Fahringer, T.

    2008-01-01

    This paper describes an approach to the optimization of application execution in the GridSpace environment. In this environment, operations are invoked on special objects which reside on Grid resources, which requires a specific approach to optimization of execution. This approach is implemented in the

  20. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    Energy Technology Data Exchange (ETDEWEB)

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr. (; .); Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of the relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic and computationally time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm.
We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
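The high-/low-fidelity loop sketched in this abstract can be illustrated on a toy 1-D problem. The objective functions, grids, and the simple linear input-space mapping below are invented for illustration and are not the paper's oracle or direct search method:

```python
def f_hi(x):        # "expensive" high-fidelity objective (stand-in)
    return (x - 2.0) ** 2

def f_lo(z):        # cheap low-fidelity surrogate, systematically shifted
    return (z - 1.4) ** 2

# 1) coarse direct search on the high-fidelity model over a reduced design space
grid = [i * 0.5 for i in range(-4, 13)]          # [-2.0 .. 6.0]
x0 = min(grid, key=f_hi)

# 2) input space mapping: find a shift s so that f_lo(x + s) tracks f_hi(x)
#    near the current high-fidelity iterate
samples = [x0 - 0.5, x0, x0 + 0.5]
def mismatch(s):
    return sum((f_lo(x + s) - f_hi(x)) ** 2 for x in samples)
s = min((i * 0.01 for i in range(-200, 201)), key=mismatch)

# 3) optimize in the cheap low-fidelity space (a fine grid stands in for a
#    gradient-based search), then map the optimum back to high-fidelity space
z_star = min((i * 0.001 for i in range(-2000, 6001)), key=f_lo)
x_star = z_star - s
```

Because the low-fidelity model here differs from the high-fidelity one only by an input shift, the mapped-back optimum coincides with the true high-fidelity minimizer; real applications would re-verify it with the expensive simulator.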

  1. Recent advances in the applications of vibrational spectroscopic imaging and mapping to pharmaceutical formulations

    Science.gov (United States)

    Ewing, Andrew V.; Kazarian, Sergei G.

    2018-05-01

    Vibrational spectroscopic imaging and mapping approaches have continued to develop and find applications in the analysis of pharmaceutical formulations. Obtaining spatially resolved chemical information about the distribution of different components within pharmaceutical formulations is integral to improving the understanding and quality of final drug products. This review aims to summarise some key advances of these technologies over recent years, primarily since 2010. An overview of FTIR, NIR and terahertz spectroscopic imaging and Raman mapping will be presented to give a perspective of the current state-of-the-art of these techniques for studying pharmaceutical samples. This includes their application to obtain spatial information on components, revealing molecular insight into polymorphic or structural changes, the behaviour of formulations during dissolution experiments, the uniformity of materials, and the detection of counterfeit products. Furthermore, new advancements will be presented that demonstrate the continuing novel applications of spectroscopic imaging and mapping, namely in FTIR spectroscopy, for studies of microfluidic devices. Whilst much of this recent work has been reported by academic groups, examples of the potential impact of utilising these imaging and mapping technologies to support industrial applications are also reviewed.

  2. Optimization of Partitioned Architectures to Support Soft Real-Time Applications

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Pop, Paul

    2014-01-01

    In this paper we propose a new Tabu Search-based design optimization strategy for mixed-criticality systems implementing hard and soft real-time applications on the same platform. Our proposed strategy determines an implementation such that all hard real-time applications are schedulable and the quality of service of the soft real-time tasks is maximized. We have evaluated our strategy using an aerospace case study.

  3. Optimization of Organic Rankine Cycles for Off-Shore Applications

    DEFF Research Database (Denmark)

    Pierobon, Leonardo; Larsen, Ulrik; Nguyen, Tuong-Van

    2013-01-01

    and the thermal efficiency of the cycle can be maximized. This paper is aimed at finding the optimal ORC tailored for off-shore applications using an optimization procedure based on the genetic algorithm. Numerous working fluids are screened, considering mainly thermal efficiency, but also other...

  4. Application of genetic algorithm with genetic modification and quality map in production strategy optimization; Aplicacao de algoritmo genetico com modificacao genetica e mapa de qualidade na otimizacao de estrategia de producao

    Energy Technology Data Exchange (ETDEWEB)

    Nakajima, Lincoln; Maschio, Celio; Schiozer, Denis J. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Faculdade de Engenharia Mecanica. Dept. de Engenharia de Petroleo

    2008-07-01

    The definition of the position and number of wells is the most important stage of production strategy selection, since it affects the reservoir behavior, which in turn influences future decisions. However, this process is time-consuming and often a trial-and-error approach. Many studies have sought to reduce the engineer's effort in this stage, either by minimizing the number of simulation runs through proxy models or by automating the whole process with an optimization algorithm. This work proposes a methodology that integrates a genetic algorithm and a quality map to automate production strategy optimization. It also introduces the concept of genetic modification, the procedure of updating the quality map according to the well production of each evaluated strategy. The objective is to improve the evolutionary process, allowing the evaluation of more promising alternatives and improving the chance of obtaining better solutions without a substantial increase in the number of simulations. (author)
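The interplay of a genetic algorithm with a quality map, including a "genetic modification" update of the map, can be sketched as follows. The map values, GA settings, and damping factor are illustrative assumptions, not the authors' configuration:

```python
import random
random.seed(0)

# illustrative 5x5 "quality map": prior estimate of productivity per cell
Q = [[(i + j) % 5 + 1 for j in range(5)] for i in range(5)]
CELLS = [(i, j) for i in range(5) for j in range(5)]
N_WELLS = 3

def fitness(wells, qmap):
    return sum(qmap[i][j] for i, j in wells)

def modify_map(qmap, wells, factor=0.5):
    """'Genetic modification': damp the quality of cells used by the best
    strategy so later generations explore other promising regions."""
    q = [row[:] for row in qmap]
    for i, j in wells:
        q[i][j] *= factor
    return q

def make_child(a, b):
    genes = list(dict.fromkeys(a + b))      # order-preserving union (crossover)
    random.shuffle(genes)
    child = genes[:N_WELLS]
    if random.random() < 0.3:               # mutation: swap in a fresh cell
        c = random.choice(CELLS)
        if c not in child:
            child[random.randrange(N_WELLS)] = c
    return child

pop = [random.sample(CELLS, N_WELLS) for _ in range(20)]
qmap = [row[:] for row in Q]
for _ in range(30):
    pop.sort(key=lambda w: fitness(w, qmap), reverse=True)
    qmap = modify_map(qmap, pop[0])         # update the map each generation
    parents = pop[:10]
    pop = parents + [make_child(*random.sample(parents, 2)) for _ in range(10)]

best = max(pop, key=lambda w: fitness(w, Q))
```

In the paper, fitness would come from reservoir simulation runs rather than the map itself; the map only biases the search.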

  5. Projector primary-based optimization for superimposed projection mappings

    Science.gov (United States)

    Ahmed, Bilal; Lee, Jong Hun; Lee, Yong Yi; Lee, Kwan H.

    2018-01-01

    Recently, many researchers have focused on fully overlapping projections for three-dimensional (3-D) projection mapping systems, but reproducing a high-quality appearance with this technology remains a challenge. On top of existing color compensation-based methods, much effort is still required to faithfully reproduce an appearance free from artifacts, colorimetric inconsistencies, and inappropriate illuminance over the 3-D projection surface. According to our observations, this is because overlapping projections are commonly treated as an additive-linear mixture of color, an assumption that does not hold in practice. We propose a method that uses high-quality appearance data measured from original objects and regenerates the same appearance by projecting optimized images from multiple projectors, ensuring that the projection-rendered results look visually close to the real object. We prepare our target appearances by photographing original objects. Then, using calibrated projector-camera pairs, we compensate for missing geometric correspondences to make our method robust against noise. The heart of our method is a target appearance-driven adaptive sampling of the projection surface, followed by a representation of overlapping projections in terms of the projector-primary response. This yields projector-primary weights that facilitate blending under constraints. These samples populate a light transport-based system, which is then solved by minimizing the error to obtain the projection images in a noise-free manner, utilizing intersample overlaps. We make the best use of available hardware resources to recreate projection-mapped appearances that look as close to the original object as possible. Our experiments show compelling results in terms of visual similarity and colorimetric error.

  6. Surface registration technique for close-range mapping applications

    Science.gov (United States)

    Habib, Ayman F.; Cheng, Rita W. T.

    2006-08-01

    Close-range mapping applications such as cultural heritage restoration, virtual reality modeling for the entertainment industry, and anatomical feature recognition for medical activities require 3D data that is usually acquired by high-resolution close-range laser scanners. Since these datasets are typically captured from different viewpoints and/or at different times, accurate registration is a crucial procedure for 3D modeling of mapped objects. Several registration techniques are available that work directly with the raw laser points or with features extracted from the point cloud. Examples include the commonly known Iterative Closest Point (ICP) algorithm and a recently proposed technique based on matching spin-images. This research focuses on developing a surface matching algorithm based on the Modified Iterated Hough Transform (MIHT) and ICP to register 3D data. The proposed algorithm works directly with the raw 3D laser points and does not assume point-to-point correspondence between two laser scans. The algorithm can simultaneously establish correspondence between two surfaces and estimate the transformation parameters relating them. An experiment with two partially overlapping laser scans of a small object demonstrates successful registration with the proposed algorithm. A high quality of fit between the two scans is achieved, an improvement over the results obtained using the spin-image technique. The results demonstrate the feasibility of the proposed algorithm for registering 3D laser scanning data in close-range mapping applications to help with the generation of complete 3D models.
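The ICP building block referenced above can be illustrated with a minimal 2-D version: iteratively match nearest points and solve the closed-form rigid alignment. This is a toy sketch, not the authors' MIHT-based method; the synthetic "scan" and all constants are invented:

```python
import math

def icp_2d(P, Q, iters=5):
    """Minimal 2-D ICP: match each point to its nearest neighbor, then solve
    the closed-form least-squares rotation/translation, and repeat."""
    cur = [tuple(p) for p in P]
    for _ in range(iters):
        pairs = [(p, min(Q, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2))
                 for p in cur]
        px = sum(p[0] for p, _ in pairs) / len(pairs)
        py = sum(p[1] for p, _ in pairs) / len(pairs)
        qx = sum(q[0] for _, q in pairs) / len(pairs)
        qy = sum(q[1] for _, q in pairs) / len(pairs)
        s_cos = sum((p[0]-px)*(q[0]-qx) + (p[1]-py)*(q[1]-qy) for p, q in pairs)
        s_sin = sum((p[0]-px)*(q[1]-qy) - (p[1]-py)*(q[0]-qx) for p, q in pairs)
        th = math.atan2(s_sin, s_cos)           # optimal rotation angle
        c, s = math.cos(th), math.sin(th)
        tx, ty = qx - (c*px - s*py), qy - (s*px + c*py)
        cur = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in cur]
    err = sum(min((x-q[0])**2 + (y-q[1])**2 for q in Q)
              for x, y in cur) / len(cur)
    return cur, err

# synthetic "scan": a rigidly moved, reordered copy of the reference points
P = [(float(i), float(i % 3)) for i in range(8)]
cx = sum(x for x, _ in P) / len(P); cy = sum(y for _, y in P) / len(P)
th, tx, ty = 0.1, 0.1, 0.05
Q = [(math.cos(th)*(x-cx) - math.sin(th)*(y-cy) + cx + tx,
      math.sin(th)*(x-cx) + math.cos(th)*(y-cy) + cy + ty) for x, y in P]
Q = Q[::-1]                      # no point-to-point correspondence given
aligned, err = icp_2d(P, Q)
```

Note that, as in the abstract, no point-to-point correspondence is assumed: the nearest-neighbor step recovers it, which is also why plain ICP needs a reasonable initial pose (the role the MIHT plays in the paper).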

  7. Simultaneous beam geometry and intensity map optimization in intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Lee, Eva K.; Fox, Tim; Crocker, Ian

    2006-01-01

    Purpose: In current intensity-modulated radiation therapy (IMRT) plan optimization, the focus is on finding either optimal beam angles (or other beam delivery parameters such as field segments, couch angles, and gantry angles) or optimal beam intensities. In this article we offer a mixed integer programming (MIP) approach for simultaneously determining an optimal intensity map and optimal beam angles for IMRT delivery. Using this approach, we pursue an experimental study designed to (a) gauge differences in plan quality metrics with respect to different tumor sites and different MIP treatment planning models, and (b) test the concept of a critical-normal-tissue ring (a tissue ring of 5 mm thickness drawn around the planning target volume (PTV)) and its use for designing conformal plans. Methods and Materials: Our treatment planning models use two classes of decision variables to capture the beam configuration and intensities simultaneously. Binary (0/1) variables capture 'on'/'off' or 'yes'/'no' decisions for each field, and nonnegative continuous variables represent the intensities of beamlets. Binary and continuous variables are also used for each voxel to capture dose level and dose deviation from target bounds. The treatment planning models explicitly incorporate the following planning constraints: (a) upper/lower/mean dose-based constraints, (b) dose-volume and equivalent-uniform-dose (EUD) constraints for critical structures, (c) homogeneity (underdose/overdose) constraints for the PTV, (d) coverage constraints for the PTV, and (e) the maximum number of beams allowed. Within this constrained solution space, five optimization strategies involving clinical objectives were analyzed: optimize total intensity to the PTV; optimize total intensity and then optimize conformity; optimize total intensity and then optimize homogeneity; minimize total dose to critical structures; minimize total dose to critical structures and optimize conformity
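The structure of such a model, binary beam selection coupled with continuous nonnegative intensities, can be illustrated with a deliberately tiny stand-in: exhaustive subset enumeration replaces the MIP branch-and-bound, and coordinate descent solves the nonnegative least-squares intensity subproblem. The dose matrix, target doses, and beam budget are invented for illustration:

```python
from itertools import combinations

# dose delivered per unit intensity: A[voxel][beam]  (illustrative numbers)
A = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]
target = [2, 2, 1]
MAX_BEAMS = 2           # the binary "maximum number of beams" constraint

def solve_intensities(beams, sweeps=100):
    """Nonnegative least-squares fit of beam intensities to the dose target,
    by exact coordinate descent with projection onto w >= 0."""
    w = {b: 0.0 for b in beams}
    for _ in range(sweeps):
        for b in beams:
            num = den = 0.0
            for v in range(len(target)):
                # residual at voxel v excluding beam b's own contribution
                r = target[v] - sum(A[v][o] * w[o] for o in beams if o != b)
                num += A[v][b] * r
                den += A[v][b] ** 2
            w[b] = max(0.0, num / den) if den else 0.0
    cost = sum((sum(A[v][b] * w[b] for b in beams) - target[v]) ** 2
               for v in range(len(target)))
    return w, cost

# binary part: enumerate beam subsets (stand-in for the MIP machinery)
best = min((solve_intensities(list(sub)) + (sub,)
            for k in range(1, MAX_BEAMS + 1)
            for sub in combinations(range(4), k)),
           key=lambda t: t[1])
w_opt, cost_opt, beams_opt = best
```

A real planning model adds the dose-volume, EUD, homogeneity, and coverage constraints from the abstract, which is precisely what makes an MIP solver necessary.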

  8. Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.

    Science.gov (United States)

    Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue

    2015-08-20

    Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under a trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method, based on a parametric model, for tilt carrier frequency CGHs placed outside the interferometer focus points. While retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) it separates the parasitic diffraction orders to improve the contrast of the interferograms, and (2) it achieves the largest line spacing to minimize sensitivity to fabrication errors. The method is applicable to common concave aspherical surfaces and is illustrated with CGH design examples.

  9. Adaptation of the MapMan ontology to biotic stress responses: application in solanaceous species

    Directory of Open Access Journals (Sweden)

    Stitt Mark

    2007-09-01

    Full Text Available Abstract Background The results of transcriptome microarray analysis are usually presented as a list of differentially expressed genes. As these lists can be long, it is hard to interpret the desired experimental treatment effect on the physiology of the analysed tissue, e.g. via selected metabolic or other pathways. For some organisms, gene ontologies and data visualization software have been implemented to overcome this problem, whereas for others, software adaptation is yet to be done. Results We present the classification of tentative potato contigs from the potato gene index (StGI) available from the Dana-Farber Cancer Institute (DFCI) into the MapMan ontology to enable the application of the MapMan family of tools to potato microarrays. Special attention has been focused on mapping genes that could not be annotated based on similarity to Arabidopsis genes alone, thus possibly representing genes unique to potato. 97 such genes were classified into functional BINs (i.e. functional classes) after manual annotation. A new pathway, focusing on biotic stress responses, has been added and can be used for all other organisms for which mappings have been done. The BIN representation on the potato 10 k cDNA microarray, in comparison with all putative potato gene sequences, has been tested. The functionality of the prepared potato mapping was validated with experimental data on plant response to viral infection. In total 43,408 unigenes were mapped into 35 corresponding BINs. Conclusion The potato mappings can be used to visualize up-to-date, publicly available, expressed sequence tags (ESTs) and other sequences from GenBank, in combination with metabolic pathways. Further expert work on potato annotations will be needed with the ongoing EST and genome sequencing of potato.
The current MapMan application for potato is directly applicable for analysis of data obtained on the potato 10 k cDNA microarray by TIGR (The Institute for Genomic Research) but can also be used

  10. An application of Geographic Information System in mapping flood ...

    African Journals Online (AJOL)

    Roland

    Department of Geography, Benue State University, Makurdi, Benue State, Nigeria; National Agency for the Control of AIDS (NACA), Central Area, Abuja, Nigeria. Accepted 20 May, 2013. This study deals with the application of Geographic Information Systems (GIS) in mapping flood risk zones in Makurdi Town. This study ...

  11. Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2004-01-01

    We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic to multi-clusters: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.

  12. Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic to multi-clusters: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.

  13. Heuristic query optimization for query multiple table and multiple clausa on mobile finance application

    Science.gov (United States)

    Indrayana, I. N. E.; P, N. M. Wirasyanti D.; Sudiartha, I. KG

    2018-01-01

    Mobile applications allow many users to access data without being limited by place and time. Over time, the data population of such an application grows, and data access time becomes a problem once the records reach tens of thousands to millions. The objective of this research is to maintain data execution performance for large record counts. One way to maintain access time performance is to apply query optimization; here the heuristic query optimization method is used. The application under study is a mobile-based financial application using a MySQL database with stored procedures. It is used by more than one business entity in a single database, enabling rapid data growth. Inside the stored procedures, queries are optimized using the heuristic method. Query optimization is performed on SELECT queries that involve more than one table with multiple clauses. Evaluation is done by comparing the average access time of optimized and unoptimized queries, also as the data population in the database grows. The evaluation results show that execution with heuristic query optimization is faster than execution without it.
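The core heuristic rule in play here, pushing selections below joins so the join only processes rows that can qualify, can be sketched on toy in-memory tables (a hypothetical schema; no MySQL specifics are implied):

```python
# toy tables: transactions and accounts (hypothetical schema)
accounts = [{"id": i, "entity": i % 4} for i in range(100)]
txns = [{"acct": i % 100, "amount": i} for i in range(1000)]

def naive(entity):
    # join first, then filter the (large) joined result
    joined = [(t, a) for t in txns for a in accounts if t["acct"] == a["id"]]
    return [(t, a) for t, a in joined if a["entity"] == entity]

def heuristic(entity):
    # heuristic rule: push the selection below the join, so the join
    # only ever sees the accounts belonging to the requested entity
    small = [a for a in accounts if a["entity"] == entity]
    by_id = {a["id"]: a for a in small}
    return [(t, by_id[t["acct"]]) for t in txns if t["acct"] in by_id]
```

Both functions return the same rows, but the heuristic version joins against 25 pre-filtered accounts instead of all 100, which is the effect the paper measures as reduced average access time.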

  14. Applications of intelligent optimization in biology and medicine current trends and open problems

    CERN Document Server

    Grosan, Crina; Tolba, Mohamed

    2016-01-01

    This volume provides updated, in-depth material on the application of intelligent optimization in biology and medicine. The aim of the book is to present solutions to the challenges and problems facing biology and medicine applications. This volume comprises 13 chapters, including an overview chapter, providing up-to-date, state-of-the-art research on the application of intelligent optimization for bioinformatics applications: DNA-based steganography, a modified Particle Swarm Optimization algorithm for solving the capacitated maximal covering location problem in healthcare systems, optimization methods for medical image super-resolution reconstruction, and breast cancer classification. Further chapters describe several bio-inspired approaches to MEDLINE text mining, DNA-binding proteins and classes, optimized tumor breast cancer classification combining random subspace and static classifier selection paradigms, and dental image registration. The book will be a useful compendium for a broad...

  15. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    Science.gov (United States)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. First, several polarimetric features are extracted from preprocessed data: the linear polarization intensities, and several statistical and physical decompositions such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike conventional partitioning clustering algorithms, simplifies non-spherical and non-linearly separable patterns in the data so that they can be clustered easily. In addition, to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g. a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification, e.g. 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
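The kernelized hard C-means step described above can be sketched in a few lines. The RBF kernel, the fixed gamma, and the toy blob data are illustrative stand-ins; the paper's PSO tuning of kernel parameters and feature selection is omitted:

```python
import math

def rbf(a, b, gamma=0.5):
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))

def kernel_cmeans(X, init_labels, k=2, iters=10, gamma=0.5):
    """Hard kernelized C-means: point-to-center distances are evaluated in
    the implicit RBF feature space, never forming centers explicitly."""
    K = [[rbf(a, b, gamma) for b in X] for a in X]
    labels = list(init_labels)
    for _ in range(iters):
        new = []
        for i in range(len(X)):
            best_c, best_d = labels[i], float("inf")
            for c in range(k):
                members = [j for j in range(len(X)) if labels[j] == c]
                if not members:
                    continue
                n = len(members)
                # ||phi(x_i) - m_c||^2 expanded via the kernel trick
                d = (K[i][i]
                     - 2.0 / n * sum(K[i][j] for j in members)
                     + 1.0 / n ** 2 * sum(K[j][l] for j in members
                                          for l in members))
                if d < best_d:
                    best_c, best_d = c, d
            new.append(best_c)
        if new == labels:
            break
        labels = new
    return labels

# two tight, well-separated blobs (stand-in for per-pixel polarimetric features)
X = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
     (3, 3), (3.1, 3), (3, 3.1), (3.1, 3.1)]
# seed the clusters from one sample of each blob
init = [0 if rbf(x, X[0]) >= rbf(x, X[4]) else 1 for x in X]
labels = kernel_cmeans(X, init)
```

In the paper, gamma (and the feature subset) would be chosen by PSO rather than fixed by hand.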

  16. The Arctic Observing Viewer: A Web-mapping Application for U.S. Arctic Observing Activities

    Science.gov (United States)

    Cody, R. P.; Manley, W. F.; Gaylord, A. G.; Kassin, A.; Villarreal, S.; Barba, M.; Dover, M.; Escarzaga, S. M.; Habermann, T.; Kozimor, J.; Score, R.; Tweedie, C. E.

    2015-12-01

    Although a great deal of progress has been made with various Arctic observing efforts, it can be difficult to assess that progress when so many agencies, organizations, research groups and others are advancing so rapidly over such a large expanse of the Arctic. To help meet the strategic needs of the U.S. SEARCH-AON program and facilitate the development of SAON and other related initiatives, the Arctic Observing Viewer (AOV; http://ArcticObservingViewer.org) has been developed. This web mapping application compiles detailed information pertaining to U.S. Arctic observing efforts. Contributing partners include the U.S. NSF, USGS, ACADIS, ADIwg, AOOS, a2dc, AON, ARMAP, BAID, IASOA, INTERACT, and others. Over 7700 observation sites are currently in the AOV database, and the application allows users to visualize, navigate, select, run advanced searches, draw, print, and more. During 2015, the web mapping application was enhanced by the addition of a query builder that allows users to create rich and complex queries. AOV is founded on principles of software and data interoperability and includes an emerging "Project" metadata standard, which uses ISO 19115-1 and compatible web services. Substantial efforts have focused on maintaining and centralizing all database information. To keep up with emerging technologies, the AOV data set has been structured and centralized within a relational database, and the application front end has been ported to HTML5 to enable mobile access. Other enhancements include an embedded Apache Solr search platform, which provides users with the capability to perform advanced searches, and a web-based administrative data management system that allows administrators to add, update, and delete information in real time. We encourage all collaborators to use AOV tools and services for their own purposes and to help us extend the impact of our efforts and ensure AOV complements other cyber-resources. Reinforcing dispersed but

  17. Systemic Analysis, Mapping, Modeling, and Simulation of the Advanced Accelerator Applications Program

    International Nuclear Information System (INIS)

    Guan, Yue; Laidler, James J.; Morman, James A.

    2002-01-01

    Advanced chemical separations methods envisioned for use in the Department of Energy Advanced Accelerator Applications (AAA) program have been studied using the Systemic Analysis, Mapping, Modeling, and Simulation (SAMMS) method. This integrated and systematic method considers all aspects of the studied process as one dynamic and inter-dependent system. This particular study focuses on two subjects: the chemical separation processes for treating spent nuclear fuel, and the associated non-proliferation implications of such processing. Two levels of chemical separation models are developed: level 1 models treat the chemical process stages by groups; and level 2 models depict the details of each process stage. Models to estimate the proliferation risks based on proliferation barrier assessment are also developed. This paper describes the research conducted for the single-stratum design in the AAA program. Further research conducted for the multi-strata designs will be presented later. The method and models described in this paper can help in the design of optimized processes that fulfill the chemical separation process specifications and non-proliferation requirements. (authors)

  18. Linearly Recurrent Circle Map Subshifts and an Application to Schrödinger Operators

    CERN Document Server

    Adamczewski, B

    2001-01-01

    We discuss circle map sequences and subshifts generated by them. We give a characterization of those sequences among them which are linearly recurrent. As an application we deduce zero-measure spectrum for a class of discrete one-dimensional Schrödinger operators with potentials generated by circle maps.

  19. MAPPING THE GALAXY COLOR–REDSHIFT RELATION: OPTIMAL PHOTOMETRIC REDSHIFT CALIBRATION STRATEGIES FOR COSMOLOGY SURVEYS

    Energy Technology Data Exchange (ETDEWEB)

    Masters, Daniel; Steinhardt, Charles; Faisst, Andreas [Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125 (United States); Capak, Peter [Spitzer Science Center, California Institute of Technology, Pasadena, CA 91125 (United States); Stern, Daniel; Rhodes, Jason [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Ilbert, Olivier [Aix Marseille Universite, CNRS, LAM (Laboratoire d'Astrophysique de Marseille) UMR 7326, F-13388, Marseille (France); Salvato, Mara [Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Schmidt, Samuel [Department of Physics, University of California, Davis, CA 95616 (United States); Longo, Giuseppe [Department of Physics, University Federico II, via Cinthia 6, I-80126 Napoli (Italy); Paltani, Stephane; Coupon, Jean [Department of Astronomy, University of Geneva, ch. d'Ecogia 16, CH-1290 Versoix (Switzerland); Mobasher, Bahram [Department of Physics and Astronomy, University of California, Riverside, CA 92521 (United States); Hoekstra, Henk [Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA, Leiden (Netherlands); Hildebrandt, Hendrik [Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, D-53121 Bonn (Germany); Speagle, Josh [Department of Astronomy, Harvard University, 60 Garden Street, MS 46, Cambridge, MA 02138 (United States); Kalinich, Adam [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Brodwin, Mark [Department of Physics and Astronomy, University of Missouri, Kansas City, MO 64110 (United States); Brescia, Massimo; Cavuoti, Stefano [Astronomical Observatory of Capodimonte—INAF, via Moiariello 16, I-80131, Napoli (Italy)

    2015-11-01

    Calibrating the photometric redshifts of ≳10⁹ galaxies for upcoming weak lensing cosmology experiments is a major challenge for the astrophysics community. The path to obtaining the required spectroscopic redshifts for training and calibration is daunting, given the anticipated depths of the surveys and the difficulty in obtaining secure redshifts for some faint galaxy populations. Here we present an analysis of the problem based on the self-organizing map, a method of mapping the distribution of data in a high-dimensional space and projecting it onto a lower-dimensional representation. We apply this method to existing photometric data from the COSMOS survey selected to approximate the anticipated Euclid weak lensing sample, enabling us to robustly map the empirical distribution of galaxies in the multidimensional color space defined by the expected Euclid filters. Mapping this multicolor distribution lets us determine where—in galaxy color space—redshifts from current spectroscopic surveys exist and where they are systematically missing. Crucially, the method lets us determine whether a spectroscopic training sample is representative of the full photometric space occupied by the galaxies in a survey. We explore optimal sampling techniques and estimate the additional spectroscopy needed to map out the color–redshift relation, finding that sampling the galaxy distribution in color space in a systematic way can efficiently meet the calibration requirements. While the analysis presented here focuses on the Euclid survey, similar analysis can be applied to other surveys facing the same calibration challenge, such as DES, LSST, and WFIRST.
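A toy version of the self-organizing map underlying this analysis can be written compactly: here a 2-D "color" space is projected onto a 1-D grid of nodes, standing in for many photometric bands mapped to a 2-D SOM. All sizes, schedules, and data below are illustrative:

```python
import math
import random
random.seed(1)

# a 1-D self-organizing map projecting 2-D "colors" onto 10 nodes
N = 10
weights = [[random.random(), random.random()] for _ in range(N)]
data = [[i / 19.0, 1.0 - i / 19.0] for i in range(20)]   # a 1-D manifold in 2-D

def bmu(x):
    """Index of the best-matching unit (nearest node) for sample x."""
    return min(range(N), key=lambda n: sum((weights[n][d] - x[d]) ** 2
                                           for d in range(2)))

def qerr():
    """Mean distance from each sample to its best-matching unit."""
    return sum(math.dist(weights[bmu(x)], x) for x in data) / len(data)

init_err = qerr()
for epoch in range(60):
    lr = 0.5 * (1.0 - epoch / 60.0)                 # decaying learning rate
    radius = max(1.0, 3.0 * (1.0 - epoch / 60.0))   # shrinking neighborhood
    for x in data:
        b = bmu(x)
        for n in range(N):
            # pull each node toward x, weighted by grid distance to the BMU
            h = math.exp(-((n - b) ** 2) / (2.0 * radius ** 2))
            for d in range(2):
                weights[n][d] += lr * h * (x[d] - weights[n][d])
final_err = qerr()
```

After training, similar inputs land on nearby nodes, which is what lets one ask, per SOM cell, whether spectroscopic redshifts exist for the galaxies mapped there.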

  20. Multi-objective evolutionary optimization for constructing neural networks for virtual reality visual data mining: application to geophysical prospecting.

    Science.gov (United States)

    Valdés, Julio J; Barton, Alan J

    2007-05-01

    A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces is constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
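The Pareto-front selection step this approach relies on can be illustrated independently of the neural-network machinery; the objective values below (classification error vs. structure-preservation error, both minimized) are invented:

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing every objective."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# candidate solutions scored on (classification error, structure distortion)
solutions = [(0.10, 0.9), (0.20, 0.5), (0.20, 0.7), (0.40, 0.2), (0.50, 0.3)]
front = pareto_front(solutions)
```

Each surviving point is a different trade-off between the two objectives, and in the paper each one yields its own virtual reality space.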

  1. SAR China Land Mapping Project: Development, Production and Potential Applications

    International Nuclear Information System (INIS)

    Zhang, Lu; Guo, Huadong; Liu, Guang; Fu, Wenxue; Yan, Shiyong; Song, Rui; Ji, Peng; Wang, Xinyuan

    2014-01-01

    Large-area, seamless synthetic aperture radar (SAR) mosaics can reflect overall environmental conditions and highlight general trends in observed areas from a macroscopic standpoint, and effectively support research at the global scale, which is in high demand now across scientific fields. The SAR China Land Mapping Project (SCLM), supported by the Digital Earth Science Platform Project initiated and managed by the Center for Earth Observation and Digital Earth, Chinese Academy of Sciences (CEODE), is introduced in this paper. This project produced a large-area SAR mosaic dataset and generated the first complete seamless SAR map covering the entire land area of China using EnviSat-ASAR images. The value of the mosaic map is demonstrated by some potential applications in studies of urban distribution, rivers and lakes, geologic structures, geomorphology and paleoenvironmental change

  2. An Optimization Approach to Improving Collections of Shape Maps

    DEFF Research Database (Denmark)

    Nguyen, Andy; Ben‐Chen, Mirela; Welnicka, Katarzyna

    2011-01-01

    pairwise map independently does not take full advantage of all existing information. For example, a notorious problem with computing shape maps is the ambiguity introduced by the symmetry problem: for two similar shapes which have reflectional symmetry there exist two maps which are equally favorable... shape maps connecting our collection, we propose to add the constraint of global map consistency, requiring that any composition of maps between two shapes should be independent of the path chosen in the network. This requirement can help us choose among the equally good symmetric alternatives, or help...
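The global map-consistency constraint, that composing maps around any cycle of shapes should return the identity, can be checked directly when maps are stored as vertex index correspondences. The 4-point "shapes" and maps below are illustrative:

```python
def compose(f, g):
    """Composition g∘f of two index maps given as lists."""
    return [g[f[i]] for i in range(len(f))]

def is_cycle_consistent(maps, cycle):
    """Compose the maps around a cycle of shapes; consistent iff identity."""
    n = len(maps[cycle[0], cycle[1]])
    acc = list(range(n))
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        acc = compose(acc, maps[a, b])
    return acc == list(range(n))

# three 4-point "shapes"; maps given as vertex index correspondences
maps = {
    (0, 1): [1, 0, 3, 2],
    (1, 2): [2, 3, 0, 1],
    (2, 0): [3, 2, 1, 0],    # a choice consistent with the other two maps
}
good = is_cycle_consistent(maps, [0, 1, 2])

maps[(2, 0)] = [0, 1, 2, 3]  # the "symmetric alternative": breaks consistency
bad = is_cycle_consistent(maps, [0, 1, 2])
```

This is exactly how path-independence disambiguates symmetric alternatives: only one of the two equally plausible maps closes the cycle.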

  3. An Intracranial Electroencephalography (iEEG) Brain Function Mapping Tool with an Application to Epilepsy Surgery Evaluation

    Directory of Open Access Journals (Sweden)

    Yinghua eWang

    2016-04-01

    Full Text Available Object: Before epilepsy surgeries, intracranial electroencephalography (iEEG) is often employed in function mapping and epileptogenic foci localization. Although the implanted electrodes provide crucial information for epileptogenic zone resection, a convenient clinical tool for electrode position registration and brain function mapping visualization is still lacking. In this study, we developed a Brain Function Mapping (BFM) Tool, which facilitates electrode position registration and brain function mapping visualization, with an application to epilepsy surgeries. Methods: The BFM Tool performs electrode location registration and function mapping based on brain models pre-defined in other software. In addition, the electrode node and mapping properties, such as the node size/color, edge color/thickness, and mapping method, can be adjusted easily using the settings panel. Moreover, users may manually import/export location and connectivity data to generate figures for further applications. The role of this software is demonstrated by a clinical study of language area localization. Results: The BFM Tool helps clinical doctors and researchers visualize implanted electrodes and brain functions in an easy, quick and flexible manner. Conclusions: Our tool provides convenient electrode registration and easy brain function visualization, and has good performance. It is clinically oriented and is easy to deploy and use. The BFM Tool is suitable for epilepsy and other clinical iEEG applications.

  4. A new optimization algorithm with application to nonlinear MPC

    Directory of Open Access Journals (Sweden)

    Frode Martinsen

    2005-01-01

    Full Text Available This paper investigates the application of an SQP optimization algorithm to nonlinear model predictive control. It considers feasible vs. infeasible path methods, sequential vs. simultaneous methods, and reduced- vs. full-space methods. A new optimization algorithm, coined rFOPT, which remains feasible with respect to inequality constraints, is introduced. The suitable choices among these various strategies are assessed informally through a small CSTR case study. The case study also considers the effect various discretization methods have on the optimization problem.

  5. Correspondence optimization in 2D standardized carotid wall thickness map by description length minimization: A tool for increasing reproducibility of 3D ultrasound-based measurements.

    Science.gov (United States)

    Chen, Yimin; Chiu, Bernard

    2016-12-01

    The previously described 2D standardized vessel-wall-plus-plaque thickness (VWT) maps, constructed from 3D ultrasound vessel wall measurements using an arc-length (AL) scaling approach, adjusted for the geometric variability of carotid arteries and have allowed for comparisons of VWT distributions in longitudinal and cross-sectional studies. However, this mapping technique did not optimize point correspondence of the carotid arteries investigated. The potential misalignment may lead to errors in point-wise VWT comparisons. In this paper, we developed and validated an algorithm based on steepest description length (DL) descent to optimize the point correspondence implied by the 2D VWT maps. The previously described AL approach was applied to obtain initial 2D maps for a group of carotid arteries. The 2D maps were reparameterized based on an iterative steepest DL descent approach, which consists of the following two steps. First, landmarks established by resampling the 2D maps were aligned using the Procrustes algorithm. Then, the gradient of the DL with respect to horizontal and vertical reparameterizations of each landmark on the 2D maps was computed, and the 2D maps were subsequently deformed in the direction of the steepest descent of DL. These two steps were repeated until convergence. The quality of the correspondence was evaluated in a phantom study and an in vivo study involving ten carotid arteries enrolled in a 3D ultrasound interscan variability study. The correspondence quality was evaluated in terms of the compactness and generalization ability of the statistical shape model built based on the established point correspondence in both studies. In the in vivo study, the effect of the proposed algorithm on interscan variability of VWT measurements was evaluated by comparing the percentage of landmarks with statistically significant VWT change before and after point correspondence optimization. The statistical shape model constructed with optimized
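The Procrustes alignment step used in each iteration above can be sketched with the classical SVD solution (a generic orthogonal Procrustes fit, not the authors' exact implementation):

```python
import numpy as np

def procrustes_rotation(A, B):
    """Orthogonal matrix R minimizing ||A @ R - B||_F (Procrustes via SVD)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# rotate a 2-D landmark set by a known angle, then recover the alignment
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.5, 1.0]])
B = A @ R_true
R = procrustes_rotation(A, B)
err = np.linalg.norm(A @ R - B)
```

For shape landmarks one often restricts the solution to proper rotations by flipping the sign of the last singular vector when det(R) < 0 (the Kabsch correction), which the sketch above omits.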

  6. Application of Response Surface Methodology for Optimizing Oil ...

    African Journals Online (AJOL)

    Application of Response Surface Methodology for Optimizing Oil Extraction Yield From ... from tropical almond seed by the use of response surface methodology (RSM).

  7. ZT Optimization: An Application Focus.

    Science.gov (United States)

    Tuley, Richard; Simpson, Kevin

    2017-03-17

    Significant research has been performed on the challenge of improving thermoelectric materials, with maximum peak figure of merit, ZT, the most common target. We use an approximate thermoelectric material model, matched to real materials, to demonstrate that when an application is known, average ZT is a significantly better optimization target. We quantify this difference with some examples, with one scenario showing that changing the doping to increase peak ZT by 19% can lead to a performance drop of 16%. The importance of average ZT means that the temperature at which the ZT peak occurs should be given similar weight to the value of the peak. An ideal material for an application operates across the maximum peak ZT, otherwise maximum performance occurs when the peak value is reduced in order to improve the peak position.
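The peak-vs-average distinction can be illustrated with a toy single-peak ZT(T) model (a Gaussian in temperature; all parameter values here are hypothetical, not the paper's matched material model):

```python
import math

def zt(T, peak, T_peak, width=150.0):
    """Toy single-peak ZT(T) curve: Gaussian in temperature (illustrative only)."""
    return peak * math.exp(-((T - T_peak) / width) ** 2)

def average_zt(peak, T_peak, T_lo, T_hi, n=200):
    """Average ZT over the operating temperature range via simple sampling."""
    Ts = [T_lo + (T_hi - T_lo) * i / (n - 1) for i in range(n)]
    return sum(zt(T, peak, T_peak) for T in Ts) / n

# hypothetical operating range of a generator
T_lo, T_hi = 400.0, 600.0
# material A: higher peak ZT, but peaked outside the operating range
avg_a = average_zt(1.2, 800.0, T_lo, T_hi)
# material B: lower peak ZT, centred inside the operating range
avg_b = average_zt(1.0, 500.0, T_lo, T_hi)
```

Material A has the higher peak ZT, yet material B delivers the higher average ZT over the 400-600 K operating range, which is what drives device performance in the scenario the record describes.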

  8. Design Optimization of Time- and Cost-Constrained Fault-Tolerant Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2005-01-01

    In this paper we present an approach to the design optimization of fault-tolerant embedded systems for safety-critical applications. Processes are statically scheduled and communications are performed using the time-triggered protocol. We use process re-execution and replication for tolerating...... transient faults. Our design optimization approach decides the mapping of processes to processors and the assignment of fault-tolerant policies to processes such that transient faults are tolerated and the timing constraints of the application are satisfied. We present several heuristics which are able...

  9. Multiphysics simulation electromechanical system applications and optimization

    CERN Document Server

    Dede, Ercan M; Nomura, Tsuyoshi

    2014-01-01

    This book highlights a unique combination of numerical tools and strategies for handling the challenges of multiphysics simulation, with a specific focus on electromechanical systems as the target application. Features: introduces the concept of design via simulation, along with the role of multiphysics simulation in today's engineering environment; discusses the importance of structural optimization techniques in the design and development of electromechanical systems; provides an overview of the physics commonly involved with electromechanical systems for applications such as electronics, ma

  10. A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2012-05-31

    This paper presents an architecture template for next-generation high performance computing systems specifically targeted at irregular applications. We start from the observation that full-system interconnection and memory bandwidth in future-generation systems is expected to grow by a factor of 10. In order to keep up with such communication capacity, while still resorting to fine-grained multithreading as the main way to tolerate the unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we show how such an architecture template must be coupled with specific techniques to optimize bandwidth utilization and achieve maximum scalability. We propose a technique based on memory reference aggregation, together with its hardware implementation, as one such optimization technique. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results demonstrate the benefits provided by both the multi-core approach and the reference aggregation bandwidth optimization technique.
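The memory reference aggregation idea can be sketched as coalescing outstanding word references that fall in the same memory line into a single request (a software analogy of the proposed hardware mechanism; the 64-byte line size is an assumption):

```python
from collections import defaultdict

LINE_BYTES = 64  # assumed memory-line / network-packet granularity

def aggregate_references(addresses):
    """Coalesce outstanding word references into one request per memory line."""
    lines = defaultdict(list)
    for addr in addresses:
        lines[addr // LINE_BYTES].append(addr)
    return lines  # one network/memory request per key

# seven fine-grained word references collapse into four line requests
refs = [0, 8, 16, 64, 72, 200, 4096]
reqs = aggregate_references(refs)
```

Fewer, wider requests amortize per-request overhead and make better use of the available bandwidth, which is the effect the paper's hardware aggregation targets.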

  11. Design and application of star map simulation system for star sensors

    Science.gov (United States)

    Wu, Feng; Shen, Weimin; Zhu, Xifang; Chen, Yuheng; Xu, Qinquan

    2013-12-01

    Modern star sensors automatically measure attitude, which is essential to spacecraft performance. They achieve very accurate attitudes by applying algorithms to process star maps obtained by the star camera mounted on them. Star maps therefore play an important role in designing star cameras and developing processing algorithms. Furthermore, star maps provide significant support for fully examining the performance of star sensors before launch. However, it is not always convenient to supply abundant star maps by taking pictures of the sky. Thus, computer-aided star map simulation attracts a lot of interest by virtue of its low cost and convenience. A method is proposed to simulate star maps by programming against and extending the optical design program ZEMAX, and a star map simulation system is established. First, based on an analysis of the working procedure of star sensors for measuring attitude and the basic method of optical system design in ZEMAX, the principle of simulating star sensor imaging is set out in detail. The theory of adding false stars and noise and of outputting maps is discussed, and the corresponding approaches are proposed. Then, through external programming, the star map simulation program is designed and produced, and its user interface and operation are introduced. Applications of the star map simulation method in evaluating optical systems, star image extraction algorithms and star identification algorithms, and in calibrating system errors are presented. The proposed simulation method provides substantial support to the study of star sensors and effectively improves their performance.
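The imaging step of such a simulator can be caricatured as rendering catalog stars onto a pixel grid with a point spread function plus sensor noise (an analytic toy model; the paper's system instead drives the actual optical design in ZEMAX):

```python
import math
import random

def render_star_map(stars, size=64, sigma=1.2, noise_sigma=2.0, seed=0):
    """Render point stars onto a pixel grid: Gaussian PSF plus Gaussian sensor noise.

    stars: list of (x, y, intensity) in pixel coordinates (toy model).
    """
    rng = random.Random(seed)
    img = [[rng.gauss(0.0, noise_sigma) for _ in range(size)] for _ in range(size)]
    for sx, sy, inten in stars:
        # only fill a small neighbourhood around each star centre
        for y in range(max(0, int(sy) - 4), min(size, int(sy) + 5)):
            for x in range(max(0, int(sx) - 4), min(size, int(sx) + 5)):
                r2 = (x - sx) ** 2 + (y - sy) ** 2
                img[y][x] += inten * math.exp(-r2 / (2 * sigma ** 2))
    return img

img = render_star_map([(20.0, 30.0, 255.0), (45.5, 10.2, 120.0)])
```

Extraction and identification algorithms can then be exercised against such frames, with false stars simply added to the input list and the noise level tuned to the sensor under study.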

  12. An Optimal DEM Reconstruction Method for Linear Array Synthetic Aperture Radar Based on Variational Model

    Directory of Open Access Journals (Sweden)

    Shi Jun

    2015-02-01

    Full Text Available Downward-looking Linear Array Synthetic Aperture Radar (LASAR) has many potential applications, including topographic mapping, disaster monitoring and reconnaissance, especially in mountainous areas. However, limited by platform size, its resolution in the linear array direction is always far lower than those in the range and azimuth directions. This disadvantage leads to blurring of Three-Dimensional (3D) images in the linear array direction and restricts the application of LASAR. To date, research on 3D SAR image enhancement has focused on sparse recovery techniques, in which case the one-to-one mapping of the Digital Elevation Model (DEM) breaks down. To overcome this, an optimal DEM reconstruction method for LASAR based on a variational model is discussed, in an effort to optimize the DEM and the associated scattering coefficient map and to minimize the Mean Square Error (MSE). Simulation experiments show that the variational model is more suitable for DEM enhancement over all kinds of terrain than the Orthogonal Matching Pursuit (OMP) and Least Absolute Shrinkage and Selection Operator (LASSO) methods.

  13. Introduction of digital soil mapping techniques for the nationwide regionalization of soil condition in Hungary; the first results of the DOSoReMI.hu (Digital, Optimized, Soil Related Maps and Information in Hungary) project

    Science.gov (United States)

    Pásztor, László; Laborczi, Annamária; Szatmári, Gábor; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Dobos, Endre

    2014-05-01

    Due to former soil surveys and mapping activities, a significant amount of soil information has accumulated in Hungary. Present soil data requirements are mainly fulfilled with these available datasets, either by their direct use or after certain specific and generally fortuitous thematic and/or spatial inference. Due to increasingly frequent discrepancies between the available and the expected data, there can be notable shortcomings in the accuracy and reliability of the delivered products. With a recently started project (DOSoReMI.hu; Digital, Optimized, Soil Related Maps and Information in Hungary) we aim to significantly extend how countrywide soil information requirements can be satisfied in Hungary. We have started to compile digital soil related maps that optimally fulfil national and international demands in terms of thematic, spatial and temporal accuracy. The spatial scale of the targeted countrywide, digital, thematic maps is at least 1:50.000 (approx. 50-100 meter raster resolution). DOSoReMI.hu results are also planned to contribute to the European part of GSM.net products. In addition to the auxiliary spatial data themes related to soil forming factors and/or to indicative environmental elements, we heavily lean on the various national soil databases. The set of applied digital soil mapping techniques is gradually broadened, incorporating and eventually integrating geostatistical, data mining and GIS tools. In our paper we present the first results.
- Regression kriging (RK) has been used for the spatial inference of certain quantitative data, like particle size distribution components, rootable depth and organic matter content. In the course of RK-based mapping, spatially segmented categorical information provided by the SMUs of the Digital Kreybig Soil Information System (DKSIS) has also been used in the form of indicator variables.
- Classification and regression trees (CART) were

  14. Optimization of application execution in the ViroLab Virtual Laboratory

    NARCIS (Netherlands)

    Malawski, M.; Kocot, J.; Ciepiela, E.; Bubak, M.; Bubak, M.; Turała, M.; Wiatr, K.

    2008-01-01

    The objective of the presented work is to describe an optimization engine for the ViroLab Virtual Laboratory runtime. The Laboratory specific model - invocation of operations on special objects which reside on Grid resources - imposes a new approach to optimization of Grid application execution.

  15. Topology Optimization - broadening the areas of application

    DEFF Research Database (Denmark)

    Bendsøe, Martin P.; Lund, Erik; Olhoff, Niels

    2005-01-01

    This paper deals with recent developments of topology optimization techniques for application in some new types of design problems. The emphasis is on recent work of the Danish research groups at Aalborg University and at the Technical University of Denmark and focus is on the central role that t...

  16. Robust subspace estimation using low-rank optimization theory and applications

    CERN Document Server

    Oreifej, Omar

    2014-01-01

    Various fundamental applications in computer vision and machine learning require finding the basis of a certain subspace. Examples of such applications include face detection, motion estimation, and activity recognition. Increasing interest has recently been placed on this area as a result of significant advances in the mathematics of matrix rank optimization. Interestingly, robust subspace estimation can be posed as a low-rank optimization problem, which can be solved efficiently using techniques such as the method of Augmented Lagrange Multipliers. In this book, the authors discuss fundame

  17. Uniform, optimal signal processing of mapped deep-sequencing data.

    Science.gov (United States)

    Kumar, Vibhor; Muratani, Masafumi; Rayan, Nirmala Arul; Kraus, Petra; Lufkin, Thomas; Ng, Huck Hui; Prabhakar, Shyam

    2013-07-01

    Despite their apparent diversity, many problems in the analysis of high-throughput sequencing data are merely special cases of two general problems, signal detection and signal estimation. Here we adapt formally optimal solutions from signal processing theory to analyze signals of DNA sequence reads mapped to a genome. We describe DFilter, a detection algorithm that identifies regulatory features in ChIP-seq, DNase-seq and FAIRE-seq data more accurately than assay-specific algorithms. We also describe EFilter, an estimation algorithm that accurately predicts mRNA levels from as few as 1-2 histone profiles (R ∼0.9). Notably, the presence of regulatory motifs in promoters correlates more with histone modifications than with mRNA levels, suggesting that histone profiles are more predictive of cis-regulatory mechanisms. We show by applying DFilter and EFilter to embryonic forebrain ChIP-seq data that regulatory protein identification and functional annotation are feasible despite tissue heterogeneity. The mathematical formalism underlying our tools facilitates integrative analysis of data from virtually any sequencing-based functional profile.
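Signal detection on mapped read counts can be illustrated with a crude windowed filter: score each position by its windowed mean against the global background and threshold (a stand-in sketch only; DFilter derives a formally optimal linear filter rather than this moving average):

```python
def detect_peaks(counts, window=5, threshold=3.0):
    """Flag positions where the windowed mean exceeds `threshold` times the
    global background mean (crude stand-in for an optimal detection filter)."""
    n = len(counts)
    background = sum(counts) / n
    half = window // 2
    hits = []
    for i in range(half, n - half):
        w = counts[i - half:i + half + 1]
        if sum(w) / window > threshold * background:
            hits.append(i)
    return hits

# synthetic read-coverage track with one enriched region around index 50
track = [1] * 100
for j in range(48, 53):
    track[j] = 20
hits = detect_peaks(track)
```

The formal signal-processing treatment in the record replaces this ad hoc moving average with a filter shape and threshold chosen to be optimal for the assay's noise model, which is what makes one algorithm serve ChIP-seq, DNase-seq and FAIRE-seq alike.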

  18. Generalized Smooth Transition Map Between Tent and Logistic Maps

    Science.gov (United States)

    Sayed, Wafaa S.; Fahmy, Hossam A. H.; Rezk, Ahmed A.; Radwan, Ahmed G.

    There is a continuous demand for novel chaotic generators to be employed in various modeling and pseudo-random number generation applications. This paper proposes a new chaotic map which is a general form for one-dimensional discrete-time maps employing the power function, with the tent and logistic maps as special cases. The proposed map uses extra parameters to provide responses that fit multiple applications for which conventional maps were not enough. The proposed generalization also covers maps whose iterative relations are not based on polynomials, i.e. with fractional powers. We introduce a framework for analyzing the proposed map mathematically and predicting its behavior for various combinations of its parameters. In addition, we present and explain the transition map, which produces intermediate responses as the parameters vary from the values corresponding to the tent map to those corresponding to the logistic map. We study the properties of the proposed map, including the graph of the map equation, the general bifurcation diagram and its key points, output sequences, and the maximum Lyapunov exponent. We present further explorations, such as the effects of scaling, the system response with respect to the new parameters, and operating ranges other than the transition region. Finally, a stream cipher system based on the generalized transition map validates its utility for image encryption applications. The system allows the construction of more efficient encryption keys, which enhances its sensitivity and other cryptographic properties.
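A transition between the two maps can be illustrated with a simple convex blend (an illustrative homotopy only; the paper's generalized map uses a power-function form with tent and logistic as special cases):

```python
def tent(x):
    """Tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def logistic(x, r=4.0):
    """Logistic map on [0, 1] at full chaos (r = 4)."""
    return r * x * (1 - x)

def transition_map(x, a):
    """Convex blend between the tent map (a = 0) and the logistic map (a = 1).

    NOTE: illustrative only; not the power-function generalization of the paper.
    """
    return (1 - a) * tent(x) + a * logistic(x)
```

Sweeping `a` from 0 to 1 produces the kind of intermediate responses the abstract describes for its transition region, with the two conventional maps recovered at the endpoints.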

  19. Optimization approaches to volumetric modulated arc therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  20. An Automated Pipeline for Engineering Many-Enzyme Pathways: Computational Sequence Design, Pathway Expression-Flux Mapping, and Scalable Pathway Optimization.

    Science.gov (United States)

    Halper, Sean M; Cetnar, Daniel P; Salis, Howard M

    2018-01-01

    Engineering many-enzyme metabolic pathways suffers from the design curse of dimensionality. There are an astronomical number of synonymous DNA sequence choices, though relatively few will express an evolutionary robust, maximally productive pathway without metabolic bottlenecks. To solve this challenge, we have developed an integrated, automated computational-experimental pipeline that identifies a pathway's optimal DNA sequence without high-throughput screening or many cycles of design-build-test. The first step applies our Operon Calculator algorithm to design a host-specific evolutionary robust bacterial operon sequence with maximally tunable enzyme expression levels. The second step applies our RBS Library Calculator algorithm to systematically vary enzyme expression levels with the smallest-sized library. After characterizing a small number of constructed pathway variants, measurements are supplied to our Pathway Map Calculator algorithm, which then parameterizes a kinetic metabolic model that ultimately predicts the pathway's optimal enzyme expression levels and DNA sequences. Altogether, our algorithms provide the ability to efficiently map the pathway's sequence-expression-activity space and predict DNA sequences with desired metabolic fluxes. Here, we provide a step-by-step guide to applying the Pathway Optimization Pipeline on a desired multi-enzyme pathway in a bacterial host.

  1. Mapping optimal areas of ecosystem services potential in Vilnius (Lithuania)

    Science.gov (United States)

    Pereira, Paulo; Depellegrin, Daniel; Misiune, Ieva; Cerda, Artemi

    2016-04-01

    Maps are fundamental to understanding the spatial pattern of natural and human impacts on the landscape (Brevik et al., 2016; Lavado Contador et al., 2009; Pereira et al., 2010a,b). Urban areas are subjected to intense human pressure (Beniston et al., 2015), contributing to the degradation of ecosystems and reducing their capacity to provide services in quality and quantity (Requier-Desjardins et al., 2011; Zhang et al., 2011). Environments that can provide a high number and quality of ecosystem services (ES) must be identified and managed correctly, since these are spaces that can mitigate the impacts of human settlements and improve their quality. It is thus of major importance to identify the areas that can provide better ES (Depellegrin and Pereira, 2015). The aim of this work is to identify areas with high ES potential in Vilnius city. We identified a total of four different land uses: agricultural areas (32.48%), water bodies (1.46%), forest and semi-natural areas (31.91%) and artificial surfaces (34.16%). CORINE land cover 2006 was used as the base information to classify ES potential. The potential of each land cover class was assessed by expert judgement, ranking each land use type from 0 (no potential) to 5 (high potential). In this work the sum of total regulating, providing and cultural ES was assessed. The areas with optimal ES were the ones with the sum of all ranks equal to or higher than the third quartile of each distribution. After identifying these areas, the data were mapped using ArcGIS software. The results showed that on average Vilnius city has a higher potential for regulating services (20.35±15.92), followed by cultural (14.43±8.81) and providing (14.26±8.87) services. There was a significant correlation among the different types of services: regulating vs cultural (0.92, p<0.001), regulating vs providing (0.72, p<0.001) and providing vs cultural (0.65, p<0.001). The results of Moran's I autocorrelation index showed that regulating (Z-score: 10
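The selection rule "sum of all ranks equal to or higher than the third quartile" can be sketched as follows (nearest-rank quartile; the input totals are hypothetical):

```python
def optimal_es_areas(scores):
    """Return indices of areas whose summed ES rank reaches the third quartile,
    mirroring the record's 'sum of ranks >= Q3' selection rule."""
    ordered = sorted(scores)
    # simple Q3: value at the 75th-percentile rank (nearest-rank method)
    q3 = ordered[max(0, int(round(0.75 * len(ordered))) - 1)]
    return [i for i, s in enumerate(scores) if s >= q3]

# hypothetical summed (regulating + providing + cultural) ranks per map cell
totals = [3, 8, 12, 5, 14, 9, 2, 11]
best = optimal_es_areas(totals)
```

The surviving indices are the cells that would be flagged as optimal ES areas and mapped in the GIS step.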

  2. Finite Precision Logistic Map between Computational Efficiency and Accuracy with Encryption Applications

    Directory of Open Access Journals (Sweden)

    Wafaa S. Sayed

    2017-01-01

    Full Text Available Chaotic systems appear in many applications such as pseudo-random number generation, text encryption, and secure image transfer. Numerical solutions of these systems using digital software or hardware inevitably deviate from the expected analytical solutions. Chaotic orbits produced by finite precision systems do not exhibit the infinite period expected under the assumptions of infinite simulation time and precision. In this paper, a digital implementation of the generalized logistic map with signed parameter is considered. We present a fixed-point hardware realization of a pseudo-random number generator based on the logistic map, which exhibits a trade-off between computational efficiency and accuracy. Several factors, such as the precision used, the order of execution of the operations, and the parameter and initial point values, affect the properties of the finite precision map. For the positive and negative parameter cases, the studied properties include bifurcation points, output range, maximum Lyapunov exponent, and period length. The performance of the finite precision logistic map is compared in the two cases. A basic stream cipher system is realized to evaluate the system's performance for encryption applications at different bus sizes, regarding the encryption key size, hardware requirements, maximum clock frequency, and NIST, correlation, histogram, entropy, and Mean Absolute Error analyses of encrypted images.
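A minimal software model of such a fixed-point logistic map, showing the finite period the abstract describes (the 16-bit fractional width is an assumption, not the paper's configuration):

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # fixed-point representation of 1.0

def logistic_fixed(x):
    """One iteration of x -> 4x(1-x) in unsigned fixed point with FRAC_BITS
    fractional bits; the double-width product is rescaled by a single shift."""
    return (4 * x * (ONE - x)) >> FRAC_BITS

def cycle_length(x0, max_iter=1 << 20):
    """Finite precision forces the orbit into a cycle; measure its length."""
    seen = {}
    x, i = x0, 0
    while x not in seen and i < max_iter:
        seen[x] = i
        x = logistic_fixed(x)
        i += 1
    return i - seen.get(x, i)  # 0 only if max_iter was hit first

period = cycle_length(int(0.123 * ONE))
```

With only 2^16 + 1 representable states the orbit must repeat by the pigeonhole principle, in contrast to the infinite period of the ideal real-valued map; the measured period depends on the precision, operation order and seed, exactly the factors the paper studies.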

  3. Platform-dependent optimization considerations for mHealth applications

    Science.gov (United States)

    Kaghyan, Sahak; Akopian, David; Sarukhanyan, Hakob

    2015-03-01

    Modern mobile devices contain integrated sensors that enable a multitude of applications in fields such as mobile health (mHealth), entertainment and sports. Human physical activity monitoring is one such emerging application. A range of challenges relates to activity monitoring tasks, particularly finding optimal solutions and architectures for the corresponding mobile software development. This work addresses mobile computations related to the integrated sensors that can be used for activity monitoring, such as accelerometers, gyroscopes, integrated global positioning system (GPS) receivers and WLAN-based positioning. Each of these sensing data sources has its own characteristics, such as specific data formats, data rates and signal acquisition durations, and these specifications affect energy consumption. Energy consumption also varies significantly as sensor data acquisition is followed by data analysis, including various transformations and signal processing algorithms. This paper addresses several aspects of more optimal activity monitoring implementations exploiting state-of-the-art capabilities of modern platforms.

  4. Methodological approach to strategic performance optimization

    OpenAIRE

    Hell, Marko; Vidačić, Stjepan; Garača, Željko

    2009-01-01

    This paper presents a matrix approach to the measuring and optimization of organizational strategic performance. The proposed model is based on the matrix presentation of strategic performance, which follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Development of a quantitative record of strategic objectives provides an arena for the application of linear programming (LP), which is a mathematical tech...

  5. Chaotically encoded particle swarm optimization algorithm and its applications

    International Nuclear Information System (INIS)

    Alatas, Bilal; Akin, Erhan

    2009-01-01

    This paper proposes a novel particle swarm optimization (PSO) algorithm, the chaotically encoded particle swarm optimization algorithm (CENPSOA), based on the notion of chaos numbers, which have recently been proposed as a novel interpretation of numbers. Various chaos arithmetic and evaluation measures that can be used in CENPSOA are described. Furthermore, CENPSOA has been designed to be effectively utilized in data mining applications.
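A common way chaos enters PSO, which this sketch illustrates, is to replace the uniform RNG with a logistic-map stream, for example when initializing the swarm (CENPSOA's chaos-number encoding is more elaborate than this; all names and parameters here are illustrative):

```python
def chaotic_sequence(n, x0=0.7, r=4.0, burn_in=50):
    """Logistic-map stream used in place of a uniform RNG, a common
    ingredient of chaos-based PSO variants."""
    x, out = x0, []
    for i in range(burn_in + n):
        x = r * x * (1 - x)
        if i >= burn_in:
            out.append(x)
    return out

def chaotic_swarm(n_particles, lo, hi):
    """Initialize particle positions by scaling chaotic samples into [lo, hi]."""
    cs = chaotic_sequence(n_particles)
    return [lo + (hi - lo) * c for c in cs]

swarm = chaotic_swarm(8, -5.0, 5.0)
```

Chaotic streams are deterministic yet ergodic over the search interval, which is why they are attractive as drop-in replacements for pseudo-random draws in swarm initialization and velocity updates.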

  6. Apodization Optimization of FBG Strain Sensor for Quasi-Distributed Sensing Measurement Applications

    Directory of Open Access Journals (Sweden)

    Fahd Chaoui

    2016-01-01

    Full Text Available A novel optimized apodization of a Fiber Bragg Grating Sensor (FBGS) for quasi-distributed strain sensing applications is developed and introduced in this paper. The main objective of the proposed optimization is to obtain a reflectivity level higher than 90% and a side lobe level around −40 dB, which is suitable for quasi-distributed strain sensing. For this purpose, different design parameters, such as the apodization profile, grating length, and refractive index, have been investigated to enhance and optimize the FBGS design. The performance of the proposed apodization has then been compared, in terms of reflectivity, side lobe level (SLL), and full width at half maximum (FWHM), with apodization profiles proposed by other authors. The optimized sensor is integrated into a quasi-distributed sensing system of 8 sensors, demonstrating high reliability. A wide strain sensitivity range for each channel has also been achieved in the quasi-distributed system. The results prove the efficiency of the proposed optimization, which can be further implemented for any quasi-distributed sensing application.

  7. DEVELOPING WEB MAPPING APPLICATION USING ARCGIS SERVER WEB APPLICATION DEVELOPMENT FRAMEWORK (ADF) FOR GEOSPATIAL DATA GENERATED DURING REHABILITATION AND RECONSTRUCTION PROCESS OF POST-TSUNAMI 2004 DISASTER IN ACEH

    Directory of Open Access Journals (Sweden)

    Nizamuddin Nizamuddin

    2014-04-01

    Full Text Available ESRI ArcGIS Server is equipped with the ArcGIS Server Web Application Development Framework (ADF) and ArcGIS Web Controls integration for Visual Studio.NET. Both the ArcGIS Server Manager for .NET and the ArcGIS Web Controls can easily be utilized for developing ASP.NET-based ESRI Web mapping applications. In this study we implemented both tools to develop an ASP.NET-based ESRI Web mapping application for geospatial data generated during the rehabilitation and reconstruction process of the post-tsunami 2004 disaster in Aceh province. The rehabilitation and reconstruction process produced a tremendous amount of geospatial data. This method was chosen because, in the process of developing a web mapping application, one can easily and quickly create mapping services for huge geospatial data sets and also develop a Web mapping application without writing any code. However, when utilizing Visual Studio.NET 2008, one needs some coding ability.

  8. Topic maps standard and its application in library and information science

    Directory of Open Access Journals (Sweden)

    Fatemeh Baji

    2014-09-01

    Full Text Available Topic maps are an ISO standard (ISO 13250) used for presenting information about the structure of information resources. The initial idea of this standard was raised in 1991 and, due to its strength, it turned into an ISO standard. This paper investigates the concepts and model of topic maps and aims to outline applications of this standard in the library and information science (LIS) realm. Technically, a topic map is defined as a type of XML or SGML document. Research shows that this standard is compatible with some LIS techniques and rules, especially in knowledge organization, but it attempts to apply these rules on the Web. Given the challenges the LIS field faces in adapting traditional knowledge organization techniques to the Web, the topic maps standard can help solve such problems, and this is what some experts in LIS have tried to do.

  9. On the application of Discrete Time Optimal Control Concepts to ...

    African Journals Online (AJOL)

    On the application of Discrete Time Optimal Control Concepts to Economic Problems. ... Journal of the Nigerian Association of Mathematical Physics ... Abstract. An extension of the use of the maximum principle to solve Discrete-time Optimal Control Problems (DTOCP), in which the state equations are in the form of general ...

  10. Optimizing Transmission Network Expansion Planning by Means of a Chaotic Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abdelaziz

    2015-08-01

    Full Text Available This paper presents an application of the chaotic differential evolution optimization approach, a meta-heuristic, to solving transmission network expansion planning (TNEP) using an AC model associated with reactive power planning (RPP). The reliability/redundancy network analysis optimization problems presented here involve the selection of components, with multiple choices and redundancy levels, that produce maximum benefit subject to cost, weight, and volume constraints. Classical mathematical methods have failed to handle non-convexities and non-smoothness in optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have attracted a lot of attention due to their ability to find an almost globally optimal solution in reliability/redundancy optimization problems. Evolutionary algorithms (EAs), paradigms of the evolutionary computation field, are stochastic and robust meta-heuristics useful for solving reliability/redundancy optimization problems. EAs such as genetic algorithms, evolutionary programming, evolution strategies, and differential evolution are being used to find global or near-global optimal solutions. The differential evolution algorithm (DEA) is a population-based algorithm with powerful global searching capability, but it usually suffers from low convergence speed and poor searching capability in the later evolution stages. A new chaotic differential evolution algorithm (CDE) based on the cat map is recommended, which combines DE with a chaotic searching algorithm. Simulation results and comparisons show that the chaotic differential evolution algorithm using the cat map is competitive with, and stable in performance relative to, other optimization approaches and other maps.
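
    The abstract names the cat map as the chaos source but does not detail how it is coupled to DE, so the sketch below is a minimal illustration under assumptions: Arnold's cat map on the unit square seeds the initial population, and a standard DE/rand/1/bin generation performs the search. The sphere function stands in for the TNEP objective, and all parameter values are generic defaults.

```python
import numpy as np

def cat_map_sequence(n, x0=0.31, y0=0.47):
    """Arnold's cat map (x, y) -> (x+y mod 1, x+2y mod 1); returns n x-values in [0, 1)."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
        xs[i] = x
    return xs

def chaotic_de_step(pop, fitness, f=0.5, cr=0.9, rng=None):
    """One DE/rand/1/bin generation with greedy selection (minimization)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + f * (pop[b] - pop[c])          # differential mutation
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True                    # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) < fitness(pop[i]):
            new_pop[i] = trial
    return new_pop

sphere = lambda x: float(np.sum(x * x))
pop = -5.0 + 10.0 * cat_map_sequence(12 * 4).reshape(12, 4)  # chaotic init in [-5, 5]^4
best0 = min(sphere(x) for x in pop)
rng = np.random.default_rng(0)
for _ in range(100):
    pop = chaotic_de_step(pop, sphere, rng=rng)
```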

  11. Regional Geology Web Map Application Development: Javascript v2.0

    International Nuclear Information System (INIS)

    Russell, Glenn

    2017-01-01

    This is a milestone report for the FY2017 continuation of the Spent Fuel, Storage, and Waste Technology (SFSWT) program (formerly the Used Fuel Disposal (UFD) program) development of the Regional Geology Web Mapping Application by the Idaho National Laboratory Geospatial Science and Engineering group. This application was developed for general public use and is an interactive web-based application built in Javascript to visualize, reference, and analyze geological features of the US pertinent to the SFSWT program. This tool is a version upgrade from Adobe FLEX technology. It is designed to facilitate informed decision making regarding the geology of the continental US relevant to the SFSWT program.

  12. Regional Geology Web Map Application Development: Javascript v2.0

    Energy Technology Data Exchange (ETDEWEB)

    Russell, Glenn [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-06-19

    This is a milestone report for the FY2017 continuation of the Spent Fuel, Storage, and Waste Technology (SFSWT) program (formerly the Used Fuel Disposal (UFD) program) development of the Regional Geology Web Mapping Application by the Idaho National Laboratory Geospatial Science and Engineering group. This application was developed for general public use and is an interactive web-based application built in Javascript to visualize, reference, and analyze geological features of the US pertinent to the SFSWT program. This tool is a version upgrade from Adobe FLEX technology. It is designed to facilitate informed decision making regarding the geology of the continental US relevant to the SFSWT program.

  13. An application of the multilayer perceptron: Solar radiation maps in Spain

    Energy Technology Data Exchange (ETDEWEB)

    Hontoria, L.; Aguilera, J. [Grupo Investigacion y Desarrollo en Energia Solar y Automatica, Dpto. de Ingenieria Electronica, de Telecomunicaciones y Automatica, Escuela Politecnica Superior de Jaen, Campus de las Lagunillas, Universidad de Jaen, 23071 Jaen (Spain); Zufiria, P. [Grupo de Redes Neuronales, Dpto. de Matematica Aplicada a las Tecnologias de la Informacion, ETSI Telecomunicaciones, UPM Ciudad Universitaria s/n, 28040 Madrid (Spain)

    2005-11-01

    In this work an application of a methodology to obtain solar radiation maps is presented. This methodology is based on a neural network system [Lippmann, R.P., 1987. An introduction to computing with neural nets. IEEE ASSP Magazine, 4-22] called the Multi-Layer Perceptron (MLP) [Haykin, S., 1994. Neural Networks. A Comprehensive Foundation. Macmillan Publishing Company; Hornik, K., Stinchcombe, M., White, H., 1989. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359-366]. To obtain a solar radiation map it is necessary to know the solar radiation at many points spread widely across the zone where the map is going to be drawn. For most locations all over the world, records of these data (solar radiation on any scale, daily or hourly values) are non-existent. Only very few locations have the privilege of good meteorological stations where records of solar radiation have been registered. But even in those locations with historical records of solar data, the quality of these solar series is not as good as it should be for most purposes. In addition, the number of points on the maps (real sites) that it is necessary to work with makes drawing solar radiation maps difficult. Nevertheless, with the application of the methodology proposed in this paper, this problem has been solved and solar radiation maps have been obtained for a small region of Spain: Jaen province, a southern province of Spain between parallels 38°25' N and 37°25' N and meridians 4°10' W and 2°10' W, and for a larger region: Andalucia, the southernmost region of Spain, situated between parallels 38°40' N and 36°00' N and meridians 7°30' W and 1°40' W. (author)
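
    The record describes fitting an MLP to sparse station measurements and then evaluating it over the map area. A minimal sketch of that idea, with entirely synthetic data and a hand-rolled one-hidden-layer network trained by full-batch gradient descent (the paper's actual inputs, architecture, and training scheme are not given here):

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic "stations": inputs are normalized (latitude, longitude);
# the target is a smooth radiation-like field (purely illustrative data)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

# one-hidden-layer perceptron (2 -> 16 -> 1) with tanh activation
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(Xin):
    H = np.tanh(Xin @ W1 + b1)
    return H, (H @ W2 + b2).ravel()

mse0 = float(np.mean((forward(X)[1] - y) ** 2))
lr = 0.05
for _ in range(8000):
    H, pred = forward(X)
    err = pred - y
    gW2 = H.T @ err[:, None] / len(X)
    gb2 = np.array([err.mean()])
    dH = (err[:, None] @ W2.T) * (1.0 - H ** 2)   # backpropagate through tanh
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
mse = float(np.mean((forward(X)[1] - y) ** 2))
```

    Once trained, the network can be evaluated on a regular grid of coordinates to render the map itself.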

  14. Application of approximate pattern matching in two dimensional spaces to grid layout for biochemical network maps.

    Science.gov (United States)

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance their understanding or traceability. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with other existing grid layouts. Use of an approximate pattern matching algorithm quickly redistributes the nodes laid out by the fast, non-grid algorithm onto the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html.
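
    The paper's approximate pattern matching step is not reproduced here; the sketch below only illustrates the underlying idea of taking preprocessed (non-grid) node coordinates and placing them on square grid points, using a simple greedy nearest-free-point assignment as a hypothetical stand-in:

```python
import numpy as np
from itertools import product

def snap_to_grid(coords, grid_size):
    """Greedily place each laid-out node on the nearest free point of a square grid
    (an illustrative stand-in for the paper's approximate pattern matching step)."""
    free = set(product(range(grid_size), repeat=2))
    mins, maxs = coords.min(axis=0), coords.max(axis=0)
    span = np.where(maxs - mins == 0, 1.0, maxs - mins)
    # scale the preprocessed layout into the grid's extent
    scaled = (coords - mins) / span * (grid_size - 1)
    placed = {}
    for idx, p in enumerate(scaled):
        g = min(free, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
        free.remove(g)          # each grid point holds at most one node
        placed[idx] = g
    return placed

rng = np.random.default_rng(2)
layout = rng.random((10, 2))    # preprocessed (non-grid) node coordinates
grid = snap_to_grid(layout, 5)
```

    A production layout would refine this greedy assignment to reduce edge crossings, which is where the approximate pattern matching of the paper comes in.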

  15. Application of approximate pattern matching in two dimensional spaces to grid layout for biochemical network maps.

    Directory of Open Access Journals (Sweden)

    Kentaro Inoue

    Full Text Available BACKGROUND: For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance their understanding or traceability. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. RESULTS: We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with other existing grid layouts. CONCLUSIONS: Use of an approximate pattern matching algorithm quickly redistributes the nodes laid out by the fast, non-grid algorithm onto the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html.

  16. Bandgap Optimization of Perovskite Semiconductors for Photovoltaic Applications.

    Science.gov (United States)

    Xiao, Zewen; Zhou, Yuanyuan; Hosono, Hideo; Kamiya, Toshio; Padture, Nitin P

    2018-02-16

    The bandgap is the most important physical property that determines the potential of semiconductors for photovoltaic (PV) applications. This Minireview discusses the parameters affecting the bandgap of perovskite semiconductors that are being widely studied for PV applications, and the recent progress in the optimization of the bandgaps of these materials. Perspectives are also provided for guiding future research in this area. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. An Offline-Online Android Application for Hazard Event Mapping Using WebGIS Open Source Technologies

    Science.gov (United States)

    Olyazadeh, Roya; Jaboyedoff, Michel; Sudmeier-Rieux, Karen; Derron, Marc-Henri; Devkota, Sanjaya

    2016-04-01

    Nowadays, Free and Open Source Software (FOSS) plays an important role in better understanding and managing disaster risk reduction around the world. National and local governments, NGOs and other stakeholders are increasingly seeking and producing data on hazards. Most hazard event inventories and land use mapping are based on remote sensing data, with little ground truthing, creating difficulties depending on the terrain and accessibility. Open Source WebGIS tools offer an opportunity for quicker and easier ground truthing of critical areas in order to analyse hazard patterns and triggering factors. This study presents a secure mobile-map application for hazard event mapping using Open Source WebGIS technologies such as the Postgres database, PostGIS, Leaflet, Cordova and Phonegap. The objectives of this prototype are: 1. an offline-online Android mobile application with advanced geospatial visualisation; 2. easy collection and storage of hazard event information; 3. centralized data storage accessible by all services (smartphone, standard web browser); 4. improved data management through active participation in hazard event mapping and storage. This application has been implemented as a low-cost, rapid and participatory method for recording impacts from hazard events and includes geolocation (GPS data and Internet), visualizing maps with an overlay of satellite images, viewing uploaded images and events as cluster points, and drawing and adding event information. The data can be recorded in the offline (Android device) or online version (all browsers) and subsequently uploaded to the server whenever internet access is available. All events and records can be visualized by an administrator and made public after approval. Different user levels can be defined to access the data for communicating the information.
    This application was tested for landslides in post-earthquake Nepal but can be used for any other type of hazard such as flood, avalanche

  18. Application of processing maps in the optimization of the parameters of a hot working process. Part 2. Processing maps of a microalloyed medium carbon steel

    International Nuclear Information System (INIS)

    Al Omar, A.; Cabrera, J.M.; Prado, J.M.

    1997-01-01

    Part 1 of this work presents a revision of the general characteristics of the so-called dynamic materials model on which processing maps are developed. In this part, following the methodology described in part 1, processing maps of a microalloyed medium carbon steel are developed over a temperature range from 900 to 1150 °C at true strain rates ranging from 10⁻⁴ to 10 s⁻¹. The analysis of these maps revealed a domain of dynamic recrystallization centred at about 1150 °C and a strain rate of 10 s⁻¹, and a domain of dynamic recovery centred at 900 °C and 0.1 s⁻¹. (Author) 20 refs

  19. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.

    Science.gov (United States)

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods mainly suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors affect the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can also be easily applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.
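
    The calibration idea — tune the index weights so the weighted vulnerability index correlates as strongly as possible with observed nitrate — can be sketched without the actual GRG solver. Below, a projected finite-difference gradient ascent on the Pearson correlation serves as a simple numpy stand-in for GRG; the 7 factor ratings, well count, bounds, and "true" weights are all synthetic and illustrative:

```python
import numpy as np

def calibrate_weights(ratings, obs, w0, lo=1.0, hi=5.0, lr=0.05, iters=200):
    """Maximize Pearson correlation between the weighted index (ratings @ w)
    and observations, keeping weights inside [lo, hi] (a stand-in for GRG)."""
    def corr(w):
        return float(np.corrcoef(ratings @ w, obs)[0, 1])
    w = np.asarray(w0, float).copy()
    best_w, best_c = w.copy(), corr(w)
    eps = 1e-4
    for _ in range(iters):
        g = np.empty_like(w)
        for i in range(len(w)):           # central finite-difference gradient
            e = np.zeros_like(w); e[i] = eps
            g[i] = (corr(w + e) - corr(w - e)) / (2 * eps)
        w = np.clip(w + lr * g, lo, hi)   # project back onto the bound constraints
        c = corr(w)
        if c > best_c:
            best_w, best_c = w.copy(), c
    return best_w, best_c

# synthetic example: 7 DRASTIC-style factor ratings at 40 wells (data is illustrative)
rng = np.random.default_rng(3)
ratings = rng.uniform(1.0, 10.0, size=(40, 7))
true_w = np.array([5, 4, 3, 2, 1, 5, 3], float)
nitrate = ratings @ true_w + rng.normal(0.0, 2.0, 40)
w_cal, r_cal = calibrate_weights(ratings, nitrate, w0=np.full(7, 3.0))
```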

  20. Teaching learning based optimization algorithm and its engineering applications

    CERN Document Server

    Rao, R Venkata

    2016-01-01

    Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
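
    The two phases the book describes — a teacher phase pulling learners toward the best solution and away from the class mean, and a learner phase of pairwise interaction — can be sketched directly. This is a minimal illustrative TLBO on a sphere test function; the population size, iteration count, and teaching-factor handling are common defaults, not taken from the book:

```python
import numpy as np

def tlbo_minimize(f, bounds, n=20, iters=100, seed=0):
    """Minimal Teaching-Learning-Based Optimization (teacher + learner phases)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    d = len(lo)
    X = rng.uniform(lo, hi, (n, d))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: move learners toward the best solution, away from the mean
        teacher = X[fit.argmin()]
        tf = rng.integers(1, 3)                    # teaching factor, 1 or 2
        Xnew = np.clip(X + rng.random((n, d)) * (teacher - tf * X.mean(0)), lo, hi)
        fnew = np.array([f(x) for x in Xnew])
        improved = fnew < fit
        X[improved], fit[improved] = Xnew[improved], fnew[improved]
        # Learner phase: each learner interacts with a random peer
        for i in range(n):
            j = rng.integers(n)
            if j == i:
                continue
            step = (X[i] - X[j]) if fit[i] < fit[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(d) * step, lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
    return X[fit.argmin()], float(fit.min())

best_x, best_f = tlbo_minimize(lambda x: float(np.sum(x ** 2)),
                               (np.full(3, -5.0), np.full(3, 5.0)))
```

    Note that, as the book emphasizes, TLBO has no algorithm-specific tuning parameters beyond population size and iteration count.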

  1. Non-euclidean simplex optimization. [Application to potentiometric titration of Pu

    Energy Technology Data Exchange (ETDEWEB)

    Silver, G.L.

    1977-08-15

    Geometric optimization techniques useful for studying chemical equilibrium traditionally rely upon principles of euclidean geometry, but such algorithms may also be based upon principles of a non-euclidean geometry. The sequential simplex method is adapted to the hyperbolic plane, and application of optimization to problems such as the potentiometric titration of plutonium is suggested.

  2. Aspects Concerning the Optimization of Authentication Process for Distributed Applications

    Directory of Open Access Journals (Sweden)

    Ion Ivan

    2008-06-01

    Full Text Available There will be presented the types of distributed applications. The quality characteristics for distributed applications will be analyzed. There will be established the ways to assign access rights. The authentication category will be analyzed. We will propose an algorithm for the optimization of authentication process.For the application “Evaluation of TIC projects” the algorithm proposed will be applied.

  3. AERYN: A simple standalone application for visualizing and enhancing elemental maps

    International Nuclear Information System (INIS)

    Mouchi, Vincent; Crowley, Quentin G.; Ubide, Teresa

    2016-01-01

    Interpretation of high spatial resolution elemental mineral maps can be hindered by high frequency fluctuations, as well as by strong naturally-occurring or analytically-induced variations. We have developed a new standalone program named AERYN (Aspect Enhancement by Removing Yielded Noise) to produce more reliable element distribution maps from previously reduced geochemical data. The program is Matlab-based, designed with a graphic user interface and is capable of rapidly generating elemental maps from data acquired by a range of analytical techniques. A visual interface aids selection of appropriate outlier rejection and drift-correction parameters, thereby facilitating recognition of subtle elemental fluctuations which may otherwise be obscured. Examples of use are provided for quantitative trace element maps acquired using both laser ablation (LA-) ICP-MS and electron probe microanalysis (EPMA) of the cold-water coral Lophelia pertusa. We demonstrate how AERYN allows recognition of high frequency elemental fluctuations, including those which occur perpendicular to the maximum concentration gradient. Such data treatment complements commonly used processing methods to provide greater flexibility and control in producing elemental maps from micro-analytical techniques. - Highlights: • Matlab-based application to improve visualization of elemental maps. • Capable of detrending when data set shows drift. • Compatible with processed data text files from LA-ICP-MS, EDS and EPMA. • Option to filter geochemical trends to observe high-frequency fluctuations.
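
    AERYN itself is Matlab-based and its exact algorithms are not given in the record; the snippet below is only an illustrative stand-in for its two highlighted operations, outlier rejection and drift correction, using median/MAD clipping and column-wise linear detrending on a synthetic elemental map:

```python
import numpy as np

def clean_elemental_map(img, mad_k=3.5, detrend=True):
    """Median/MAD spike clipping plus column-wise linear drift removal
    (an illustrative stand-in for AERYN's outlier rejection and drift correction)."""
    out = img.astype(float).copy()
    med = np.median(out)
    mad = np.median(np.abs(out - med))
    if mad == 0:
        mad = 1.0
    out = np.clip(out, med - mad_k * mad, med + mad_k * mad)  # reject outlier spikes
    if detrend:
        cols = np.arange(out.shape[1])
        col_means = out.mean(axis=0)
        slope, intercept = np.polyfit(cols, col_means, 1)
        # subtract the fitted acquisition drift, keeping the overall mean level
        out -= slope * cols + intercept - col_means.mean()
    return out

# synthetic map: flat signal + linear scan drift + one hot spike
base = 100.0 + np.tile(np.linspace(0.0, 10.0, 50), (20, 1))
img = base.copy()
img[5, 25] = 1e4
clean = clean_elemental_map(img)
```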

  4. Contribution to the evaluation and to the improvement of multi-objective optimization methods: application to the optimization of nuclear fuel reloading pattern

    International Nuclear Information System (INIS)

    Collette, Y.

    2002-01-01

    methods called MOCOSA and NSCOSA, mainly based on the COSA method, which simulates a genetic algorithm using only tools from simulated annealing and, therefore, without a crossover operator. The MOCOSA and NSCOSA methods use tools from the MOGA and NSGA methods based on genetic algorithms. Another problem related to multi-objective optimization is the problem of data visualization. One type of multi-objective problem is treated recurrently in the scientific literature: the bi-objective problem, which is easy to illustrate. We propose, in this thesis, some methods allowing the visualization of solution sets of arbitrary dimension and, particularly, the MCDM method ('Multi-objective Concordance Discordance Mapping'), which transforms a real multi-objective optimization problem (a problem with more than two objective functions) into a simpler bi-objective problem. We have also defined new multidimensional transformation methods that are able to conserve a relation of order (such as the dominance relation). The application of this transformation gives birth to the MCDM PC method ('Multi-objective Concordance Discordance Mapping Pareto Conservative'). Moreover, we have defined a new classification of multi-objective optimization methods with the goal of easing the choice of a multi-objective optimization method for a given problem. To construct this classification, we extracted the most important elements of multi-objective optimization methods and organized these elements as a hierarchy. 'Navigation' through this hierarchy is done through simple questions asked of the user, in direct relationship to the given problem. These results are applied to the multi-objective optimization of nuclear core reload patterns, a problem composed of safety constraints and economic criteria. This combinatorial optimization problem can be illustrated using a checkerboard covered with pawns, where a pawn corresponds to a nuclear assembly.
    The goal is to find a distribution of pawns so as to minimize

  5. The Optimization by Using the Learning Styles in the Adaptive Hypermedia Applications

    Science.gov (United States)

    Hamza, Lamia; Tlili, Guiassa Yamina

    2018-01-01

    This article addresses the learning style as a criterion for optimization of adaptive content in hypermedia applications. First, the authors present the different optimization approaches proposed in the area of adaptive hypermedia systems whose goal is to define the optimization problem in this type of system. Then, they present the architecture…

  6. Combinatorial materials synthesis and high-throughput screening: an integrated materials chip approach to mapping phase diagrams and discovery and optimization of functional materials.

    Science.gov (United States)

    Xiang, X D

    Combinatorial materials synthesis methods and high-throughput evaluation techniques have been developed to accelerate the process of materials discovery and optimization and phase-diagram mapping. Analogous to integrated circuit chips, integrated materials chips containing thousands of discrete different compositions or continuous phase diagrams, often in the form of high-quality epitaxial thin films, can be fabricated and screened for interesting properties. Microspot x-ray method, various optical measurement techniques, and a novel evanescent microwave microscope have been used to characterize the structural, optical, magnetic, and electrical properties of samples on the materials chips. These techniques are routinely used to discover/optimize and map phase diagrams of ferroelectric, dielectric, optical, magnetic, and superconducting materials.

  7. Optimal CT scanning parameters for commonly used tumor ablation applicators

    International Nuclear Information System (INIS)

    Eltorai, Adam E.M.; Baird, Grayson L.; Monu, Nicholas; Wolf, Farrah; Seidler, Michael; Collins, Scott; Kim, Jeomsoon; Dupuy, Damian E.

    2017-01-01

    Highlights: • This study aimed to determine optimal scanning parameters for commonly-used tumor ablation applicators. • The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance. • Optimum combinations for each probe are provided. - Abstract: Purpose: CT beam-hardening artifact can make visualization of the tumor margin and its relationship to the ablation applicator tip challenging. This study aimed to determine optimal scanning parameters for commonly-used applicators. Materials and methods: Applicators were placed in ex-vivo cow livers with implanted mock tumors, surrounded by bolus gel. Various CT scans were performed at 440 mA with 5 mm slice thickness, varying kVp, scan time, ASiR, scan type, pitch, and reconstruction algorithm. Four radiologists blindly scored the images quantitatively for image quality and artifact. Results: A significant relationship between probe, kVp level, ASiR level, and reconstruction algorithm was observed for both image artifact and image quality (both p < 0.0001). Specifically, certain combinations of kVp, ASiR, and reconstruction algorithm yield better images than other combinations. In particular, one probe performed equivalently to or better than any competing probe considered here, regardless of the kVp, ASiR, and reconstruction algorithm combination. Conclusion: The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance for certain combinations of kVp, ASiR, reconstruction algorithm and the probes at their disposal. Optimum combinations for each probe are provided.

  8. Optimal CT scanning parameters for commonly used tumor ablation applicators

    Energy Technology Data Exchange (ETDEWEB)

    Eltorai, Adam E.M. [Warren Alpert Medical School of Brown University (United States); Baird, Grayson L. [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Lifespan Biostatistics Core (United States); Rhode Island Hospital (United States); Monu, Nicholas; Wolf, Farrah; Seidler, Michael [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Rhode Island Hospital (United States); Collins, Scott [Department of Diagnostic Imaging (United States); Rhode Island Hospital (United States); Kim, Jeomsoon [Department of Medical Physics (United States); Rhode Island Hospital (United States); Dupuy, Damian E., E-mail: ddupuy@comcast.net [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Rhode Island Hospital (United States)

    2017-04-15

    Highlights: • This study aimed to determine optimal scanning parameters for commonly-used tumor ablation applicators. • The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance. • Optimum combinations for each probe are provided. - Abstract: Purpose: CT beam-hardening artifact can make visualization of the tumor margin and its relationship to the ablation applicator tip challenging. This study aimed to determine optimal scanning parameters for commonly-used applicators. Materials and methods: Applicators were placed in ex-vivo cow livers with implanted mock tumors, surrounded by bolus gel. Various CT scans were performed at 440 mA with 5 mm slice thickness, varying kVp, scan time, ASiR, scan type, pitch, and reconstruction algorithm. Four radiologists blindly scored the images quantitatively for image quality and artifact. Results: A significant relationship between probe, kVp level, ASiR level, and reconstruction algorithm was observed for both image artifact and image quality (both p < 0.0001). Specifically, certain combinations of kVp, ASiR, and reconstruction algorithm yield better images than other combinations. In particular, one probe performed equivalently to or better than any competing probe considered here, regardless of the kVp, ASiR, and reconstruction algorithm combination. Conclusion: The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance for certain combinations of kVp, ASiR, reconstruction algorithm and the probes at their disposal. Optimum combinations for each probe are provided.

  9. An Ameliorative Whale Optimization Algorithm for Multi-Objective Optimal Allocation of Water Resources in Handan, China

    Directory of Open Access Journals (Sweden)

    Zhihong Yan

    2018-01-01

    Full Text Available With the deepening discrepancy between water supply and demand caused by water shortages, alleviating water shortages by optimizing water resource allocation has received extensive attention. How to allocate water resources optimally, rapidly, and effectively has become a challenging problem. Thus, this study employs a meta-heuristic swarm-based algorithm, the whale optimization algorithm (WOA). To overcome drawbacks such as relatively low convergence precision and convergence rates when applying the WOA to complex optimization problems, logistic mapping is used to initialize swarm locations, and inertia weighting is employed to improve the algorithm. The resulting ameliorative whale optimization algorithm (AWOA) shows substantially better convergence rates and precision than the WOA and particle swarm optimization algorithms, demonstrating relatively high reliability and applicability. A water resource allocation optimization model with optimal economic efficiency and least total water shortage volume is established for Handan, China, and solved by the AWOA. The allocation results better reflect actual water usage in Handan. In 2030, the p = 50% total water shortage is forecast as 404.34 × 10⁶ m³, or 14.8%. The shortage is mainly in the primary agricultural sector. The allocation results provide a reference for regional water resources management.
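
    The abstract names the two modifications (logistic-map initialization and inertia weighting) without giving their exact form, so the sketch below is an assumed illustration: a standard WOA loop (shrinking encirclement and spiral updates) seeded by a logistic map, with an assumed linearly decreasing inertia weight applied to the leader term. The objective, bounds, and schedule are generic placeholders, not the paper's water allocation model.

```python
import numpy as np

def logistic_init(n, d, lo, hi, x0=0.63, r=4.0):
    """Logistic-map initialization of whale positions (as in the AWOA)."""
    vals = np.empty(n * d)
    x = x0
    for i in range(n * d):
        x = r * x * (1.0 - x)
        vals[i] = x
    return lo + vals.reshape(n, d) * (hi - lo)

def awoa_minimize(f, lo, hi, d=4, n=25, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = logistic_init(n, d, lo, hi)
    fit = np.array([f(x) for x in X])
    best = X[fit.argmin()].copy()
    for t in range(iters):
        a = 2.0 * (1 - t / iters)            # encirclement coefficient shrinks 2 -> 0
        w = 0.9 - 0.5 * t / iters            # inertia weight on the leader (assumed form)
        for i in range(n):
            r1, r2, p = rng.random(), rng.random(), rng.random()
            A, C = 2 * a * r1 - a, 2 * r2
            if p < 0.5:                      # encircling prey / random search
                ref = best if abs(A) < 1 else X[rng.integers(n)]
                X[i] = w * ref - A * np.abs(C * ref - X[i])
            else:                            # logarithmic spiral toward the leader
                l = rng.uniform(-1.0, 1.0)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + w * best
            X[i] = np.clip(X[i], lo, hi)
        fit = np.array([f(x) for x in X])
        if fit.min() < f(best):
            best = X[fit.argmin()].copy()
    return best, float(f(best))

best, val = awoa_minimize(lambda x: float(np.sum(x ** 2)), -10.0, 10.0)
```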

  10. Optimized application of systems engineering to nuclear waste repository projects

    International Nuclear Information System (INIS)

    Miskimin, P.A.; Shepard, M.

    1986-01-01

    The purpose of this presentation is to describe a fully optimized application of systems engineering methods and philosophy to the management of a large nuclear waste repository project. Knowledge gained from actual experience with the use of the systems approach on two repository projects is incorporated in the material presented. The projects are currently evaluating the isolation performance of different geologic settings and are in different phases of maturity. Systems engineering methods were applied by the principal author at the Waste Isolation Pilot Plant (WIPP) in the form of a functional analysis. At the Basalt Waste Isolation Project (BWIP), the authors assisted the integrating contractor with the development and application of systems engineering methods. Based on this experience and that acquired from other waste management projects, an optimized plan for applying systems engineering techniques was developed. The plan encompasses the following aspects: project organization, developing and defining requirements, assigning work responsibilities, evaluating system performance, quality assurance, controlling changes, enhancing licensability, optimizing project performance, and addressing regulatory issues. This information is presented in the form of a roadmap for the practical application of systems engineering principles to a nuclear waste repository project

  11. Ergodic optimization in the expanding case concepts, tools and applications

    CERN Document Server

    Garibaldi, Eduardo

    2017-01-01

    This book focuses on the interpretation of ergodic optimal problems as questions of variational dynamics, employing a comparable approach to that of the Aubry-Mather theory for Lagrangian systems. Ergodic optimization is primarily concerned with the study of optimizing probability measures. This work presents and discusses the fundamental concepts of the theory, including the use and relevance of Sub-actions as analogues to subsolutions of the Hamilton-Jacobi equation. Further, it provides evidence for the impressively broad applicability of the tools inspired by the weak KAM theory.

  12. Topology optimization of metallic devices for microwave applications

    DEFF Research Database (Denmark)

    Aage, Niels; Mortensen, Asger; Sigmund, Ole

    2010-01-01

    is the skin depth, which calls for highly refined meshing in order to capture the physics. The skin depth problem has therefore prohibited the application of topology optimization to this class of problems. We present a design parameterization that remedies these numerical issues, by the interpolation

  13. Application of Intervention Mapping to the Development of a Complex Physical Therapist Intervention.

    Science.gov (United States)

    Jones, Taryn M; Dear, Blake F; Hush, Julia M; Titov, Nickolai; Dean, Catherine M

    2016-12-01

    Physical therapist interventions, such as those designed to change physical activity behavior, are often complex and multifaceted. In order to facilitate rigorous evaluation and implementation of these complex interventions into clinical practice, the development process must be comprehensive, systematic, and transparent, with a sound theoretical basis. Intervention Mapping is designed to guide an iterative and problem-focused approach to the development of complex interventions. The purpose of this case report is to demonstrate the application of an Intervention Mapping approach to the development of a complex physical therapist intervention, a remote self-management program aimed at increasing physical activity after acquired brain injury. Intervention Mapping consists of 6 steps to guide the development of complex interventions: (1) needs assessment; (2) identification of outcomes, performance objectives, and change objectives; (3) selection of theory-based intervention methods and practical applications; (4) organization of methods and applications into an intervention program; (5) creation of an implementation plan; and (6) generation of an evaluation plan. The rationale and detailed description of this process are presented using an example of the development of a novel and complex physical therapist intervention, myMoves-a program designed to help individuals with an acquired brain injury to change their physical activity behavior. The Intervention Mapping framework may be useful in the development of complex physical therapist interventions, ensuring the development is comprehensive, systematic, and thorough, with a sound theoretical basis. This process facilitates translation into clinical practice and allows for greater confidence and transparency when the program efficacy is investigated. © 2016 American Physical Therapy Association.

  14. A buffer material optimal design in the radioactive wastes geological disposal using the satisficing trade-off method and the self-organizing map

    International Nuclear Information System (INIS)

    Okamoto, Takashi; Hanaoka, Yuya; Aiyoshi, Eitaro; Kobayashi, Yoko

    2012-01-01

    In this paper, we consider a multi-objective optimization method for obtaining a preferred solution to the buffer material optimal design problem in high-level radioactive waste geological disposal. The buffer material optimal design problem is formulated as a constrained multi-objective optimization problem. Its Pareto optimal solutions are distributed evenly over the whole boundary of the feasible region. Hence, we develop a search method that allows a decision maker to easily find a preferred solution among the Pareto optimal solutions, which are distributed evenly and widely. In the preferred solution search method, a visualization technique for the Pareto optimal solution set using the self-organizing map is introduced into the satisficing trade-off method, an interactive method for obtaining a Pareto optimal solution that satisfies the decision maker. We confirm the effectiveness of the preferred solution search method on the buffer material optimal design problem. (author)

  15. Smart "geomorphological" map browsing - a tale about geomorphological maps and the internet

    Science.gov (United States)

    Geilhausen, M.; Otto, J.-C.

    2012-04-01

    With the digital production of geomorphological maps, the dissemination of research outputs now extends beyond simple paper products. Internet technologies can contribute both to the dissemination of geomorphological maps and to access to geomorphological data, and help make geomorphological knowledge available to a greater public. Indeed, many national geological surveys employ end-to-end digital workflows from data capture in the field to final map production and dissemination. This paper deals with the potential of web mapping applications and interactive, portable georeferenced PDF maps for the distribution of geomorphological information. Web mapping applications such as Google Maps have become very popular and widespread and have increased interest in, and access to, mapping. They link the Internet with GIS technology and are a common way of presenting dynamic maps online. The GIS processing is performed online and maps are visualised in interactive web viewers characterised by different capabilities such as zooming, panning or adding further thematic layers, with the map refreshed after each task. Depending on the system architecture and the components used, advanced symbology, map overlays from different applications and sources, and their integration into a desktop GIS are possible. This interoperability is achieved through the use of international open standards that include mechanisms for the integration and visualisation of information from multiple sources. The portable document format (PDF) is commonly used for printing and is a standard format that can be processed by many graphics programs and printers without loss of information. A GeoPDF enables the sharing of geospatial maps and data in PDF documents. Multiple, independent map frames with individual spatial reference systems are possible within a GeoPDF, for example, for map overlays or insets. Geospatial functionality of a GeoPDF includes scalable map display, layer visibility control, access to attribute

  16. A new automatic SAR-based flood mapping application hosted on the European Space Agency's grid processing on demand fast access to imagery environment

    Science.gov (United States)

    Hostache, Renaud; Chini, Marco; Matgen, Patrick; Giustarini, Laura

    2013-04-01

    There is a clear need for developing innovative processing chains based on earth observation (EO) data to generate products supporting emergency response and flood management at a global scale. Here an automatic flood mapping application is introduced. The latter is currently hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to deliver flooded areas using both recent and historical acquisitions of SAR data in an operational framework. It is worth mentioning that the method can be applied to both medium and high resolution SAR images. The flood mapping application consists of two main blocks: 1) a set of query tools for selecting the "crisis image" and the optimal corresponding pre-flood "reference image" from the G-POD archive; 2) an algorithm for extracting flooded areas using the previously selected "crisis image" and "reference image". The proposed method is a hybrid methodology combining histogram thresholding, region growing and change detection, an approach enabling the automatic, objective and reliable extraction of flood extent from SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. Change detection with respect to a pre-flood reference image helps reduce over-detection of inundated areas. The algorithms are computationally efficient and operate with minimum data requirements, considering as input data a flood image and a reference image. Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate pre-flood reference image. Potential users will also be able to apply the implemented flood delineation algorithm. Case studies of several recent high magnitude flooding events (e.g. July 2007 Severn River flood
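
    The hybrid scheme described above (thresholding, region growing, and change detection against a pre-flood reference) can be illustrated on a toy backscatter grid. The threshold values and the 4-neighbour growing rule below are assumptions for illustration, not the calibrated statistical distribution the actual application fits:

```python
import numpy as np

def flood_map(crisis, reference, t_water=-15.0, t_grow=-12.0):
    """Toy version of the hybrid scheme (backscatter values in dB).

    1. Thresholding: pixels darker than t_water in the crisis image
       become water seeds.
    2. Region growing: seeds expand into 4-connected neighbours that
       are still darker than the more tolerant t_grow.
    3. Change detection: pixels already dark in the pre-flood
       reference image are discarded as permanent water.
    """
    seeds = crisis < t_water
    grown = seeds.copy()
    changed = True
    while changed:                      # iterative 4-neighbour growing
        neigh = np.zeros_like(grown)
        neigh[1:, :] |= grown[:-1, :]
        neigh[:-1, :] |= grown[1:, :]
        neigh[:, 1:] |= grown[:, :-1]
        neigh[:, :-1] |= grown[:, 1:]
        new = grown | (neigh & (crisis < t_grow))
        changed = bool((new != grown).any())
        grown = new
    return grown & ~(reference < t_water)   # drop permanent water

# tiny synthetic scene: a dark flooded patch plus a permanent lake
crisis = np.full((6, 6), -8.0)
crisis[1:4, 1:4] = -16.0       # flood (dark only in the crisis image)
crisis[4:, 4:] = -16.0         # lake (dark in both images)
reference = np.full((6, 6), -8.0)
reference[4:, 4:] = -16.0
print(flood_map(crisis, reference).sum())   # 9 flooded pixels
```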

  17. Landslide Susceptibility Mapping Based on Selected Optimal Combination of Landslide Predisposing Factors in a Large Catchment

    Directory of Open Access Journals (Sweden)

    Qianqian Wang

    2015-12-01

    Full Text Available Landslides are usually initiated under complex geological conditions. It is of great significance to find the optimal combination of predisposing factors and create an accurate landslide susceptibility map based on them. In this paper, the Information Value Model was modified into the Modified Information Value (MIV) Model, and together with GIS (Geographical Information System) and the AUC (Area Under the Receiver Operating Characteristic Curve) test, 32 factor combinations were evaluated separately. The combination comprising Slope, Lithology, Drainage network, Annual precipitation, Faults, Road and Vegetation was selected as the optimal combination group, with an accuracy of 95.0%. Based on this group, a landslide susceptibility zonation map was drawn, in which the study area was reclassified into five classes, presenting an accurate description of the different levels of landslide susceptibility, with 79.41% and 13.67% of the validating field survey landslides falling in the Very High and High zones, respectively, mainly distributed in the south and southeast of the catchment. The results showed that the MIV model can tackle the problem of “no data in subclass” well, generate the true information value and show the real running trend, performing well in describing the relationship between predisposing factors and landslide occurrence, and can be used for preliminary landslide susceptibility assessment in the study area.
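
    The Information Value model underlying this record assigns each factor subclass the log-ratio of its landslide density to the overall landslide density. A minimal sketch follows, in which a simple additive smoothing term stands in for the paper's (unspecified here) MIV fix for empty subclasses — the smoothing is an assumption, not the authors' formula:

```python
import math

def information_value(landslide_cells, class_cells, total_landslide, total_cells, eps=1.0):
    """Information value of one factor subclass.

    IV = ln( landslide density in subclass / overall landslide density ).
    Positive IV means the subclass is more landslide-prone than average.
    `eps` is an assumed additive smoothing so empty subclasses do not
    produce ln(0).
    """
    dens_class = (landslide_cells + eps) / (class_cells + eps)
    dens_total = total_landslide / total_cells
    return math.log(dens_class / dens_total)

# subclass with above-average landslide density -> positive IV
print(information_value(50, 1000, 200, 100000) > 0)   # True
# empty subclass: negative IV instead of a math domain error
print(information_value(0, 1000, 200, 100000) < 0)    # True
```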

  18. An Automated DAKOTA and VULCAN-CFD Framework with Application to Supersonic Facility Nozzle Flowpath Optimization

    Science.gov (United States)

    Axdahl, Erik L.

    2015-01-01

    Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.

  19. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    Science.gov (United States)

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
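
    The flavour of such projection-type recurrent dynamics can be sketched on a toy problem: Euler-discretized dynamics dx/dt = -x + P(x - a*grad f(x)) applied to a one-dimensional pseudoconvex (linear-fractional) objective over a box. The objective, step sizes, and box below are assumptions for illustration, not the paper's network or design parameters:

```python
def grad_f(x):
    # f(x) = (2x + 3) / (x + 2): a linear-fractional, hence pseudoconvex,
    # objective on [0, 4]; f'(x) = 1 / (x + 2)^2 > 0, so the constrained
    # minimizer is x* = 0.
    return 1.0 / (x + 2.0) ** 2

def project(x, lo=0.0, hi=4.0):
    """Projection onto the box constraint [lo, hi]."""
    return min(hi, max(lo, x))

# Euler discretization of the projection dynamics dx/dt = -x + P(x - a*grad)
x, a, dt = 3.0, 1.0, 0.1
for _ in range(2000):
    x = x + dt * (-x + project(x - a * grad_f(x)))
print(abs(x) < 1e-2)   # True: the state converges to the minimizer x* = 0
```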

  20. A Study on Remote Probing Method for Drawing Ecology/Nature Map and the Application (III) - Drawing the Swamp Classification Map around River

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Seong Woo; Cho, Jeong Keon; Jeong, Hwi Chol [Korea Environment Institute, Seoul (Korea)

    2000-12-01

    The map of ecology/nature in the amended Natural Environment Conservation Act is essential data, drawn by assessing the national land with ecological factors, for executing Korea's environmental policy. Such an important ecology/nature map should be continuously revised and its reliability improved by adding several new factors. From this point of view, this study is significant in presenting an improvement scheme for the ecology/nature map. 'A Study on Remote Probing Method for Drawing Ecology/Nature Map and the Application', performed over 3 years since 1998, has researched drawing methods for subject maps that could be built in a short time - a land-covering classification map, a vegetation classification map, and a swamp classification map around rivers - and the promoting principles hereafter. This study also presented the possibility and limits of classification by several satellite image data, so it would be a big help in building the subject maps at the Government level. The land-covering classification map, a result of the first year, is already being built by the Ministry of Environment as a national project, and the improvement scheme for the vegetation map presented as a result of the second year has been used in building the basic ecology/nature map. We hope that the results from this study will be applied as basic data to draw an ecology/nature map and contribute to expanding understanding of the usefulness of the several ecosystem analysis methods applying an ecology/nature map and a remote probe. 55 refs., 38 figs., 24 tabs.

  1. Optimal control for mathematical models of cancer therapies an application of geometric methods

    CERN Document Server

    Schättler, Heinz

    2015-01-01

    This book presents applications of geometric optimal control to real life biomedical problems with an emphasis on cancer treatments. A number of mathematical models for both classical and novel cancer treatments are presented as optimal control problems with the goal of constructing optimal protocols. The power of geometric methods is illustrated with fully worked out complete global solutions to these mathematically challenging problems. Elaborate constructions of optimal controls and corresponding system responses provide great examples of applications of the tools of geometric optimal control and the outcomes aid the design of simpler, practically realizable suboptimal protocols. The book blends mathematical rigor with practically important topics in an easily readable tutorial style. Graduate students and researchers in science and engineering, particularly biomathematics and more mathematical aspects of biomedical engineering, would find this book particularly useful.

  2. Innovative method for optimizing Side-Scan Sonar mapping: The blind band unveiled

    Science.gov (United States)

    Pergent, Gérard; Monnier, Briac; Clabaut, Philippe; Gascon, Gilles; Pergent-Martini, Christine; Valette-Sansevin, Audrey

    2017-07-01

    Over the past few years, the mapping of Mediterranean marine habitats has become a priority for scientists, environment managers and stakeholders, in particular in order to comply with European directives (Water Framework Directive and Marine Strategy Framework Directive) and to implement legislation to ensure their conservation. Side-scan sonar (SSS) is recognised as one of the most effective tools for underwater mapping. However, interpretation of acoustic data (sonograms) requires extensive field calibration, and the ground-truthing process remains essential. Several techniques are commonly used, with sampling methods involving grabs, scuba diving observations or Remotely Operated Vehicle (ROV) underwater video recordings. All these techniques are time consuming, expensive and only provide sporadic information. In the present study, the possibility of coupling a camera with an SSS and acquiring underwater videos in a continuous way was tested. During the 'PosidCorse' oceanographic survey carried out along the eastern coast of Corsica, optical and acoustic data were respectively obtained using a GoPro™ camera and a Klein 3000™ SSS. Five profiles were performed between 10 and 50 m depth, corresponding to more than 20 km of data acquisition. The vertical images recorded with the camera fixed under the SSS and positioned facing downwards provided photo mosaics of very good quality corresponding to the sonograms' entire blind band. From the photo mosaics, 94% of the different bottom types and main habitats were identified; specific structures linked to hydrodynamic conditions, anthropic and biological activities were also observed, as well as the substrate on which the Posidonia oceanica meadow grows. The association between acoustic data and underwater videos has proved to be a non-destructive and cost-effective method for ground-truthing in marine habitat mapping. Nevertheless, in order to optimize the results over the next surveys

  3. Discrimination of mixed quantum states. Reversible maps and unambiguous strategies

    Energy Technology Data Exchange (ETDEWEB)

    Kleinmann, Matthias

    2008-06-30

    commutators and allows an explicit construction of the (2 x 2)-dimensional blocks. As an important application of unambiguous state discrimination, unambiguous state comparison, i.e., the question whether two states are identical or not, is generalized and optimal measurements for this problem are constructed. If, for a certain family of states, a physical device maps the input state to an output state such that a second device can be built that yields back the original input state, such a map is called reversible on this family. With respect to state discrimination, such reversible maps are particularly interesting if the output states are pure. A complete characterization of all families that allow such a reversible and purifying map is provided. If the states are mapped to pure states, but the map itself is not reversible, upper and lower bounds are analyzed for the "deviation from perfect faithfulness", a quantity which measures the deviation from a reversible mapping. (orig.)

  4. Computer determination of event maps with application to auxiliary supply systems

    International Nuclear Information System (INIS)

    Wredenberg, L.; Billinton, R.

    1975-01-01

    A method of evaluating the reliability of sequential operations in systems containing standby and alternate supply facilities is presented. The method is based upon the use of a digital computer for automatic development of event maps. The technique is illustrated by application to a nuclear power plant auxiliary supply system. (author)

  5. APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. Belavendram

    2010-12-01

    Full Text Available Parameter optimization can be achieved by many methods, such as Monte-Carlo methods and full and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In a GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a ‘genetic chromosome’. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
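
    The GA mechanism described above (a pool of candidate solutions evolving through selection, crossover, and mutation) can be sketched with a minimal real-coded GA. The operators and parameters below are generic choices, not the paper's exact scheme, and a simple sphere function stands in for the Wheatstone bridge equation:

```python
import random

def ga_minimize(f, dim, bounds, pop_size=40, gens=60,
                crossover_p=0.9, mutation_p=0.1, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and 2-individual elitism. An illustrative
    sketch, not the paper's scheme."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=f)
        nxt = scored[:2]                      # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = (min(rng.sample(scored, 3), key=f) for _ in range(2))
            child = list(a)
            if rng.random() < crossover_p:    # blend crossover
                child = [x + rng.random() * (y - x) for x, y in zip(a, b)]
            if rng.random() < mutation_p:     # Gaussian mutation, clipped to bounds
                i = rng.randrange(dim)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

# sphere function: optimum at the origin
best = ga_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
print(sum(v * v for v in best))   # best objective value found
```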

  6. Development and Integration of Genome-Wide Polymorphic Microsatellite Markers onto a Reference Linkage Map for Constructing a High-Density Genetic Map of Chickpea.

    Directory of Open Access Journals (Sweden)

    Yash Paul Khajuria

    highest number of new sequence-based robust microsatellite markers (634), which is an advance over previously documented (~300 markers) inter-specific genetic maps. This advanced high-density map will serve as a foundation for large-scale marker validation and genotyping applications, including identification and targeted mapping of trait-specific genes/QTLs (quantitative trait loci) with sub-optimal use of resources and labour in chickpea.

  7. A generalized Jensen type mapping and its applications

    Directory of Open Access Journals (Sweden)

    Ali Ebadian

    2015-02-01

    Full Text Available Let $X$ and $Y$ be vector spaces. It is shown that a mapping $f : X \rightarrow Y$ satisfies the functional equation $$(2d+1)\, f\Big(\frac{\sum_{j=1}^{2d+1} (-1)^{j+1} x_j}{2d+1}\Big) = \sum_{j=1}^{2d+1} (-1)^{j+1} f(x_j) \qquad (0.1)$$ if and only if the mapping $f : X \rightarrow Y$ is additive, and prove the Cauchy-Rassias stability of the functional equation (0.1) in Banach modules over a unital $C^*$-algebra, and in Poisson Banach modules over a unital Poisson $C^*$-algebra. Let $A$ and $B$ be unital $C^*$-algebras, Poisson $C^*$-algebras, Poisson $JC^*$-algebras or Lie $JC^*$-algebras. As an application, we show that every almost homomorphism $h : A \rightarrow B$ is a homomorphism when $h((2d+1)^n u y) = h((2d+1)^n u) h(y)$ or $h((2d+1)^n u \circ y) = h((2d+1)^n u) \circ h(y)$ for all unitaries $u \in A$, all $y \in A$, and $n = 0, 1, 2, \cdots$, and that every almost linear almost multiplicative mapping $h : A \rightarrow B$ is a homomorphism when $h((2d+1) x) = (2d+1) h(x)$ for all $x \in A$. Moreover, we prove the Cauchy-Rassias stability of homomorphisms in $C^*$-algebras, Poisson $C^*$-algebras, Poisson $JC^*$-algebras or Lie $JC^*$-algebras, and of Lie $JC^*$-algebra derivations in Lie $JC^*$-algebras.
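
    The "if" direction of the equivalence is easy to check numerically: any additive map on the reals satisfies the alternating Jensen-type equation. The value of d, the sample points, and the particular additive map f below are arbitrary illustrative choices:

```python
# Check that an additive map f(x) = c * x satisfies
#   (2d+1) f( sum_{j=1}^{2d+1} (-1)^{j+1} x_j / (2d+1) )
#     = sum_{j=1}^{2d+1} (-1)^{j+1} f(x_j).
d = 3
f = lambda x: 2.5 * x                          # an additive map on the reals
xs = [0.7, -1.2, 3.0, 0.4, -2.2, 5.1, 0.9]    # 2d+1 = 7 sample points
alt_sum = sum((-1) ** (j + 1) * x for j, x in enumerate(xs, start=1))
lhs = (2 * d + 1) * f(alt_sum / (2 * d + 1))
rhs = sum((-1) ** (j + 1) * f(x) for j, x in enumerate(xs, start=1))
print(abs(lhs - rhs) < 1e-9)   # True
```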

  8. Detecting unstable periodic orbits of nonlinear mappings by a novel quantum-behaved particle swarm optimization non-Lyapunov way

    International Nuclear Information System (INIS)

    Gao Fei; Gao Hongrui; Li Zhuoqiu; Tong Hengqing; Lee, Ju-Jang

    2009-01-01

    It is well known that the set of unstable periodic orbits (UPOs) can be thought of as the skeleton of the dynamics. However, detecting the UPOs of a nonlinear map is one of the most challenging problems of nonlinear science, in both numerical computation and experimental measurement. In this paper, a new method is proposed to detect UPOs in a non-Lyapunov way. Firstly, three special techniques are added to quantum-behaved particle swarm optimization (QPSO): a novel mbest particle, self-adaptive contraction of the search space, and boundary restriction (NCB); the resulting method is called NCB-QPSO. It maintains an effective search mechanism with a fine equilibrium between exploitation and exploration. Secondly, the problem of detecting UPOs is converted, through a proper translation, into the minimization of a non-negative function in a non-Lyapunov way. Thirdly, simulations on 6 benchmark optimization problems and on different high-order UPOs of 5 classic nonlinear maps are carried out with the proposed method. The results show that NCB-QPSO is a successful method for detecting UPOs, with the advantages of fast convergence, high precision and robustness.
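
    The non-Lyapunov conversion described above turns UPO detection into minimizing the non-negative residual |F^p(x) - x|, whose zeros are the period-p points. In the sketch below, a crude dense search stands in for NCB-QPSO (any global minimizer of the residual would do), and the logistic map with r = 3.8 is an assumed example:

```python
def logistic(x, r=3.8):
    return r * x * (1.0 - x)

def orbit_residual(x, p, r=3.8):
    """Non-Lyapunov objective |F^p(x) - x|; zeros are period-p points."""
    y = x
    for _ in range(p):
        y = logistic(y, r)
    return abs(y - x)

# Crude stand-in for NCB-QPSO: dense search over [0.1, 0.99], which
# avoids the trivial fixed point at x = 0.
grid = [0.1 + 0.89 * i / 20000 for i in range(20001)]
x_star = min(grid, key=lambda x: orbit_residual(x, 1))
# The p = 1 UPO is the fixed point x* = 1 - 1/r, unstable because
# |f'(x*)| = |r (1 - 2 x*)| = r - 2 = 1.8 > 1 for r = 3.8.
print(abs(x_star - (1.0 - 1.0 / 3.8)) < 1e-3)   # True
```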

  9. Mapping the Stacks: Sustainability and User Experience of Animated Maps in Library Discovery Interfaces

    Science.gov (United States)

    McMillin, Bill; Gibson, Sally; MacDonald, Jean

    2016-01-01

    Animated maps of the library stacks were integrated into the catalog interface at Pratt Institute and into the EBSCO Discovery Service interface at Illinois State University. The mapping feature was developed for optimal automation of the update process to enable a range of library personnel to update maps and call-number ranges. The development…

  10. Application de la théorie des graphes au traitement de la carte géologique Applying the Theory of Graphs to the Treatment of Geological Maps

    Directory of Open Access Journals (Sweden)

    Bouillé F.

    2006-11-01

    Full Text Available (original abstract in French) Obtaining data from geological maps by conventional methods (grids or random curve plotting) does not yield an operational data base. However, representing geological boundaries as a directed graph meets the criteria of optimality (very small bulk, minimum time, reliability) and makes it possible to digitize the map rationally, to structure the file properly and to achieve significant applications: selective graphic output at any scale, calculation of dips, areas and volumes, and correlation analyses. We therefore worked out a geological map processing sequence in which each element (data acquisition, checking, updating, consulting, applications) operates on one or several graphs.

  11. Spectral edge: gradient-preserving spectral mapping for image fusion.

    Science.gov (United States)

    Connah, David; Drew, Mark S; Finlayson, Graham D

    2015-12-01

    This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.
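
    The high-dimensional gradient representation referred to above is the per-pixel (Di Zenzo) structure tensor J^T J of the multi-channel Jacobian. A sketch of computing that tensor follows; the Spectral Edge method itself then maps this tensor onto a 3-channel output under color constraints, which is not reproduced here:

```python
import numpy as np

def structure_tensor(img):
    """Di Zenzo structure tensor of a multi-channel image.

    img: (H, W, N) array. Returns the per-pixel 2x2 tensor J^T J,
    where J stacks the per-channel x/y gradients. Spectral Edge
    constrains the low-dimensional output so that its tensor matches
    this high-dimensional one.
    """
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    t = np.empty(img.shape[:2] + (2, 2))
    t[..., 0, 0] = (gx * gx).sum(axis=-1)
    t[..., 0, 1] = t[..., 1, 0] = (gx * gy).sum(axis=-1)
    t[..., 1, 1] = (gy * gy).sum(axis=-1)
    return t

# a 5-channel horizontal ramp: all gradient energy lies in the x direction
img = np.tile(np.arange(8.0)[None, :, None], (8, 1, 5))
T = structure_tensor(img)
print(T[4, 4])   # xx entry is 5 (5 channels * slope 1 squared), rest 0
```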

  12. Application of terrestrial laser scanning to the development and updating of the base map

    Science.gov (United States)

    Klapa, Przemysław; Mitka, Bartosz

    2017-06-01

    The base map provides basic information about land to individuals, companies, developers, design engineers, organizations, and government agencies. Its contents include spatial location data for control network points, buildings, land lots, infrastructure facilities, and topographic features. As the primary map of the country, it must be developed in accordance with specific laws and regulations and be continuously updated. The base map is a data source used for the development and updating of derivative maps and other large-scale cartographic materials such as thematic or topographic maps. Thanks to the advancement of science and technology, the quality of land surveys carried out by means of terrestrial laser scanning (TLS) matches that of traditional surveying methods in many respects. This paper discusses the potential application of output data from laser scanners (point clouds) to the development and updating of cartographic materials, taking Poland's base map as an example. A few research sites were chosen to present the method and the process of conducting a TLS land survey: a fragment of a residential area, a street, the surroundings of buildings, and an undeveloped area. The entire map that was drawn as a result of the survey was checked by comparing it to a map obtained from PODGiK (pol. Powiatowy Ośrodek Dokumentacji Geodezyjnej i Kartograficznej - Regional Centre for Geodetic and Cartographic Records) and by conducting a field inspection. An accuracy and quality analysis of the conducted fieldwork and deskwork yielded very good results, which provide solid grounds for concluding that cartographic materials based on a TLS point cloud are a reliable source of information about land. The contents of the map created with the obtained point cloud were located in space (x, y, z) very accurately. The conducted accuracy analysis and the inspection of the performed works showed that high quality is characteristic of TLS surveys.

  13. The optimization of concrete mixtures for use in highway applications

    Science.gov (United States)

    Moini, Mohamadreza

    Portland cement concrete is the most used commodity in the world after water. A major part of civil and transportation infrastructure, including bridges, roadway pavements, dams, and buildings, is made of concrete. In addition, concrete durability is often a major concern. In 2013, the American Society of Civil Engineers (ASCE) estimated that an annual investment of $170 billion in roads and $20.5 billion in bridges is needed to substantially improve the condition of the infrastructure. The same report notes that one-third of America's major roads are in poor or mediocre condition [1]. At the same time, portland cement production is responsible for significant carbon dioxide emissions. The proper and systematic design of concrete mixtures for highway applications is therefore essential, as concrete pavements represent up to 60% of interstate highway systems carrying the heaviest traffic loads. The combined principles of materials science and engineering can provide adequate methods and tools to facilitate concrete design and improve the existing specifications. In the same manner, durability must be addressed in the design to enhance long-term performance. Concrete used for highway pavement applications has a low cement content and can be placed at low slump. However, further reduction of the cement content (e.g., relative to current Wisconsin Department of Transportation specifications of 315-338 kg/m3 (530-570 lb/yd3) for mainstream concrete pavements and 335 kg/m3 (565 lb/yd3) for bridge substructures and superstructures) requires careful mixture design to maintain the expected workability, overall performance, and long-term durability in the field. The design includes, but is not limited to, the optimization of aggregates, supplementary cementitious materials (SCMs), and chemical and air-entraining admixtures. This research investigated various theoretical and experimental methods of aggregate optimization applicable to the reduction of cement content

  14. IMRT fluence map editing to control hot and cold spots

    International Nuclear Information System (INIS)

    Taylor Cook, J.; Tobler, Matt; Leavitt, Dennis D.; Watson, Gordon

    2005-01-01

    Manually editing intensity-modulated radiation therapy (IMRT) fluence maps effectively controls hot and cold spots that the IMRT optimization cannot. Often, re-optimizing does not reduce the hot spots or fill the cold spots; in fact, it merely relocates them. Fluence-map editing provides manual control of dose delivery and can yield the best treatment plan possible. Several IMRT treatments were planned using the Varian Eclipse planning system. We compare the effects on dose distributions of fluence-map editing versus re-optimization, discuss techniques for fluence-map editing, and analyze the differences between editing the fluence of one beam vs. multiple beams. When editing a beam's fluence map, it is essential to choose a beam that least affects the dose to the tumor and critical structures. Editing fluence maps gives an advantage in treatment planning and provides controlled delivery of the IMRT dose.

  15. UAV Deployment Exercise for Mapping Purposes: Evaluation of Emergency Response Applications

    Directory of Open Access Journals (Sweden)

    Piero Boccardo

    2015-07-01

    Exploiting the decrease of costs related to UAV technology, the humanitarian community started piloting the use of similar systems in humanitarian crises several years ago in different application fields, i.e., disaster mapping and information gathering, community capacity building, logistics and even transportation of goods. Part of the author’s group, composed of researchers in the field of applied geomatics, has been piloting the use of UAVs since 2006, with a specific focus on disaster management application. In the framework of such activities, a UAV deployment exercise was jointly organized with the Regional Civil Protection authority, mainly aimed at assessing the operational procedures to deploy UAVs for mapping purposes and the usability of the acquired data in an emergency response context. In the paper the technical features of the UAV platforms will be described, comparing the main advantages/disadvantages of fixed-wing versus rotor platforms. The main phases of the adopted operational procedure will be discussed and assessed especially in terms of time required to carry out each step, highlighting potential bottlenecks and in view of the national regulation framework, which is rapidly evolving. Different methodologies for the processing of the acquired data will be described and discussed, evaluating the fitness for emergency response applications.

  16. UAV Deployment Exercise for Mapping Purposes: Evaluation of Emergency Response Applications.

    Science.gov (United States)

    Boccardo, Piero; Chiabrando, Filiberto; Dutto, Furio; Tonolo, Fabio Giulio; Lingua, Andrea

    2015-07-02

    Exploiting the decrease of costs related to UAV technology, the humanitarian community started piloting the use of similar systems in humanitarian crises several years ago in different application fields, i.e., disaster mapping and information gathering, community capacity building, logistics and even transportation of goods. Part of the author's group, composed of researchers in the field of applied geomatics, has been piloting the use of UAVs since 2006, with a specific focus on disaster management application. In the framework of such activities, a UAV deployment exercise was jointly organized with the Regional Civil Protection authority, mainly aimed at assessing the operational procedures to deploy UAVs for mapping purposes and the usability of the acquired data in an emergency response context. In the paper the technical features of the UAV platforms will be described, comparing the main advantages/disadvantages of fixed-wing versus rotor platforms. The main phases of the adopted operational procedure will be discussed and assessed especially in terms of time required to carry out each step, highlighting potential bottlenecks and in view of the national regulation framework, which is rapidly evolving. Different methodologies for the processing of the acquired data will be described and discussed, evaluating the fitness for emergency response applications.

  17. Application of Generalized Mie Theory to EELS Calculations as a Tool for Optimization of Plasmonic Structures

    DEFF Research Database (Denmark)

    Thomas, Stefan; Matyssek, Christian; Hergert, Wolfram

    2015-01-01

    Technical applications of plasmonic nanostructures require a careful structural optimization with respect to the desired functionality. The success of such optimizations strongly depends on the applied method. We extend the generalized multiparticle Mie (GMM) computational electromagnetic method by the application of genetic algorithms combined with a simplex algorithm. The scheme is applied to the design of plasmonic filters.

  18. Optimal control theory applications to management science and economics

    CERN Document Server

    Sethi, Suresh P

    2006-01-01

    Optimal control methods are used to determine the best ways to control a dynamic system. This book applies theoretical work to business management problems developed from the authors' research and classroom instruction. The thoroughly revised new edition has been refined with careful attention to the text and graphic material presentation. Chapters cover a range of topics including finance, production and inventory problems, marketing problems, machine maintenance and replacement, problems of optimal consumption of natural resources, and applications of control theory to economics. The book in

  19. An Improved Fruit Fly Optimization Algorithm and Its Application in Heat Exchange Fouling Ultrasonic Detection

    Directory of Open Access Journals (Sweden)

    Xia Li

    2018-01-01

    Inspired by the basic theory of the Fruit Fly Optimization Algorithm, in this paper a cat map was added to the original algorithm, and the individual distribution and evolution mechanism of the fruit fly population were improved in order to increase search speed and accuracy. A flowchart of the improved algorithm illustrates its procedure. Simulation results on classical test functions show that the improved algorithm optimizes faster and more reliably. The algorithm was then combined with sparse decomposition theory and used to process ultrasonic fouling-detection signals, verifying the validity and practicability of the improved algorithm.
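The chaotic-initialization idea behind this kind of improvement can be illustrated with Arnold's cat map, a standard chaotic 2-D map. The sketch below seeds a fruit-fly-style swarm from a cat-map sequence; the seed values and the 2-D search interval are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch: Arnold's cat map used to seed a fruit-fly-style swarm.
# The map x' = (x + y) mod 1, y' = (x + 2y) mod 1 is chaotic and area-preserving,
# so iterating it spreads initial individuals over the unit square.

def cat_map_sequence(n, x0=0.31, y0=0.47):
    """Generate n chaotic points in [0, 1)^2 with Arnold's cat map."""
    pts, x, y = [], x0, y0
    for _ in range(n):
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
        pts.append((x, y))
    return pts

def init_swarm(pop_size, lo, hi):
    """Map the chaotic points onto a 2-D search interval [lo, hi]^2."""
    return [(lo + px * (hi - lo), lo + py * (hi - lo))
            for px, py in cat_map_sequence(pop_size)]

swarm = init_swarm(30, -5.0, 5.0)
```

Compared with uniform random initialization, the chaotic sequence is deterministic and reproducible while still covering the search space, which is the property the improved algorithm exploits.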

  20. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways must account for the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
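At its core, atom mapping is a minimum-cost bijection problem. The toy sketch below makes that concrete by brute force over all bijections; it is not the authors' MWED MILP formulation, and the 3x3 dissimilarity matrix is invented for illustration (real methods scale via MILP solvers, not enumeration).

```python
from itertools import permutations

# Toy illustration: atom mapping as a minimum-cost bijection between reactant
# and product atoms. cost[i][j] is a hypothetical dissimilarity between the
# bonding environment of reactant atom i and product atom j.

def best_mapping(cost):
    """Exhaustively find the bijection (tuple of product indices, one per
    reactant atom) of minimum total cost; feasible only for tiny atom counts."""
    n = len(cost)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best, best_cost = perm, c
    return best, best_cost

cost = [[0, 2, 3],
        [2, 0, 4],
        [3, 4, 1]]
mapping, total = best_mapping(cost)  # → ((0, 1, 2), 1)
```

The factorial search space is exactly why the paper formulates the problem as a MILP instead.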

  1. The application of water poverty mapping in water management

    Directory of Open Access Journals (Sweden)

    Charles van der Vyver

    2012-07-01

    Water management has been carried out for many centuries wherever there has been a need to provide water to large numbers of people. Complex social norms have developed around water management, and competing users have established political (governance) and economic cooperative relationships. For example, community-managed irrigation schemes in Bali and the cloud-collection canals built by the Incas at Inca Pirca in Peru are water management systems which still supply water to people (Sullivan et al., 2005). Water resources will steadily decline because of population growth, pollution and expected climate change (Hemson et al., 2008). It has been estimated that the global demand for water doubles approximately every two decades (Meyer, 2007) and that water will even become as expensive as oil in the future (Holland, 2005). “In the year 2000, global water use was twice as high as it was in 1960” (Clarke and King, 2004:19). Unfortunately, this trend is expected to continue. The aim of this paper is to describe how water poverty mapping as a process can be used to assist the management of our already scarce water resources. It constructs a water poverty map, after which it describes its application at various management levels. The research indicates that the mapping process can be used to obtain more accurate predictions, as well as to form part of the master plan and integrated development plan documents. Keywords: Water management, water poverty mapping. Disciplines: Water management, geographical information systems (GIS), poverty studies, decision support

  2. Application of Particle Swarm Optimization Algorithm for Optimizing ANN Model in Recognizing Ripeness of Citrus

    Science.gov (United States)

    Diyana Rosli, Anis; Adenan, Nur Sabrina; Hashim, Hadzli; Ezan Abdullah, Noor; Sulaiman, Suhaimi; Baharudin, Rohaiza

    2018-03-01

    This paper presents findings on the application of the Particle Swarm Optimization (PSO) algorithm to optimizing an Artificial Neural Network (ANN) that categorizes the ripe and unripe stages of citrus suhuensis. The algorithm adjusts the network connection weights, adapting their values during training for the best results at the output. Initially, the skin of the citrus suhuensis fruit is measured using an optically non-destructive method via a spectrometer. The spectrometer transmits visible-spectrum (VIS) photonic light radiation to the surface (skin) of the sample. The light reflected from the sample’s surface is received and measured by the same spectrometer as a reflectance percentage over the VIS range. These measured data are used to train and test the best optimized ANN model. Accuracy is assessed using receiver operating characteristic (ROC) performance. The outcomes of this investigation show that the accuracy achieved by the optimized model is 70.5%, with a sensitivity and specificity of 60.1% and 80.0%, respectively.
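The weight-adaptation loop described above is standard PSO. The sketch below shows the canonical update equations on a stand-in quadratic loss; the inertia and acceleration coefficients are common textbook defaults, and the surrogate loss replaces the actual ANN classification error used in the paper.

```python
import random

# Minimal PSO sketch: each particle encodes a candidate weight vector, and the
# velocity update blends inertia, attraction to the particle's own best (pbest)
# and attraction to the swarm's best (gbest). The loss here is a stand-in
# sphere function, not a real ANN training error.

def loss(w):
    return sum(x * x for x in w)  # surrogate for network training error

def pso(dim=4, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    random.seed(1)
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=loss)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if loss(pos[i]) < loss(pbest[i]):
                pbest[i] = pos[i][:]
                if loss(pbest[i]) < loss(gbest):
                    gbest = pbest[i][:]
    return gbest
```

In the paper's setting, `loss` would evaluate the ANN's classification error on the spectrometer training data for a given weight vector.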

  3. Optimization theory with applications

    CERN Document Server

    Pierre, Donald A

    1987-01-01

    Optimization principles are of undisputed importance in modern design and system operation. They can be used for many purposes: optimal design of systems, optimal operation of systems, determination of performance limitations of systems, or simply the solution of sets of equations. While most books on optimization are limited to essentially one approach, this volume offers a broad spectrum of approaches, with emphasis on basic techniques from both classical and modern work.After an introductory chapter introducing those system concepts that prevail throughout optimization problems of all typ

  4. On the applicability of numerical image mapping for PIV image analysis near curved interfaces

    International Nuclear Information System (INIS)

    Masullo, Alessandro; Theunissen, Raf

    2017-01-01

    This paper scrutinises the general suitability of image mapping for particle image velocimetry (PIV) applications. Image mapping can improve PIV measurement accuracy by eliminating overlap between the PIV interrogation windows and an interface, as illustrated by some examples in the literature. Image mapping transforms the PIV images using a curvilinear interface-fitted mesh prior to performing the PIV cross correlation. However, degrading effects due to particle image deformation and the Jacobian transformation inherent in the mapping along curvilinear grid lines have never been deeply investigated. Here, the implementation of image mapping from mesh generation to image resampling is presented in detail, and related error sources are analysed. Systematic comparison with standard PIV approaches shows that image mapping is effective only in a very limited set of flow conditions and geometries, and depends strongly on a priori knowledge of the boundary shape and streamlines. In particular, with strongly curved geometries or streamlines that are not parallel to the interface, the image-mapping approach is easily outperformed by more traditional image analysis methodologies invoking suitable spatial relocation of the obtained displacement vector. (paper)

  5. UAV MULTISPECTRAL SURVEY TO MAP SOIL AND CROP FOR PRECISION FARMING APPLICATIONS

    Directory of Open Access Journals (Sweden)

    G. Sona

    2016-06-01

    New sensors mounted on UAVs and optimal procedures for survey, data acquisition and analysis are continuously developed and tested for applications in precision farming. Procedures to integrate multispectral aerial data on soil and crop with ground-based proximal geophysical data are a recent research topic aimed at delineating homogeneous zones for the management of agricultural inputs (i.e., water, nutrients). Multispectral and multitemporal orthomosaics were produced over a test field (a 100 m x 200 m plot within a maize field) to map vegetation and soil indices, as well as crop heights, with suitable ground resolution. UAV flights were performed at two moments during the crop season: before sowing, on bare soil, and just before flowering, when the maize was nearly at its maximum height. Two cameras, for colour (RGB) and false-colour (NIR-RG) images, were used. The images were processed in Agisoft Photoscan to produce Digital Surface Models (DSM) of the bare soil and crop, and multispectral orthophotos. To overcome some difficulties in the automatic search for matching points in the block adjustment of the crop images, the scientific software developed by Politecnico di Milano was also used to enhance image orientation. The surveys and image processing are described, as well as results of the classification of the multispectral-multitemporal orthophotos and soil indices.

  6. Economic performances optimization of the transcritical Rankine cycle systems in geothermal application

    International Nuclear Information System (INIS)

    Yang, Min-Hsiung; Yeh, Rong-Hua

    2015-01-01

    Highlights: • The optimal economic performances of the TRC system are investigated. • In economic evaluations, R125 performs the most satisfactorily, followed by R41 and CO2. • The TRC system with CO2 has the largest averaged temperature difference. • Economically optimized pressures are always lower than thermodynamically optimized operating pressures. - Abstract: The aim of this study is to investigate the economic optimization of a TRC system for the application of geothermal energy. An economic parameter, the net power output index, defined as the ratio of net power output to total cost, is applied to optimize the TRC system using CO2, R41 and R125 as working fluids. The maximum net power output index and the corresponding optimal operating pressures are obtained and evaluated for the TRC system. Furthermore, analyses of the corresponding averaged temperature differences in the heat exchangers at the optimal economic performance of the TRC system are carried out. The effects of geothermal temperatures on the thermodynamic and economic optimizations are also revealed. In both optimal economic and thermodynamic evaluations, R125 performs the most satisfactorily, followed by R41 and CO2 in the TRC system. In addition, the TRC system operated with CO2 has the largest averaged temperature difference in the heat exchangers and thus has potential for future application to lower-temperature heat resources. The optimal working pressures obtained from economic optimization are always lower than those from thermodynamic optimization for CO2, R41, and R125 in the TRC system.

  7. A GIS application for assessing, mapping, and quantifying the social values of ecosystem services

    Science.gov (United States)

    Sherrouse, Benson C.; Clement, Jessica M.; Semmens, Darius J.

    2011-01-01

    As human pressures on ecosystems continue to increase, research involving the effective incorporation of social values information into the context of comprehensive ecosystem services assessments is becoming more important. Including quantified, spatially explicit social value metrics in such assessments will improve the analysis of relative tradeoffs among ecosystem services. This paper describes a GIS application, Social Values for Ecosystem Services (SolVES), developed to assess, map, and quantify the perceived social values of ecosystem services by deriving a non-monetary Value Index from responses to a public attitude and preference survey. SolVES calculates and maps the Value Index for social values held by various survey subgroups, as distinguished by their attitudes regarding ecosystem use. Index values can be compared within and among survey subgroups to explore the effect of social contexts on the valuation of ecosystem services. Index values can also be correlated and regressed against landscape metrics SolVES calculates from various environmental data layers. Coefficients derived through these analyses were applied to their corresponding data layers to generate a predicted social value map. This map compared favorably with other SolVES output and led to the addition of a predictive mapping function to SolVES for value transfer to areas where survey data are unavailable. A more robust application is being developed as a public domain tool for decision makers and researchers to map social values of ecosystem services and to facilitate discussions among diverse stakeholders involving relative tradeoffs among different ecosystem services in a variety of physical and social contexts.

  8. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled in a 42-ha area, at 206 geo-referenced points arranged in a regular grid spaced 50 m apart, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configurations before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the conception of additional sampling schemes proved very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
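The spatial-simulated-annealing step can be sketched as follows. Candidate extra sampling points are randomly perturbed, and moves that reduce the objective (or, occasionally, worsen it, per the annealing acceptance rule) are kept. Here the objective is the mean distance from a grid of prediction locations to the nearest sample, a crude stand-in for kriging variance; the field extent, cooling schedule and perturbation size are illustrative assumptions, not the authors' settings.

```python
import math
import random

# Sketch of spatial simulated annealing (SSA) for placing extra sample points.
# Objective: mean distance from each prediction-grid node to its nearest
# sample, used as a simple proxy for average kriging variance.

def mean_nearest_dist(samples, grid):
    """Average distance from each grid node to its nearest sample point."""
    return sum(min(math.dist(g, s) for s in samples) for g in grid) / len(grid)

def ssa_add_points(existing, n_new, extent=100.0, iters=500, temp=1.0, cool=0.995):
    random.seed(0)
    grid = [(float(x), float(y))
            for x in range(0, int(extent), 10) for y in range(0, int(extent), 10)]
    new = [(random.uniform(0, extent), random.uniform(0, extent)) for _ in range(n_new)]
    score = mean_nearest_dist(existing + new, grid)
    best, best_score = new[:], score
    for _ in range(iters):
        i = random.randrange(n_new)                     # perturb one candidate point
        cand = new[:]
        px, py = cand[i]
        cand[i] = (min(extent, max(0.0, px + random.gauss(0, 5))),
                   min(extent, max(0.0, py + random.gauss(0, 5))))
        s = mean_nearest_dist(existing + cand, grid)
        if s < score or random.random() < math.exp((score - s) / temp):
            new, score = cand, s                        # annealing acceptance rule
            if s < best_score:
                best, best_score = cand[:], s
        temp *= cool                                    # geometric cooling
    return best, best_score

extra, score = ssa_add_points([(25.0, 25.0), (75.0, 75.0)], 3)
```

A real SSA implementation would evaluate the kriging variance from the fitted variogram rather than this distance proxy, but the accept/perturb/cool loop is the same.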

  9. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for the design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide the location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function, which was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and the imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, the imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of the imputation error rate was propagated to genomic prediction in an Angus
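The locus-averaged Shannon entropy (LASE) component of the objective can be sketched directly: each SNP's entropy is computed from its genotype frequencies and then averaged over loci. The 0/1/2 genotype coding and the small example matrix below are illustrative assumptions, not data from the paper.

```python
import math

# Sketch of locus-averaged Shannon entropy (LASE) as an informativeness score
# for a candidate SNP subset. Higher entropy = more even genotype frequencies
# = more information carried by the SNP.

def locus_entropy(genotypes):
    """Shannon entropy (bits) of one SNP's genotype frequencies."""
    n = len(genotypes)
    counts = {}
    for g in genotypes:
        counts[g] = counts.get(g, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def lase(snp_matrix):
    """Locus-averaged Shannon entropy of a SNP set (one row per SNP)."""
    return sum(locus_entropy(row) for row in snp_matrix) / len(snp_matrix)

snps = [[0, 1, 2, 1, 0, 1],   # informative SNP: all three genotypes present
        [0, 0, 0, 0, 0, 1]]   # nearly monomorphic SNP: low entropy
score = lase(snps)
```

The MOLO algorithm maximizes a score of this kind jointly with map-length and spacing criteria, which is what makes the design multi-objective.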

  10. Experimental reversion of the optimal quantum cloning and flipping processes

    International Nuclear Information System (INIS)

    Sciarrino, Fabio; Secondi, Veronica; De Martini, Francesco

    2006-01-01

    The quantum cloner machine maps an unknown arbitrary input qubit into two optimal clones and one optimal flipped qubit. By combining linear and nonlinear optical methods we experimentally implement a scheme that, after the cloning transformation, restores the original input qubit in one of the output channels, by using local measurements, classical communication, and feedforward. This nonlocal method demonstrates how the information on the input qubit can be restored after the cloning process. The realization of the reversion process is expected to find useful applications in the field of modern multipartite quantum cryptography

  11. Quantitative Trait Loci Mapping Problem: An Extinction-Based Multi-Objective Evolutionary Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Nicholas S. Flann

    2013-09-01

    The Quantitative Trait Loci (QTL) mapping problem aims to identify regions in the genome that are linked to phenotypic features of the developed organism that vary in degree. It is a principal step in determining targets for further genetic analysis and is key in decoding the role of specific genes that control quantitative traits within species. Applications include identifying genetic causes of disease, optimization of cross-breeding for desired traits, and understanding trait diversity in populations. In this paper a new multi-objective evolutionary algorithm (MOEA) method is introduced and is shown to increase the accuracy of QTL mapping identification for both independent and epistatic loci interactions. The MOEA method optimizes over the space of possible partial least squares (PLS) regression QTL models and considers the conflicting objectives of model simplicity versus model accuracy. By optimizing for minimal model complexity, MOEA has the advantage of solving the over-fitting problem of conventional PLS models. The effectiveness of the method is confirmed by comparing the new method with Bayesian Interval Mapping approaches over a series of test cases where the optimal solutions are known. This approach can be applied to many problems that arise in the analysis of genomic data sets where the number of features far exceeds the number of observations and where features can be highly correlated.
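The "simplicity versus accuracy" trade-off at the heart of a MOEA is resolved by Pareto dominance: a candidate model survives if no other model is at least as good on both objectives and strictly better on one. The sketch below shows that selection step; the (number of loci, prediction error) scores are invented for illustration, not results from the paper.

```python
# Sketch of the Pareto-selection step of a multi-objective evolutionary
# algorithm: candidate QTL models are scored on two conflicting objectives,
# model complexity (number of loci) and prediction error, and only the
# non-dominated set is kept for the next generation.

def dominates(a, b):
    """True if model a is no worse than b on every objective and strictly
    better on at least one (objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(models):
    """models: list of (complexity, error) tuples; return the non-dominated ones."""
    return [m for m in models
            if not any(dominates(o, m) for o in models if o != m)]

# (num_loci, cross-validation error) for hypothetical candidate QTL models
candidates = [(2, 0.30), (3, 0.20), (5, 0.21), (4, 0.15), (6, 0.14), (3, 0.35)]
front = pareto_front(candidates)  # → [(2, 0.30), (3, 0.20), (4, 0.15), (6, 0.14)]
```

Keeping the whole front, rather than a single best model, is what lets the method avoid the over-fitting of a purely accuracy-driven PLS fit.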

  12. A novel chaotic particle swarm optimization approach using Henon map and implicit filtering local search for economic load dispatch

    International Nuclear Information System (INIS)

    Coelho, Leandro dos Santos; Mariani, Viviana Cocco

    2009-01-01

    Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm driven by the simulation of a social psychological metaphor instead of the survival of the fittest individual. Based on chaotic systems theory, this paper proposes a novel chaotic PSO combined with an implicit filtering (IF) local search method to solve economic dispatch problems. Since chaotic maps exhibit determinism, ergodicity and stochastic-like properties, the proposed PSO introduces chaotic mapping using Henon map sequences, which increases its convergence rate and resulting precision. The chaotic PSO approach is used to produce good potential solutions, and the IF is used to fine-tune the final PSO solution. The hybrid methodology is validated on a test system consisting of 13 thermal units whose incremental fuel cost function takes into account valve-point loading effects. Simulation results are promising and show the effectiveness of the proposed approach.
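The Henon-map ingredient can be sketched directly: iterating x_{k+1} = 1 - a*x_k^2 + y_k, y_{k+1} = b*x_k with the classical parameters a = 1.4, b = 0.3 yields a deterministic but ergodic sequence that can be rescaled to seed or perturb PSO particles. The initial condition, transient length, and min-max rescaling below are common illustrative choices, not necessarily the paper's.

```python
# Sketch of Henon-map-based chaotic number generation for PSO initialization.
# The classical Henon map (a=1.4, b=0.3) produces a bounded chaotic sequence.

def henon_sequence(n, a=1.4, b=0.3, x0=0.1, y0=0.1, discard=100):
    xs, x, y = [], x0, y0
    for k in range(n + discard):
        x, y = 1.0 - a * x * x + y, b * x   # both updates use the old x
        if k >= discard:                    # drop the initial transient
            xs.append(x)
    return xs

def chaotic_positions(n, lo, hi):
    """Min-max rescale the Henon x-coordinates onto the search interval [lo, hi]."""
    xs = henon_sequence(n)
    mn, mx = min(xs), max(xs)
    return [lo + (v - mn) / (mx - mn) * (hi - lo) for v in xs]
```

These rescaled values replace the uniform random draws in PSO's initialization (and, in some variants, in the velocity update), giving the swarm the ergodic coverage the abstract refers to.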

  13. Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty

    Science.gov (United States)

    Woo, G.

    2005-12-01

    Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. 
Especially where a

  14. Semilinear Kolmogorov Equations and Applications to Stochastic Optimal Control

    International Nuclear Information System (INIS)

    Masiero, Federica

    2005-01-01

    Semilinear parabolic differential equations are solved in a mild sense in an infinite-dimensional Hilbert space. Applications to stochastic optimal control problems are studied by solving the associated Hamilton-Jacobi-Bellman equation. These results are applied to some controlled stochastic partial differential equations

  15. Final Report of Optimization Algorithms for Hierarchical Problems, with Applications to Nanoporous Materials

    Energy Technology Data Exchange (ETDEWEB)

    Nash, Stephen G.

    2013-11-11

    The research focuses on the modeling and optimization of nanoporous materials. In the hierarchical systems we consider, the physics changes as the scale of the problem is reduced, and it can be important to account for physics at the fine level to obtain accurate approximations at coarser levels. For example, nanoporous materials hold promise for energy production and storage. A significant issue is the fabrication of channels within these materials to allow rapid diffusion through the material. One goal of our research is to apply optimization methods to the design of nanoporous materials. Such problems are large and challenging, with hierarchical structure that we believe can be exploited, and with a large range of important scales, down to atomistic. This requires research on large-scale optimization for systems that exhibit different physics at different scales, and the development of algorithms applicable to designing nanoporous materials for many important applications in energy production, storage, distribution, and use. Our research had two major thrusts. The first was hierarchical modeling: we developed and studied hierarchical optimization models for nanoporous materials. The models have hierarchical structure and attempt to balance the conflicting aims of model fidelity and computational tractability. In addition, we analyzed the general hierarchical model, as well as the specific application models, to determine their properties, particularly those relevant to the hierarchical optimization algorithms. The second thrust was to develop, analyze, and implement a class of hierarchical optimization algorithms and apply them to the hierarchical models we developed. We adapted and extended the optimization-based multigrid algorithms of Lewis and Nash to the optimization models exemplified by the hierarchical optimization model. This class of multigrid algorithms has been shown to be a powerful tool for

  16. Method for the visualization of landform by mapping using low altitude UAV application

    Science.gov (United States)

    Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William

    2018-05-01

    Unmanned Aerial Vehicles (UAVs) and digital photogrammetry are evolving rapidly as mapping technologies, and the significance of and need for digital landform mapping have grown with them. In this study, a mapping workflow is applied to obtain two different input data sets, the orthophoto and the DSM. Low Altitude Aerial Photography (LAAP) was captured using a low-altitude UAV (drone) with a fixed advanced camera, while digital photogrammetric processing in PhotoScan was applied for cartographic data collection. Data processing through photogrammetry and orthomosaic generation constitutes the main application. High image quality is essential for the effectiveness and quality of the usual mapping outputs, such as the 3D model, Digital Elevation Model (DEM), Digital Surface Model (DSM) and orthoimages. The accuracy of the Ground Control Points (GCPs), the flight altitude and the resolution of the camera are essential for a good-quality DEM and orthophoto.
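    A quantity that ties flight altitude and camera resolution to map quality is the ground sampling distance (GSD), the ground footprint of one image pixel. The relation below is the standard photogrammetric formula; the altitude, focal length and pixel pitch in the example are illustrative values, not parameters from this study.

```python
def ground_sampling_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """GSD in metres/pixel: flight altitude times the physical pixel size,
    divided by the lens focal length (pinhole-camera similar triangles)."""
    pixel_m = pixel_pitch_um * 1e-6   # micrometres -> metres
    focal_m = focal_length_mm * 1e-3  # millimetres -> metres
    return altitude_m * pixel_m / focal_m

# e.g. an 80 m flight with an 8.8 mm lens and 2.4 um pixel pitch
gsd = ground_sampling_distance(80.0, 8.8, 2.4)  # roughly 2.2 cm/pixel
```

    Halving the flying height halves the GSD, which is why low-altitude flights are favoured for fine landform detail.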

  17. Extremum-Seeking Control and Applications A Numerical Optimization-Based Approach

    CERN Document Server

    Zhang, Chunlei

    2012-01-01

    Extremum seeking control tracks a varying maximum or minimum in a performance function such as a cost. It attempts to determine the optimal performance of a control system as it operates, thereby reducing downtime and the need for system analysis. Extremum Seeking Control and Applications is divided into two parts. In the first, the authors review existing analog optimization based extremum seeking control including gradient, perturbation and sliding mode based control designs. They then propose a novel numerical optimization based extremum seeking control based on optimization algorithms and state regulation. This control design is developed for simple linear time-invariant systems and then extended for a class of feedback linearizable nonlinear systems. The two main optimization algorithms – line search and trust region methods – are analyzed for robustness. Finite-time and asymptotic state regulators are put forward for linear and nonlinear systems respectively. Further design flexibility is achieved u...
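    The numerical-optimization flavour of extremum seeking can be sketched with a finite-difference gradient estimate of an unknown performance map. This toy omits the state regulation and robustness analysis discussed in the book, and the probe size and gain are arbitrary choices.

```python
def extremum_seek(cost, theta0, gain=0.5, probe=0.01, steps=200):
    """Probe the performance map around the current parameter, estimate
    the local gradient by central differences, and step downhill."""
    theta = theta0
    for _ in range(steps):
        grad = (cost(theta + probe) - cost(theta - probe)) / (2 * probe)
        theta -= gain * grad
    return theta

# Unknown performance map (not available in closed form to the controller)
# with its minimum at theta = 2
theta_star = extremum_seek(lambda t: (t - 2.0) ** 2 + 1.0, theta0=0.0)
```

    In a real plant the "cost" evaluations come from measured performance while the system operates, which is what lets the controller track a slowly varying optimum online.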

  18. Design and optimisation of a pulsed CO2 laser for laser ultrasonic applications

    CSIR Research Space (South Africa)

    Forbes, A

    2006-07-01

    Full Text Available at the material surface is detected and converted into a defect map across the aircraft. The design and optimization of a laser system for this application, together with the basic science involved, is reviewed in this paper. This includes the optimization...

  19. Adaptive treatment-length optimization in spatiobiologically integrated radiotherapy

    Science.gov (United States)

    Ajdari, Ali; Ghate, Archis; Kim, Minsun

    2018-04-01

    Recent theoretical research on spatiobiologically integrated radiotherapy has focused on optimization models that adapt fluence-maps to the evolution of tumor state, for example, cell densities, as observed in quantitative functional images acquired over the treatment course. We propose an optimization model that adapts the length of the treatment course as well as the fluence-maps to such imaged tumor state. Specifically, after observing the tumor cell densities at the beginning of a session, the treatment planner solves a group of convex optimization problems to determine an optimal number of remaining treatment sessions, and a corresponding optimal fluence-map for each of these sessions. The objective is to minimize the total number of tumor cells remaining (TNTCR) at the end of this proposed treatment course, subject to upper limits on the biologically effective dose delivered to the organs-at-risk. This fluence-map is administered in future sessions until the next image is available, and then the number of sessions and the fluence-map are re-optimized based on the latest cell density information. We demonstrate via computer simulations on five head-and-neck test cases that such adaptive treatment-length and fluence-map planning reduces the TNTCR and increases the biological effect on the tumor while employing shorter treatment courses, as compared to only adapting fluence-maps and using a pre-determined treatment course length based on one-size-fits-all guidelines.
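    The biologically effective dose constraint mentioned above is conventionally computed from the linear-quadratic model, BED = n d (1 + d / (alpha/beta)). The sketch below uses that textbook formula with illustrative fractionation schedules and an assumed alpha/beta ratio; it is not the paper's full optimization model.

```python
def biologically_effective_dose(n_fractions, dose_per_fraction, alpha_beta):
    """Linear-quadratic BED: n * d * (1 + d / (alpha/beta)), in Gy."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Compare a conventional 35 x 2 Gy course with a shortened 20 x 3 Gy course
# against an assumed alpha/beta of 3 Gy (late-responding normal tissue).
bed_long = biologically_effective_dose(35, 2.0, 3.0)   # about 116.7 Gy
bed_short = biologically_effective_dose(20, 3.0, 3.0)  # 120.0 Gy
```

    The comparison shows why shortening a course by raising the dose per fraction tightens the organ-at-risk BED limits that the adaptive planner must respect.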

  20. Planning of the steam generators for nuclear applications using optimization techniques

    International Nuclear Information System (INIS)

    Sakai, M.; Silvares, O.M.

    1978-01-01

    A procedure for maximizing the net power of a nuclear power plant through the application of the optimal control theory of dynamic systems is presented. The problem is formulated for the steam generator, which links the primary and the secondary cycle. The solution of the steam generator optimization problem is obtained simultaneously with the heat balance in both the primary and the secondary cycle through an iterative process. In this way the optimal parameters are obtained for the steam generator, the steam cycle and the cooling gas cycle [pt

  1. Optimization of Feasibility Stage for Hydrogen/Deuterium Exchange Mass Spectrometry

    Science.gov (United States)

    Hamuro, Yoshitomo; Coales, Stephen J.

    2018-03-01

    The practice of HDX-MS remains somewhat difficult, not only for newcomers but also for veterans, despite its increasing popularity. While a typical HDX-MS project starts with a feasibility stage where the experimental conditions are optimized and the peptide map is generated prior to the HDX study stage, the literature usually reports only the HDX study stage. In this protocol, we describe a few considerations for the initial feasibility stage, more specifically, how to optimize quench conditions, how to tackle the carryover issue, and how to apply the pepsin specificity rule. Two sets of quench conditions are described, depending on the presence of disulfide bonds, to facilitate the quench condition optimization process. Four protocols are outlined to minimize carryover during the feasibility stage: (1) addition of a detergent to the quench buffer, (2) injection of a detergent or chaotrope into the protease column after each sample injection, (3) back-flushing of the trap column and the analytical column with a new plumbing configuration, and (4) use of PEEK (or PEEK-coated) frits instead of stainless steel frits for the columns. Applying the pepsin specificity rule after, rather than before, peptide map generation is suggested. The rule can be used not only to remove falsely identified peptides, but also to check the sample purity. A well-optimized HDX-MS feasibility stage makes the subsequent HDX study stage smoother and the resulting HDX data more reliable. [Figure not available: see fulltext.]

  2. Optimal design of permanent magnet flux switching generator for wind applications via artificial neural network and multi-objective particle swarm optimization hybrid approach

    International Nuclear Information System (INIS)

    Meo, Santolo; Zohoori, Alireza; Vahedi, Abolfazl

    2016-01-01

    Highlights: • A new optimal design of a flux switching permanent magnet generator is developed. • A prototype is employed to validate the numerical data used for optimization. • A novel hybrid multi-objective particle swarm optimization approach is proposed. • Optimization targets are weight, cost, voltage and its total harmonic distortion. • The superiority of the hybrid approach over other optimization methods is proved. - Abstract: In this paper a new hybrid approach, obtained by combining a multi-objective particle swarm optimization and an artificial neural network, is proposed for the design optimization of a direct-drive permanent magnet flux switching generator for low-power wind applications. The targets of the proposed multi-objective optimization are to reduce the cost and weight of the machine while maximizing the amplitude of the induced voltage as well as minimizing its total harmonic distortion. The permanent magnet width, the stator and rotor tooth width, the rotor teeth number and the stator pole number of the machine define the search space for the optimization problem. Four supervised artificial neural networks are designed for modeling the complex relationships of the weight, the cost, the amplitude and the total harmonic distortion of the output voltage with respect to the quantities of the search space. Finite element analysis is adopted to generate the training dataset for the artificial neural networks. The finite element analysis based model is verified by experimental results with a 1.5 kW permanent magnet flux switching generator prototype suitable for renewable energy applications, having 6/19 stator poles/rotor teeth. Finally the effectiveness of the proposed hybrid procedure is compared with the results given by conventional multi-objective optimization algorithms. The obtained results show the soundness of the proposed multi-objective optimization technique and its feasibility as a suitable methodology for the optimal design of permanent
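    A minimal single-objective particle swarm optimizer conveys the mechanics underlying the hybrid approach. The multi-objective machinery and the ANN surrogates are omitted, all parameter values are assumed, and a sphere function stands in for the generator cost model.

```python
import random

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal single-objective particle swarm optimizer over box bounds.

    Each particle is pulled toward its personal best and the global best,
    with inertia weight w and acceleration coefficients c1, c2.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [cost(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the particle back into the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i][:]
                if f < cost(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy surrogate: minimize a 2-D sphere function standing in for the ANN cost model
best = pso(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```

    In the hybrid method of the record, the trained neural networks play the role of `cost`, so each swarm evaluation avoids a full finite element run.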

  3. Design of application specific long period waveguide grating filters using adaptive particle swarm optimization algorithms

    International Nuclear Information System (INIS)

    Semwal, Girish; Rastogi, Vipul

    2014-01-01

    We present design optimization of wavelength filters based on long period waveguide gratings (LPWGs) using the adaptive particle swarm optimization (APSO) technique. We demonstrate optimization of the LPWG parameters for single-band, wide-band and dual-band rejection filters for testing the convergence of APSO algorithms. After convergence tests on the algorithms, the optimization technique has been implemented to design more complicated application specific filters such as erbium doped fiber amplifier (EDFA) amplified spontaneous emission (ASE) flattening, erbium doped waveguide amplifier (EDWA) gain flattening and pre-defined broadband rejection filters. The technique is useful for designing and optimizing the parameters of LPWGs to achieve complicated application specific spectra. (paper)

  4. Geometrical conditions for completely positive trace-preserving maps and their application to a quantum repeater and a state-dependent quantum cloning machine

    International Nuclear Information System (INIS)

    Carlini, A.; Sasaki, M.

    2003-01-01

    We address the problem of finding optimal CPTP (completely positive trace-preserving) maps between a set of binary pure states and another set of binary generic mixed states in a two-dimensional space. The necessary and sufficient conditions for the existence of such CPTP maps can be discussed within a simple geometrical picture. We exploit this analysis to show the existence of an optimal quantum repeater which is superior to the known repeating strategies for a set of coherent states sent through a lossy quantum channel. We also show that the geometrical formulation of the CPTP mapping conditions can be a simpler method for deriving a state-dependent quantum (anti) cloning machine than the approaches so far based on the explicit solution of several constraints imposed by unitarity in an extended Hilbert space

  5. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain

    Science.gov (United States)

    2011-01-01

    Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods The data used in this study are based on Tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This

  6. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain.

    Science.gov (United States)

    Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A

    2011-11-29

    Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. The data used in this study are based on Tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. 
This study is an attempt to demonstrate how web
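    The density surfaces such geoprocessing services produce can be approximated by a plain 2-D Gaussian kernel density estimate over geocoded case locations. The sketch below is only a stand-in for the REST service; the case coordinates, grid and bandwidth are hypothetical.

```python
import math

def density_map(points, grid_x, grid_y, bandwidth):
    """Gaussian kernel density estimate of point events on a regular grid.

    Returns a row-major grid (indexed [y][x]) of density values.
    """
    inv2h2 = 1.0 / (2.0 * bandwidth * bandwidth)
    norm = 1.0 / (2.0 * math.pi * bandwidth * bandwidth * len(points))
    return [[norm * sum(math.exp(-((gx - px) ** 2 + (gy - py) ** 2) * inv2h2)
                        for px, py in points)
             for gx in grid_x]
            for gy in grid_y]

# Geocoded cases (arbitrary planar coordinates) and a coarse evaluation grid:
# a small cluster near (1, 1) plus one isolated case at (4, 4)
cases = [(1.0, 1.0), (1.2, 0.9), (4.0, 4.0)]
grid = density_map(cases, grid_x=[0, 1, 2, 3, 4], grid_y=[0, 1, 2, 3, 4],
                   bandwidth=0.5)
```

    The density peaks over the cluster, which is exactly the "most affected area" signal the health professionals read off the map; a production service would work in projected map coordinates and choose the bandwidth deliberately.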

  7. Direct aperture optimization: A turnkey solution for step-and-shoot IMRT

    International Nuclear Information System (INIS)

    Shepard, D.M.; Earl, M.A.; Li, X.A.; Naqvi, S.; Yu, C.

    2002-01-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach 'direct aperture optimization'. This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT
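    The simultaneous optimization of leaf settings and aperture weights uses simulated annealing. The generic annealer below illustrates the accept/reject rule on a toy two-weight cost; the cooling schedule, step size and cost function are assumed for illustration, not taken from the paper.

```python
import math
import random

def anneal(cost, x0, step=0.5, t0=1.0, cooling=0.95, iters=500, seed=7):
    """Minimal simulated annealing: perturb the current solution and accept
    uphill moves with probability exp(-delta / T) under a geometric cooling
    schedule, so early exploration gives way to greedy refinement."""
    rng = random.Random(seed)
    x, fx, t = list(x0), cost(x0), t0
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in x]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# Toy cost standing in for a dose-based objective over two aperture weights,
# minimized at weights (1.0, 2.0)
best, best_f = anneal(lambda w: (w[0] - 1.0) ** 2 + (w[1] - 2.0) ** 2, [0.0, 0.0])
```

    In direct aperture optimization the candidate moves also perturb MLC leaf positions, and candidates violating machine delivery constraints are simply rejected, which is how those constraints are enforced inside the optimizer rather than in a separate leaf-sequencing step.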

  8. Rice genome mapping and its application in rice genetics and breeding

    International Nuclear Information System (INIS)

    Eun, M.Y.; Cho, Y.G.; Hahn, J.H.; Yoon, U.H.; Yi, B.Y.; Chung, T.Y.

    1998-01-01

    An 'MG' recombinant inbred population consisting of 164 F13 lines has been developed from a cross between the Tongil type variety Milyang 23 and the Japonica type Gihobyeo by single seed descent. A Restriction Fragment Length Polymorphism (RFLP) framework map using this population has been constructed. Morphological markers, isozyme loci, microsatellites, Amplified Fragment Length Polymorphisms (AFLPs), and new complementary DNA (cDNA) markers are being integrated into the framework map to produce a highly saturated comprehensive map. So far, 207 RFLPs, 89 microsatellites, 5 isozymes, 232 AFLPs, and 2 morphological markers have been mapped through international collaboration. The map spans 1,826 cM with an average interval size of 4.5 cM on the framework map and 3.4 cM overall (as of 29 October 1996). The framework map is being used for analyzing quantitative trait loci (QTLs) of agronomic characters and some physico-chemical properties relating to rice quality. The number of significant QTLs affecting each trait ranged from one to five, and 38 QTLs were detected for 17 traits. The percentage of variance explained by each QTL ranged from 5.6 to 66.9%. The isozyme marker, EstI-2, and two RFLP markers, RG109 and RG220, were most tightly linked, at a distance of less than 1 cM, to the semidwarf (sd-1) gene on chromosome 1. These markers could be used for precise in vitro selection of individuals carrying the semidwarf gene using single seeds or very young leaf tissue, before this character is fully expressed. Appropriate application of marker-assisted selection, using EstI-2 and RFLP markers for the semidwarf character in combination with other markers linked to genes of agronomic importance in rice, holds promise for improving the efficiency of breeding and for high-resolution genetic and physical mapping near sd-1, aimed at ultimately cloning this valuable gene
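    Map distances such as the centiMorgan intervals quoted above are derived from observed recombination fractions through a mapping function. The two classical conversions, Haldane (no interference) and Kosambi (partial interference), are sketched below; the 10% recombination fraction in the example is illustrative only.

```python
import math

def haldane_cM(r):
    """Haldane map distance (cM) from recombination fraction r (no interference)."""
    return -50.0 * math.log(1.0 - 2.0 * r)

def kosambi_cM(r):
    """Kosambi map distance (cM) from recombination fraction r (partial interference)."""
    return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

# A 10% recombination fraction between two markers
d_h = haldane_cM(0.10)   # about 11.2 cM
d_k = kosambi_cM(0.10)   # about 10.1 cM
```

    For tightly linked loci, such as markers within 1 cM of sd-1, both functions reduce to distance roughly equal to 100 r, so the choice of mapping function matters little there.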

  9. Method for mapping population-based case-control studies: an application using generalized additive models

    Directory of Open Access Journals (Sweden)

    Aschengrau Ann

    2006-06-01

    Full Text Available Abstract Background Mapping spatial distributions of disease occurrence and risk can serve as a useful tool for identifying exposures of public health concern. Disease registry data are often mapped by town or county of diagnosis and contain limited data on covariates. These maps often possess poor spatial resolution, the potential for spatial confounding, and the inability to consider latency. Population-based case-control studies can provide detailed information on residential history and covariates. Results Generalized additive models (GAMs) provide a useful framework for mapping point-based epidemiologic data. Smoothing on location while controlling for covariates produces adjusted maps. We generate maps of odds ratios using the entire study area as a reference. We smooth using a locally weighted regression smoother (loess), a method that combines the advantages of nearest neighbor and kernel methods. We choose an optimal degree of smoothing by minimizing Akaike's Information Criterion. We use a deviance-based test to assess the overall importance of location in the model and pointwise permutation tests to locate regions of significantly increased or decreased risk. The method is illustrated with synthetic data and with data from a population-based case-control study, using S-Plus and ArcView software. Conclusion Our goal is to develop practical methods for mapping population-based case-control and cohort studies. The method described here performs well for our synthetic data, reproducing important features of the data and adequately controlling the covariate. When applied to the population-based case-control data set, the method suggests spatial confounding and identifies statistically significant areas of increased and decreased odds ratios.
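    The loess idea, averaging neighbours with distance-decaying tricube weights over a chosen span, can be sketched as follows. This is a deliberate degree-0 simplification (a weighted local mean rather than a local regression) on hypothetical 1-D data; the paper smooths over two spatial dimensions and selects the span by AIC.

```python
def loess_smooth(xs, ys, span=0.5):
    """Degree-0 loess sketch: at each point, average the responses of the
    nearest `span` fraction of the data, weighted by the tricube kernel
    (1 - u^3)^3 on the scaled distance u."""
    n = len(xs)
    k = max(2, int(round(span * n)))      # neighbourhood size from the span
    out = []
    for x0 in xs:
        # bandwidth = distance to the k-th nearest neighbour
        h = sorted(abs(x - x0) for x in xs)[:k][-1] or 1e-12
        num = den = 0.0
        for x, y in zip(xs, ys):
            u = abs(x - x0) / h
            if u < 1.0:
                w = (1.0 - u ** 3) ** 3
                num += w * y
                den += w
        out.append(num / den)
    return out

xs = list(range(10))
ys = [0, 0, 0, 10, 0, 0, 0, 0, 0, 0]     # a single spike to be smoothed
smooth = loess_smooth(xs, ys, span=0.4)
```

    The spike is spread over its neighbourhood rather than eliminated, which is the behaviour that makes the adjusted odds-ratio maps legible without erasing genuine local structure.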

  10. A guide of patent map

    International Nuclear Information System (INIS)

    1999-12-01

    This book introduces the application and characteristics of patent information; the types of patent information data and patent information research; the arrangement of patent information and patent maps; the analysis of patent information; the necessity, preparation period and arrangement of patent maps; cases of patent maps in the selection of research and development tasks, in research and development systems and in applications; examples of patent maps such as maps by year, application, technique and inventor, and claim-point maps; and computerization, including data arrangement of patent-map patents, selection of analysis scope, item analysis of patents, and cases and written reports on patent analysis.

  11. A Fast Optimization Method for Reliability and Performance of Cloud Services Composition Application

    Directory of Open Access Journals (Sweden)

    Zhao Wu

    2013-01-01

    Full Text Available At present cloud computing is one of the newest trends in distributed computation, and it is propelling another important revolution in the software industry. Cloud services composition is one of the key techniques in software development. The optimization of the reliability and performance of a cloud services composition application, which is a typical stochastic optimization problem, is confronted with severe challenges due to its randomness and long transactions, as well as characteristics of cloud computing resources such as openness and dynamism. Traditional reliability and performance optimization techniques, for example Markov models and state-space analysis, have defects: they are time consuming, easily cause state-space explosion, and rely on assumptions of component execution independence that are not satisfied in practice. To overcome these defects, we propose in this paper a fast optimization method for the reliability and performance of cloud services composition applications, based on the universal generating function and a genetic algorithm. First, a reliability and performance model for cloud services composition applications based on multi-state system theory is presented. Then a reliability and performance definition based on the universal generating function is proposed. Based on this, a fast reliability and performance optimization algorithm is presented. In the end, illustrative examples are given.
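    The universal generating function at the heart of the method represents each component as a distribution over discrete performance levels and composes components with structure operators. A minimal sketch, with two hypothetical cloud services (the states and probabilities are invented for illustration):

```python
from collections import defaultdict

def ugf_combine(u1, u2, op):
    """Compose two u-functions, each a dict mapping performance level ->
    probability, with a structure operator `op` (min for a series/pipeline
    structure, addition for a parallel structure). Probabilities of
    independent components multiply; equal output levels accumulate."""
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

# Two services, each with a nominal and a degraded performance state
s1 = {100: 0.9, 50: 0.1}
s2 = {100: 0.8, 0: 0.2}
series = ugf_combine(s1, s2, min)   # a pipeline is limited by its slower stage
p_full = series.get(100, 0.0)       # probability of full performance
```

    Because composition is just polynomial-style multiplication over states, the whole composite distribution is built without enumerating a Markov state space, which is the speed advantage the paper exploits.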

  12. Overlay improvement by exposure map based mask registration optimization

    Science.gov (United States)

    Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric

    2015-03-01

    Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, DPT (Double Patterning Technology) has been adopted for advanced technology nodes like 28nm and 14nm, and the corresponding overlay budget becomes even tighter. [2][3] After in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay at the 28nm process. [4][5] Mask registration optimization would therefore substantially improve wafer overlay performance. It was reported that a laser-based registration control (RegC) process could be applied after pattern generation or after pellicle mounting and allowed fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction that can be applied before mask writing, based on the mask exposure map and considering the factors of mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if the pattern density on the mask is kept at a low level, the in-die mask registration residual error (3 sigma) stays under 5nm regardless of the blank type and the writer POSCOR (position correction) file applied; this suggests that the random error induced by material or equipment occupies a relatively fixed portion of the mask registration error budget. In production, comparing the mask registration difference across critical production layers reveals that the registration residual error of line/space layers with higher pattern density is always much larger than that of contact-hole layers with lower pattern density. Additionally, the mask registration difference between layers with similar pattern density

  13. Web mapping application of Roman Catholic Church administration in the Czech lands in the early modern period

    Directory of Open Access Journals (Sweden)

    Pavel Seemann

    2017-03-01

    Full Text Available The reconstruction of historical spatial relationships is still a topical issue in historical geography. In this respect, Church history has not been well explored. The parish administration in the Czech lands has been evolving since the advent of Christianity in 863, and a number of reforms have passed over the centuries. The administration also underwent significant changes during the recatholicisation of the Czech lands in the 17th and 18th century. Written sources from this Baroque era have been preserved, so they can be utilized to reconstruct the historical Church administration in the form of a web mapping application. The paper briefly introduces the methods that were used to build a spatial database filled with historical data. The main outcome of this paper, however, is to describe the creation of the web mapping application that visualises these data. Topics discussed include the cartographic projection, the choice of map symbols, data generalization for different levels of detail and the placement of annotations. The cartographic data are displayed on the ArcGIS platform, through a combination of map tiles and feature services bundled into an application template created in Web AppBuilder.

  14. Shape signature based on Ricci flow and optimal mass transportation

    Science.gov (United States)

    Luo, Wei; Su, Zengyu; Zhang, Min; Zeng, Wei; Dai, Junfei; Gu, Xianfeng

    2014-11-01

    A shape signature based on surface Ricci flow and optimal mass transportation is introduced for the purpose of surface comparison. First, the surface is conformally mapped onto the plane by Ricci flow, which induces a measure on the planar domain. Second, the unique optimal mass transport map is computed that transports this new measure to the canonical measure on the plane. The map is obtained by a convex optimization process. This optimal transport map encodes all the information of the Riemannian metric on the surface. The shape signature consists of the optimal transport map together with the mean curvature, which can fully recover the original surface. The discrete theories of surface Ricci flow and optimal mass transportation are explained thoroughly, and the algorithms are given in detail. The signature is tested on human facial surfaces with different expressions acquired by a structured-light 3-D scanner based on the phase-shifting method. The experimental results demonstrate the efficiency and efficacy of the method.
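    Optimal mass transport is easiest to see in one dimension, where the optimal (monotone) map simply pairs sorted samples. The toy below computes a squared-cost transport value between two small point sets; the planar transport map used in the paper requires the full convex optimization and is not reproduced here.

```python
def transport_cost_1d(xs, ys):
    """Average squared-cost optimal transport between two equal-size 1-D
    point sets with uniform weights. In 1-D the optimal coupling is the
    monotone one, so it suffices to pair the sorted samples."""
    assert len(xs) == len(ys)
    return sum((a - b) ** 2 for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Two toy empirical measures: one is the other shifted by 0.5
cost = transport_cost_1d([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])  # 0.25
```

    The same monotone-pairing intuition is what the convex optimization generalizes to the plane, where no sorting order exists and the transport map must be found variationally.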

  15. Landslide susceptibility map: from research to application

    Science.gov (United States)

    Fiorucci, Federica; Reichenbach, Paola; Ardizzone, Francesca; Rossi, Mauro; Felicioni, Giulia; Antonini, Guendalina

    2014-05-01

    A susceptibility map is an important and essential tool in environmental planning, used to evaluate landslide hazard and risk and for a correct and responsible management of the territory. Landslide susceptibility is the likelihood of a landslide occurring in an area on the basis of local terrain conditions. It can be expressed as the probability that any given region will be affected by landslides, i.e. an estimate of "where" landslides are likely to occur. In this work we present two examples of landslide susceptibility maps, prepared for the Umbria Region and for the Perugia Municipality. These two maps were realized following official requests from the Regional and Municipal governments to the Research Institute for Hydrogeological Protection (CNR-IRPI). The susceptibility map prepared for the Umbria Region represents the development of previous agreements focused on preparing: i) a landslide inventory map that was included in the Urban Territorial Planning (PUT) and ii) a series of maps for the Regional Plan for Multi-risk Prevention. The activities carried out for the Umbria Region focused on defining and applying methods and techniques for landslide susceptibility zonation. Susceptibility maps were prepared exploiting a multivariate statistical model (linear discriminant analysis) for the five Civil Protection Alert Zones defined in the regional territory. The five resulting maps were tested and validated using the spatial distribution of recent landslide events that occurred in the region. The susceptibility map for the Perugia Municipality was prepared to be integrated as one of the cartographic products in the Municipal development plan (PRG - Piano Regolatore Generale), as required by the existing legislation. At the strategic level, one of the main objectives of the PRG is to establish a framework of knowledge and legal aspects for the management of geo-hydrological risk. 
At national level most of the susceptibility maps prepared for the PRG, were and still are obtained
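As a sketch of the statistical core of such zonation, the following minimal two-class Fisher linear discriminant separates synthetic "stable" and "landslide" terrain cells. The features (slope, local relief), all sample values, and the scoring are illustrative assumptions, not data or code from the Umbria study.

```python
# Minimal two-class Fisher linear discriminant, the kind of multivariate
# statistical model used for landslide susceptibility zonation.
# The terrain "features" (slope, local relief) and labels are synthetic.

def mean(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def scatter(rows, m):
    # 2x2 within-class scatter contribution of one class.
    s = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = [r[0] - m[0], r[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fisher_direction(stable, unstable):
    m0, m1 = mean(stable), mean(unstable)
    s0, s1 = scatter(stable, m0), scatter(unstable, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    # w = Sw^-1 (m1 - m0)
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

# Synthetic (slope deg, local relief m) samples for stable/landslide cells.
stable = [(5, 10), (8, 12), (6, 9), (7, 14), (4, 8)]
unstable = [(25, 40), (30, 45), (28, 38), (27, 50), (32, 42)]
w = fisher_direction(stable, unstable)

def susceptibility_score(cell):
    # Higher discriminant scores indicate higher susceptibility.
    return w[0] * cell[0] + w[1] * cell[1]
```

Scores from the discriminant direction can then be sliced into susceptibility classes for mapping.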

  16. CRISM Multispectral and Hyperspectral Mapping Data - A Global Data Set for Hydrated Mineral Mapping

    Science.gov (United States)

    Seelos, F. P.; Hash, C. D.; Murchie, S. L.; Lim, H.

    2017-12-01

The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is a visible through short-wave infrared hyperspectral imaging spectrometer (VNIR S-detector: 364-1055 nm; IR L-detector: 1001-3936 nm; 6.55 nm sampling) that has been in operation on the Mars Reconnaissance Orbiter (MRO) since 2006. Over the course of the MRO mission, CRISM has acquired 290,000 individual mapping observation segments (mapping strips) with a variety of observing modes and data characteristics (VNIR/IR; 100/200 m/pxl; multi-/hyper-spectral band selection) over a wide range of observing conditions (atmospheric state, observation geometry, instrument state). CRISM mapping data coverage density varies primarily with latitude and secondarily due to seasonal and operational considerations. The aggregate global IR mapping data coverage currently stands at 85% (80% at the equator, with 40% repeat sampling), which is sufficient spatial sampling density to support the assembly of empirically optimized, radiometrically consistent mapping mosaic products. The CRISM project has defined a number of mapping mosaic data products (e.g. Multispectral Reduced Data Record (MRDR) map tiles) with varying degrees of observation-specific processing and correction applied prior to mosaic assembly. A commonality among the mosaic products is the presence of inter-observation radiometric discrepancies, which are traceable to variable observation circumstances or associated atmospheric/photometric correction residuals. The empirical approach to radiometric reconciliation leverages inter-observation spatial overlaps and proximal relationships to construct a graph that encodes the mosaic structure and radiometric discrepancies. The graph theory abstraction allows the underlying structure of the mosaic to be evaluated and the corresponding optimization problem to be configured so that it is well-posed.
Linear and non-linear least-squares optimization is then employed to derive a set of observation- and wavelength-specific model
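The graph-based reconciliation idea can be sketched in miniature: treat each mapping strip as a node, each spatial overlap as an edge carrying a measured radiometric discrepancy, and solve a linear least-squares problem for one additive correction per strip. The strip indices and discrepancy values below are invented for illustration; the real CRISM pipeline fits observation- and wavelength-specific models, not single offsets.

```python
# Toy radiometric reconciliation: solve for per-strip corrections c so that
# c[j] - c[i] ~= d on every overlap edge (i, j, d), with strip 0 gauge-fixed.

def solve(a, b):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def reconcile(n_strips, edges):
    # edges: (i, j, d) means strip j was measured d brighter than strip i
    # in their overlap. Build the least-squares normal equations for the
    # unknown corrections c[1..n_strips-1] (c[0] fixed at 0).
    n = n_strips - 1
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for i, j, d in edges:
        for node, sign in ((i, -1.0), (j, 1.0)):
            if node == 0:
                continue
            k = node - 1
            b[k] += sign * d
            for other, osign in ((i, -1.0), (j, 1.0)):
                if other != 0:
                    A[k][other - 1] += sign * osign
    return [0.0] + solve(A, b)

# Three strips; the third measured offset (5.2) is slightly inconsistent
# with the chained offsets (2.0 + 3.0), so the solution spreads the error.
corrections = reconcile(3, [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 5.2)])
```

Subtracting each correction from its strip minimizes the summed squared overlap discrepancies across the mosaic.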

  17. Visibility-based optimal path and motion planning

    CERN Document Server

    Wang, Paul Keng-Chieh

    2015-01-01

This monograph deals with various visibility-based path and motion planning problems motivated by real-world applications such as the exploration and mapping of planetary surfaces, environmental surveillance using stationary or mobile robots, and the imaging of global air/pollutant circulation. The formulation and solution of these problems call for concepts and methods from many areas of applied mathematics, including computational geometry, set covering, non-smooth optimization, combinatorial optimization and optimal control. Emphasis is placed on the formulation of new problems and on methods of approach to these problems. Since geometry and visualization play important roles in the understanding of these problems, intuitive interpretations of the basic concepts are presented before detailed mathematical development. The development of a particular topic begins with simple cases illustrated by specific examples and then progresses to more complex cases. The intended readers of this monograph are primarily studen...

  18. Application of Nontraditional Optimization Techniques for Airfoil Shape Optimization

    Directory of Open Access Journals (Sweden)

    R. Mukesh

    2012-01-01

The choice of optimization algorithm is one of the most important factors that influence the fidelity of the solution in an aerodynamic shape optimization problem. Nowadays, various optimization methods, such as the genetic algorithm (GA), simulated annealing (SA), and particle swarm optimization (PSO), are widely employed to solve aerodynamic shape optimization problems. In addition to the optimization method, the geometry parameterization is an important factor to be considered during the aerodynamic shape optimization process. The objective of this work is to describe general airfoil geometry using twelve parameters, representing its shape as a polynomial function, and to couple this approach with a flow solver and optimization algorithms. An aerodynamic shape optimization problem is formulated for the NACA 0012 airfoil and solved using simulated annealing and a genetic algorithm at a 5.0 deg angle of attack. The results show that the simulated annealing scheme is more effective in finding the optimum among the various possible solutions. It is also found that SA shows more exploitation characteristics, whereas the GA is considered the more effective explorer.
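The annealing loop at the heart of such a scheme can be sketched on a simple multimodal stand-in objective. The objective function, Gaussian neighbourhood move, and geometric cooling schedule below are generic textbook choices, not the paper's airfoil/CFD setup.

```python
import math
import random

# Minimal simulated annealing: accept improvements always, accept uphill
# moves with Boltzmann probability exp(-delta/T), and cool geometrically.

def objective(x):
    # Multimodal stand-in for a drag-like objective; global minimum at x = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def anneal(x0, t0=10.0, cooling=0.95, steps=2000, seed=1):
    rng = random.Random(seed)
    x, fx, t = x0, objective(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)          # random neighbour
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

x_opt, f_opt = anneal(4.0)
```

In the airfoil setting, `x` would be the twelve shape parameters and `objective` a flow-solver evaluation.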

  19. Myocardial T1 and T2 mapping: Techniques and clinical applications

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Pan Ki; Hong, Yoo Jin; Im, Dong Jin [Dept. of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); and others

    2017-01-15

    Cardiac magnetic resonance (CMR) imaging is widely used in various medical fields related to cardiovascular diseases. Rapid technological innovations in magnetic resonance imaging in recent times have resulted in the development of new techniques for CMR imaging. T1 and T2 image mapping sequences enable the direct quantification of T1, T2, and extracellular volume fraction (ECV) values of the myocardium, leading to the progressive integration of these sequences into routine CMR settings. Currently, T1, T2, and ECV values are being recognized as not only robust biomarkers for diagnosis of cardiomyopathies, but also predictive factors for treatment monitoring and prognosis. In this study, we have reviewed various T1 and T2 mapping sequence techniques and their clinical applications.
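To make the quantification idea concrete, the following sketch estimates T1 per pixel from magnitude inversion-recovery samples by brute-force search. The signal model, inversion times, and the "true" tissue value are illustrative assumptions; clinical mapping sequences (e.g. MOLLI) use more elaborate fitting and motion correction.

```python
import math

# Illustrative per-pixel T1 estimation from inversion-recovery samples.

def ir_signal(ti, a, t1):
    # Magnitude inversion-recovery signal model: |A(1 - 2 exp(-TI/T1))|.
    return abs(a * (1.0 - 2.0 * math.exp(-ti / t1)))

def fit_t1(tis, samples, a, lo=200.0, hi=2500.0, step=1.0):
    # Brute-force 1-D search over candidate T1 values (ms).
    best_t1, best_err = lo, float("inf")
    t1 = lo
    while t1 <= hi:
        err = sum((ir_signal(ti, a, t1) - s) ** 2
                  for ti, s in zip(tis, samples))
        if err < best_err:
            best_t1, best_err = t1, err
        t1 += step
    return best_t1

tis = [100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0]  # inversion times, ms
true_t1, a = 950.0, 1.0                             # synthetic myocardium-like T1
samples = [ir_signal(ti, a, true_t1) for ti in tis]
estimate = fit_t1(tis, samples, a)
```

Repeating the fit at every pixel yields the T1 map; T2 mapping follows the same pattern with a decay model fitted over echo times.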

  20. Web mapping: tools and solutions for creating interactive maps of forestry interest

    Directory of Open Access Journals (Sweden)

    Notarangelo G

    2011-12-01

The spread of geobrowsers as tools for displaying geographically referenced information provides opportunities for those who, while not specialists in Geographic Information Systems (GIS), want to take advantage of the exploration and communication power offered by this software. Through web services such as Google Maps and suitable markup languages, one can create interactive maps from highly heterogeneous data and information. These interactive maps can also be easily distributed and shared with Internet users, because they require neither proprietary software nor special skills, only a web browser. Unlike maps created with GIS, whose output is usually a static image, interactive maps retain all their features to the users' advantage. This paper describes a web application that, using the Keyhole Markup Language (KML) and the free Google Maps service, produces choropleth maps of forest indicators estimated by the last Italian National Forest Inventory. Maps are created through a simple and intuitive interface, can be downloaded as KML files, and can be viewed or modified with the freeware application Google Earth or free and open-source GIS software such as Quantum GIS. The web application is free and available at www.ricercaforestale.it.
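The KML-generation step behind such a choropleth can be sketched roughly as follows. The region name, coordinates, class breaks, and palette are invented placeholders; the described web application emits much richer KML for real inventory indicators.

```python
# Encode a class value as a KML polygon style so a free geobrowser
# (Google Earth, QGIS) can render a choropleth. KML colors are aabbggrr.

KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>{placemarks}
</Document>
</kml>"""

def color_for(value, breaks=(50.0, 100.0, 150.0)):
    # Light-to-dark green classes by indicator value.
    palette = ["ff90ee90", "ff32cd32", "ff228b22", "ff006400"]
    cls = sum(value > b for b in breaks)
    return palette[cls]

def placemark(name, value, ring):
    coords = " ".join(f"{lon},{lat},0" for lon, lat in ring)
    return (f"\n<Placemark><name>{name}</name>"
            f"<Style><PolyStyle><color>{color_for(value)}</color>"
            f"</PolyStyle></Style>"
            f"<Polygon><outerBoundaryIs><LinearRing><coordinates>"
            f"{coords}</coordinates></LinearRing></outerBoundaryIs>"
            f"</Polygon></Placemark>")

# Invented region polygon (lon, lat) and forest-indicator value.
ring = [(12.3, 43.1), (12.5, 43.1), (12.5, 43.3), (12.3, 43.3), (12.3, 43.1)]
kml = KML_TEMPLATE.format(placemarks=placemark("Region A", 120.0, ring))
```

Writing `kml` to a `.kml` file produces something Google Earth or Quantum GIS can open directly.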

  1. PID control design for chaotic synchronization using a tribes optimization approach

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: leandro.coelho@pucpr.br; Andrade Bernert, Diego Luis de [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: dbernert@gmail.com

    2009-10-15

Recently, the investigation of synchronization and control problems for discrete chaotic systems has stimulated a wide range of research activity, including both theoretical studies and practical applications. This paper deals with the tuning of a proportional-integral-derivative (PID) controller using a modified Tribes optimization algorithm based on a truncated chaotic Zaslavskii map (MTribes) for the synchronization of two identical discrete chaotic systems subject to different initial conditions. The Tribes algorithm is inspired by the social behavior of bird flocking and is an adaptive optimization procedure that does not require sociometric or swarm size parameter tuning. Numerical simulations are given to show the effectiveness of the proposed synchronization method. In addition, comparisons of the MTribes optimization algorithm with other continuous optimization methods, including the classical Tribes algorithm and particle swarm optimization approaches, are presented.

  2. PID control design for chaotic synchronization using a tribes optimization approach

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos; Andrade Bernert, Diego Luis de

    2009-01-01

Recently, the investigation of synchronization and control problems for discrete chaotic systems has stimulated a wide range of research activity, including both theoretical studies and practical applications. This paper deals with the tuning of a proportional-integral-derivative (PID) controller using a modified Tribes optimization algorithm based on a truncated chaotic Zaslavskii map (MTribes) for the synchronization of two identical discrete chaotic systems subject to different initial conditions. The Tribes algorithm is inspired by the social behavior of bird flocking and is an adaptive optimization procedure that does not require sociometric or swarm size parameter tuning. Numerical simulations are given to show the effectiveness of the proposed synchronization method. In addition, comparisons of the MTribes optimization algorithm with other continuous optimization methods, including the classical Tribes algorithm and particle swarm optimization approaches, are presented.

  3. Development of a software for reconstruction of X-ray fluorescence intensity maps

    International Nuclear Information System (INIS)

    Almeida, Andre Pereira de; Braz, Delson; Mota, Carla Lemos; Oliveira, Luis Fernando de; Barroso, Regina Cely; Pinto, Nivia Graciele Villela; Cardoso, Simone Coutinho; Moreira, Silvana

    2009-01-01

X-ray fluorescence (XRF) using synchrotron radiation (SR) microbeams is a powerful analytical technique for studying the elemental composition of various samples. One application of this technique is analysis through the mapping of chemical elements, forming a matrix of data. The aim of this work is to present MapXRF, an in-house program designed to optimize the processing and mapping of fluorescence intensity data. The program uses spectra generated by QXAS as input and separates the intensities of each chemical element found in the fluorescence spectra into per-element files. From these files, it generates intensity maps that can be visualized in any image-processing program. The proposed software was tested using fluorescence data obtained at the XRF beamline of the Synchrotron Light National Laboratory (LNLS), Brazil. Automatic 2D scans were performed and element distribution maps were obtained in the form of a matrix of data. (author)

  4. Development of a software for reconstruction of X-ray fluorescence intensity maps

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Andre Pereira de; Braz, Delson; Mota, Carla Lemos, E-mail: apalmeid@gmail.co, E-mail: delson@lin.ufrj.b, E-mail: clemos@con.ufrj.b [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Oliveira, Luis Fernando de; Barroso, Regina Cely; Pinto, Nivia Graciele Villela, E-mail: cely@uerj.b, E-mail: lfolive@uerj.b, E-mail: nitatag@gmail.co [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Inst. de Fisica; Cardoso, Simone Coutinho, E-mail: simone@if.ufrj.b [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Inst. de Fisica; Moreira, Silvana, E-mail: silvana@fec.unicamp.b [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Faculdade de Engenharia Civil, Arquitetura e Urbanismo

    2009-07-01

X-ray fluorescence (XRF) using synchrotron radiation (SR) microbeams is a powerful analytical technique for studying the elemental composition of various samples. One application of this technique is analysis through the mapping of chemical elements, forming a matrix of data. The aim of this work is to present MapXRF, an in-house program designed to optimize the processing and mapping of fluorescence intensity data. The program uses spectra generated by QXAS as input and separates the intensities of each chemical element found in the fluorescence spectra into per-element files. From these files, it generates intensity maps that can be visualized in any image-processing program. The proposed software was tested using fluorescence data obtained at the XRF beamline of the Synchrotron Light National Laboratory (LNLS), Brazil. Automatic 2D scans were performed and element distribution maps were obtained in the form of a matrix of data. (author)

  5. Sorting, Searching, and Simulation in the MapReduce Framework

    DEFF Research Database (Denmark)

    Goodrich, Michael T.; Sitchinava, Nodar; Zhang, Qin

    2011-01-01

    We study the MapReduce framework from an algorithmic standpoint, providing a generalization of the previous algorithmic models for MapReduce. We present optimal solutions for the fundamental problems of all-prefix-sums, sorting and multi-searching. Additionally, we design optimal simulations...
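The all-prefix-sums primitive can be illustrated with a toy two-round computation: round 1 reduces each partition to its local sum, a small sequential scan over the partition sums yields per-partition offsets, and round 2 emits the final prefix sums. The partitioning and round structure here are a simplification of the formal MapReduce model studied in the paper.

```python
# Toy MapReduce-style all-prefix-sums over contiguous partitions.

def prefix_sums_mr(values, num_parts=4):
    # "Shuffle": split the input into contiguous partitions.
    size = -(-len(values) // num_parts)          # ceiling division
    parts = [values[i:i + size] for i in range(0, len(values), size)]

    # Round 1 (reduce per partition): local partial sums.
    part_sums = [sum(p) for p in parts]

    # Exclusive scan over the (small) list of partition sums.
    offsets, running = [], 0
    for s in part_sums:
        offsets.append(running)
        running += s

    # Round 2 (map per partition, with its offset broadcast to it).
    out = []
    for part, off in zip(parts, offsets):
        acc = off
        for v in part:
            acc += v
            out.append(acc)
    return out

result = prefix_sums_mr([3, 1, 4, 1, 5, 9, 2, 6])
```

Each round touches every element once, matching the constant-round, linear-work flavor of the optimal MapReduce solutions.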

  6. Retrieval interval mapping, a tool to optimize the spectral retrieval range in differential optical absorption spectroscopy

    Science.gov (United States)

    Vogel, L.; Sihler, H.; Lampel, J.; Wagner, T.; Platt, U.

    2012-06-01

Remote sensing via differential optical absorption spectroscopy (DOAS) has become a standard technique for identifying and quantifying trace gases in the atmosphere. The technique is applied in a variety of configurations, commonly classified into active and passive instruments using artificial and natural light sources, respectively. Platforms range from ground-based to satellite instruments, and trace gases are studied in many different environments. Due to the wide range of measurement conditions, atmospheric compositions and instruments used, a specific challenge of a DOAS retrieval is to optimize the parameters for each specific case and particular trace gas of interest. This becomes especially important when measuring close to the detection limit. A well-chosen evaluation wavelength range is crucial to the DOAS technique. It should encompass strong absorption bands of the trace gas of interest in order to maximize the sensitivity of the retrieval, while at the same time minimizing absorption structures of other trace gases and thus potential interferences. Instrumental limitations and wavelength-dependent sources of error (e.g. insufficient corrections for the Ring effect and cross correlations between trace gas cross sections) also need to be taken into account. Most often, not all of these requirements can be fulfilled simultaneously, and a compromise must be found depending on the conditions at hand. Although for many trace gases the overall dependence of common DOAS retrievals on the evaluation wavelength interval is known, a systematic approach to finding the optimal retrieval wavelength range, with a qualitative assessment, has been missing. Here we present a novel tool to determine the optimal evaluation wavelength range. It is based on mapping retrieved values in the retrieval wavelength space, thus visualizing the consequences of different choices of retrieval spectral ranges, e.g.
caused by slightly erroneous absorption cross sections, cross correlations and
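A heavily simplified version of the mapping idea: repeat a one-parameter DOAS-style fit for every candidate evaluation window and store the retrieved value on a (lower edge, upper edge) grid. The absorber cross section, the interfering structure, and the true column below are all synthetic.

```python
import math

# Synthetic spectrum: a Gaussian absorber cross section plus a small
# sinusoidal interference standing in for unexplained spectral structure.
WL = [300 + i for i in range(51)]                        # wavelength grid, nm
SIGMA = [math.exp(-((w - 325) / 8.0) ** 2) for w in WL]  # target cross section
TRUE_COL = 2.0
TAU = [TRUE_COL * s + 0.05 * math.sin(w / 2.0) for w, s in zip(WL, SIGMA)]

def fit_column(lo, hi):
    # One-parameter least-squares fit of TAU ~= column * SIGMA on [lo, hi].
    num = den = 0.0
    for w, s, t in zip(WL, SIGMA, TAU):
        if lo <= w <= hi:
            num += s * t
            den += s * s
    return num / den if den > 0 else float("nan")

# Retrieval interval map: retrieved column for every window >= 10 nm wide.
interval_map = {(lo, hi): fit_column(lo, hi)
                for lo in WL for hi in WL if hi - lo >= 10}
best = min(interval_map, key=lambda k: abs(interval_map[k] - TRUE_COL))
```

Plotting `interval_map` over its (lower, upper) axes gives exactly the kind of 2-D diagnostic picture the tool produces, with window-dependent retrieval biases visible at a glance.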

  7. Optimizing sgRNA position markedly improves the efficiency of CRISPR/dCas9-mediated transcriptional repression

    DEFF Research Database (Denmark)

    Radzisheuskaya, Aliaksandra; Shlyueva, Daria; Müller, Iris

    2016-01-01

    CRISPR interference (CRISPRi) represents a newly developed tool for targeted gene repression. It has great application potential for studying gene function and mapping gene regulatory elements. However, the optimal parameters for efficient single guide RNA (sgRNA) design for CRISPRi are not fully...

  8. A methodology for optimal MSW management, with an application in the waste transportation of Attica Region, Greece

    International Nuclear Information System (INIS)

    Economopoulou, M.A.; Economopoulou, A.A.; Economopoulos, A.P.

    2013-01-01

Highlights: • A two-step (strategic and detailed optimal planning) methodology is used for solving complex MSW management problems. • A software package is outlined, which can be used for generating detailed optimal plans. • Sensitivity analysis compares alternative scenarios that address objections and/or wishes of local communities. • A case study shows the application of the above procedure in practice and demonstrates the results and benefits obtained. - Abstract: The paper describes a software system capable of formulating alternative optimal Municipal Solid Waste (MSW) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and annual operating cost of all transportation, treatment and final disposal operations involved, taking into consideration the possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study area, municipality and installation levels, constituting, in effect, a sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice, and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper also presents an application of this software system in the capital Region of Attica in Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan. The formulated plan was able to
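The plan-formulation step can be illustrated on a toy instance: choose, for each municipality, a treatment facility so that annualized capital plus operating plus transport cost is minimized. All names, tonnages, distances, and cost coefficients below are invented; the described system solves far larger instances and supports community-driven constraints.

```python
from itertools import product

# Toy MSW plan: assign each municipality to a facility, paying annualized
# capital only for facilities that are actually used.

WASTE = {"Alpha": 40_000, "Beta": 25_000, "Gamma": 15_000}   # t/yr
FACILITIES = {
    # facility: (annualized capital cost/yr, operating cost/t)
    "Plant1": (1_200_000, 35.0),
    "Plant2": (900_000, 42.0),
}
DIST = {("Alpha", "Plant1"): 12, ("Alpha", "Plant2"): 30,
        ("Beta", "Plant1"): 25, ("Beta", "Plant2"): 8,
        ("Gamma", "Plant1"): 40, ("Gamma", "Plant2"): 10}    # km
TRANSPORT = 0.25   # cost per t*km

def plan_cost(assignment):
    used = set(assignment.values())
    cost = sum(FACILITIES[f][0] for f in used)               # capital if used
    for town, fac in assignment.items():
        t = WASTE[town]
        cost += t * FACILITIES[fac][1]                       # treatment
        cost += t * DIST[(town, fac)] * TRANSPORT            # transport
    return cost

towns = sorted(WASTE)
best_plan = min((dict(zip(towns, choice))
                 for choice in product(FACILITIES, repeat=len(towns))),
                key=plan_cost)
```

Here brute-force enumeration suffices; real instances need mathematical programming, but the objective has the same structure.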

  9. A methodology for optimal MSW management, with an application in the waste transportation of Attica Region, Greece

    Energy Technology Data Exchange (ETDEWEB)

    Economopoulou, M.A. [Hellenic Statistical Authority, Pireos 46 and Eponiton, Pireus 185 10 (Greece); Economopoulou, A.A. [Ministry of Environment, Energy and Climatic Change, 15 Amaliados Street, Athens 11523 (Greece); Economopoulos, A.P., E-mail: eco@otenet.gr [Environmental Engineering Dept., Technical University of Crete, Chania 73100 (Greece)

    2013-11-15

Highlights: • A two-step (strategic and detailed optimal planning) methodology is used for solving complex MSW management problems. • A software package is outlined, which can be used for generating detailed optimal plans. • Sensitivity analysis compares alternative scenarios that address objections and/or wishes of local communities. • A case study shows the application of the above procedure in practice and demonstrates the results and benefits obtained. - Abstract: The paper describes a software system capable of formulating alternative optimal Municipal Solid Waste (MSW) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and annual operating cost of all transportation, treatment and final disposal operations involved, taking into consideration the possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study area, municipality and installation levels, constituting, in effect, a sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice, and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper also presents an application of this software system in the capital Region of Attica in Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan. The formulated plan was able to

  10. A two-stage approach for multi-objective decision making with applications to system reliability optimization

    International Nuclear Information System (INIS)

    Li Zhaojun; Liao Haitao; Coit, David W.

    2009-01-01

This paper proposes a two-stage approach for solving multi-objective system reliability optimization problems. In this approach, a Pareto optimal solution set is initially identified at the first stage by applying a multiple objective evolutionary algorithm (MOEA). Quite often there are a large number of Pareto optimal solutions, and it is difficult, if not impossible, to effectively choose the representative solutions for the overall problem. To overcome this challenge, an integrated multiple objective selection optimization (MOSO) method is utilized at the second stage. Specifically, a self-organizing map (SOM), with the capability of preserving the topology of the data, is applied first to classify the Pareto optimal solutions into several clusters with similar properties. Then, within each cluster, data envelopment analysis (DEA) is performed by comparing the relative efficiency of the solutions, to determine the final representative solutions for the overall problem. Through this sequential solution identification and pruning process, the final recommended solutions to the multi-objective system reliability optimization problem can be determined in a more systematic and meaningful way.
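A minimal sketch of the two-stage idea on a 2-objective Pareto set (reliability to maximize, cost to minimize): a tiny 1-D self-organizing map groups the solutions, then one representative per cluster is picked using a simple reliability/cost ratio as a crude stand-in for the paper's DEA efficiency scoring. The Pareto points and all parameters are synthetic.

```python
import random

# Synthetic Pareto front: (reliability, cost) pairs.
PARETO = [(0.90, 10), (0.92, 12), (0.93, 15), (0.95, 20),
          (0.96, 26), (0.97, 33), (0.98, 45), (0.99, 70)]

def train_som(data, n_nodes=3, epochs=200, seed=0):
    # 1-D SOM: nodes live on a line; the winner and its grid neighbours
    # are pulled toward each presented sample.
    rng = random.Random(seed)
    nodes = [list(rng.choice(data)) for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = 0.5 * (1.0 - epoch / epochs)        # decaying learning rate
        for x in data:
            bmu = min(range(n_nodes),            # best matching unit
                      key=lambda k: (nodes[k][0] - x[0]) ** 2
                                    + (nodes[k][1] - x[1]) ** 2)
            for k in range(n_nodes):
                h = lr * max(0.0, 1.0 - abs(k - bmu) / 2.0)  # neighbourhood
                nodes[k][0] += h * (x[0] - nodes[k][0])
                nodes[k][1] += h * (x[1] - nodes[k][1])
    return nodes

nodes = train_som(PARETO)

# Stage 2a: assign each Pareto solution to its nearest SOM node (cluster).
clusters = {}
for p in PARETO:
    bmu = min(range(len(nodes)),
              key=lambda k: (nodes[k][0] - p[0]) ** 2
                            + (nodes[k][1] - p[1]) ** 2)
    clusters.setdefault(bmu, []).append(p)

# Stage 2b: per-cluster representative by a simple output/input ratio
# (a placeholder for the DEA efficiency comparison in the paper).
reps = [max(c, key=lambda p: p[0] / p[1]) for c in clusters.values()]
```

The result is a handful of representative trade-off designs instead of the full Pareto set.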

  11. Fractional order Darwinian particle swarm optimization applications and evaluation of an evolutionary algorithm

    CERN Document Server

    Couceiro, Micael

    2015-01-01

    This book examines the bottom-up applicability of swarm intelligence to solving multiple problems, such as curve fitting, image segmentation, and swarm robotics. It compares the capabilities of some of the better-known bio-inspired optimization approaches, especially Particle Swarm Optimization (PSO), Darwinian Particle Swarm Optimization (DPSO) and the recently proposed Fractional Order Darwinian Particle Swarm Optimization (FODPSO), and comprehensively discusses their advantages and disadvantages. Further, it demonstrates the superiority and key advantages of using the FODPSO algorithm, suc
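For the mechanics the book builds on, here is a baseline (non-fractional) PSO minimizing a simple sphere function; DPSO and FODPSO extend exactly this velocity/position update with population selection and fractional-order velocity memory. The coefficient values below are common textbook choices, not the book's specific settings.

```python
import random

# Canonical PSO: each particle is pulled toward its personal best and the
# swarm's global best, with an inertia-weighted velocity.

def pso(dim=3, n_particles=20, iters=200, seed=42):
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)          # sphere: minimum at origin
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso()
```

FODPSO's fractional-order update replaces the plain `w * vel` memory term with a weighted sum over several past velocities.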

  12. Regulation of Dynamical Systems to Optimal Solutions of Semidefinite Programs: Algorithms and Applications to AC Optimal Power Flow

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    2015-07-01

This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.
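The dual-subgradient feedback idea can be sketched on a toy resource-allocation problem: two "inverters" with quadratic costs must jointly meet a power setpoint, the dual variable is updated with a diminishing stepsize, and the resulting reference signals are tracked by first-order device dynamics. Problem data, gains, and the stepsize rule are illustrative; the paper treats SDP relaxations of AC optimal power flow, not this scalar example.

```python
# Toy dual-subgradient controller for: minimize a1*x1^2 + a2*x2^2
# subject to x1 + x2 = P, with device dynamics tracking the references.

A = [1.0, 2.0]        # cost coefficients: f_i(x) = a_i * x^2
P = 3.0               # total power demand: x1 + x2 = P

lam = 0.0             # dual variable (price) on the balance constraint
x_ref = [0.0, 0.0]    # optimizer-side reference signals
x_dyn = [0.0, 0.0]    # physical device states (first-order lag)
for k in range(1, 2001):
    # Primal step: each device minimizes a_i x^2 - lam*x independently.
    x_ref = [lam / (2.0 * a) for a in A]
    # Device dynamics track the references; note the references are
    # updated every step, faster than the devices settle.
    x_dyn = [xd + 0.2 * (xr - xd) for xd, xr in zip(x_dyn, x_ref)]
    # Dual subgradient step on the balance residual, diminishing stepsize.
    lam += (1.0 / k) ** 0.6 * (P - sum(x_dyn))

# Optimum: equal marginal costs give x = (2, 1) and lam* = 4.
```

Despite the controller never waiting for the devices to settle, the diminishing stepsize drives the measured states to the optimal allocation, which is the paper's key point in miniature.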

  13. COMPARISON OF A FIXED-WING AND MULTI-ROTOR UAV FOR ENVIRONMENTAL MAPPING APPLICATIONS: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    M. A. Boon

    2017-08-01

The advent and evolution of Unmanned Aerial Vehicles (UAVs) and photogrammetric techniques have provided the possibility of on-demand high-resolution environmental mapping. Orthoimages and three-dimensional products such as Digital Surface Models (DSMs), derived from UAV imagery, are amongst the most important spatial information tools for environmental planning. The two main types of UAVs on the commercial market are fixed-wing and multi-rotor. Both have their advantages and disadvantages, including their suitability for certain applications. Fixed-wing UAVs normally have longer flight endurance, while multi-rotors provide stable image capture and easy vertical take-off and landing. The objective of this study is therefore to assess the performance of a fixed-wing versus a multi-rotor UAV for environmental mapping applications by conducting a specific case study. The aerial mapping of the Cors-Air model aircraft field, which includes a wetland ecosystem, was undertaken on the same day with a Skywalker fixed-wing UAV and a Raven X8 multi-rotor UAV equipped with similar sensor specifications (digital RGB camera) under the same weather conditions. We compared the derived datasets by applying the DTMs to basic environmental mapping purposes such as slope and contour mapping, and by utilising the orthoimages for the identification of anthropogenic disturbances. The ground spatial resolution obtained was slightly higher for the multi-rotor, probably due to a slower flight speed and more images. The results in terms of the overall precision of the data were noticeably less accurate for the fixed-wing. In contrast, orthoimages derived from the two systems showed only small variations. The multi-rotor imagery provided a better representation of vegetation, although the fixed-wing data were sufficient for the identification of environmental factors such as anthropogenic disturbances.
Differences were observed utilising the respective DTMs

  14. Comparison of a Fixed-Wing and Multi-Rotor Uav for Environmental Mapping Applications: a Case Study

    Science.gov (United States)

    Boon, M. A.; Drijfhout, A. P.; Tesfamichael, S.

    2017-08-01

The advent and evolution of Unmanned Aerial Vehicles (UAVs) and photogrammetric techniques have provided the possibility of on-demand high-resolution environmental mapping. Orthoimages and three-dimensional products such as Digital Surface Models (DSMs), derived from UAV imagery, are amongst the most important spatial information tools for environmental planning. The two main types of UAVs on the commercial market are fixed-wing and multi-rotor. Both have their advantages and disadvantages, including their suitability for certain applications. Fixed-wing UAVs normally have longer flight endurance, while multi-rotors provide stable image capture and easy vertical take-off and landing. The objective of this study is therefore to assess the performance of a fixed-wing versus a multi-rotor UAV for environmental mapping applications by conducting a specific case study. The aerial mapping of the Cors-Air model aircraft field, which includes a wetland ecosystem, was undertaken on the same day with a Skywalker fixed-wing UAV and a Raven X8 multi-rotor UAV equipped with similar sensor specifications (digital RGB camera) under the same weather conditions. We compared the derived datasets by applying the DTMs to basic environmental mapping purposes such as slope and contour mapping, and by utilising the orthoimages for the identification of anthropogenic disturbances. The ground spatial resolution obtained was slightly higher for the multi-rotor, probably due to a slower flight speed and more images. The results in terms of the overall precision of the data were noticeably less accurate for the fixed-wing. In contrast, orthoimages derived from the two systems showed only small variations. The multi-rotor imagery provided a better representation of vegetation, although the fixed-wing data were sufficient for the identification of environmental factors such as anthropogenic disturbances.
Differences were observed utilising the respective DTMs for the mapping

  15. QoS oriented MapReduce Optimization for Hadoop Based BigData Application

    OpenAIRE

    Burhan Ul Islam Khan; Rashidah F. Olanrewaju

    2014-01-01

    International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computa...

  16. Optimal power flow application issues in the Pool paradigm

    International Nuclear Information System (INIS)

    Gross, G.; Bompard, E.

    2004-01-01

This paper focuses on the application of the Optimal Power Flow (OPF) to competitive markets. Since the OPF is a central decision-making tool, its application to the more decentralized decision-making of competitive electricity markets requires considerable care. There are some intrinsic challenges associated with effective OPF application in the competitive environment due to the inherent characteristics of the OPF formulation. Two such characteristics are the flatness of the optimum surface and the consequent continuum associated with the optimum. In addition to these OPF structural characteristics, the level of authority vested in the central decision-making entity has major ramifications. These factors have wide-ranging economic impacts, whose implications are very pronounced because, unlike in the old vertically integrated utility environment, the various market players are affected differently. The effects include price volatility, the financial health of various players and the integrity of the market itself. We apply appropriate metrics to evaluate market efficiency and how the various players fare. We study the impacts of OPF applications in the Pool paradigm, with both the supply and demand sides explicitly modeled, and provide extensive numerical results on systems based on the IEEE 30-bus and 118-bus networks. The results show the variability of nodal prices and the skew possible in different 'optimal' allocations among competing suppliers. Such variability in the results may lead to serious disputes among the players and the central decision-making authority. Directions for future research are discussed. (author)
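The flat-optimum issue can be shown in a tiny dispatch example: when two suppliers offer identical marginal prices, every feasible split of the demand is cost-optimal, so the allocation an OPF solver reports among the tied dispatches is essentially arbitrary even though each supplier's revenue depends heavily on it. The offer and capacity figures are invented.

```python
# Enumerate integer dispatches of a 100 MW demand between two suppliers
# with identical offers and count the cost-optimal ties.

DEMAND = 100          # MW
CAP = 80              # MW capacity of each supplier
PRICE = (30.0, 30.0)  # identical offers per MWh

def dispatch_cost(g1):
    g2 = DEMAND - g1
    if not (0 <= g1 <= CAP and 0 <= g2 <= CAP):
        return float("inf")    # infeasible split
    return PRICE[0] * g1 + PRICE[1] * g2

costs = {g1: dispatch_cost(g1) for g1 in range(DEMAND + 1)}
optimal = min(costs.values())
tied = [g1 for g1, c in costs.items() if c == optimal]
# Every split with both suppliers within capacity achieves the same cost,
# so revenues differ drastically across equally "optimal" dispatches.
```

With these numbers, any split from 20/80 to 80/20 MW costs the same, illustrating the continuum of optima the paper analyzes.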

  17. Cognitive maps and attention.

    Science.gov (United States)

    Hardt, Oliver; Nadel, Lynn

    2009-01-01

    Cognitive map theory suggested that exploring an environment and attending to a stimulus should lead to its integration into an allocentric environmental representation. We here report that directed attention in the form of exploration serves to gather information needed to determine an optimal spatial strategy, given task demands and characteristics of the environment. Attended environmental features may integrate into spatial representations if they meet the requirements of the optimal spatial strategy: when learning involves a cognitive mapping strategy, cues with high codability (e.g., concrete objects) will be incorporated into a map, but cues with low codability (e.g., abstract paintings) will not. However, instructions encouraging map learning can lead to the incorporation of cues with low codability. On the other hand, if spatial learning is not map-based, abstract cues can and will be used to encode locations. Since exploration appears to determine what strategy to apply and whether or not to encode a cue, recognition memory for environmental features is independent of whether or not a cue is part of a spatial representation. In fact, when abstract cues were used in a way that was not map-based, or when they were not used for spatial navigation at all, they were nevertheless recognized as familiar. Thus, the relation between exploratory activity on the one hand and spatial strategy and memory on the other appears more complex than initially suggested by cognitive map theory.

  18. An Interactive Immersive Serious Game Application for Kunyu Quantu World Map

    Science.gov (United States)

    Peng, S.-T.; Hsu, S.-Y.; Hsieh, K.-C.

    2015-08-01

    In recent years, more and more digital technologies and innovative concepts have been applied to museum education. One of the concepts applied is the "serious game." A serious game is not designed for entertainment purposes but allows users to learn the real world's cultural and educational knowledge in a virtual world through game experience. The technologies applied in serious games are identical to those applied in entertainment games. Nowadays, interactive technology applications that consider users' movements and gestures in physical spaces are developing rapidly and are extensively used in entertainment games, such as Kinect-based games. The ability to explore space via Kinect-based games can be incorporated into the design of a serious game. The ancient world map Kunyu Quantu, from the collection of the National Palace Museum, is therefore applied in serious game development. In general, the ancient world map does not only provide geographical information but also contains museum knowledge, making it excellent teaching material for use in games. In the 17th century, it was first used by a missionary as a medium to teach the Kangxi Emperor the latest geographical knowledge and scientific spirit from the West. The map also includes written biological and climate knowledge. Therefore, this research aims to present the design of an interactive and immersive serious-game-based installation developed from the rich content of the Kunyu Quantu world map, and to analyse visitors' experience in terms of real-world cultural knowledge learning and interactive responses.

  19. Boundary maps for $C^*$-crossed products with R with an application to the quantum Hall effect

    OpenAIRE

    Kellendonk, Johannes; Schulz-Baldes, Hermann

    2004-01-01

    The boundary map in K-theory arising from the Wiener-Hopf extension of a crossed product algebra with R is the Connes-Thom isomorphism. In this article the Wiener-Hopf extension is combined with the Heisenberg group algebra to provide an elementary construction of a corresponding map on higher traces (and cyclic cohomology). It then follows directly from a non-commutative Stokes theorem that this map is dual with respect to Connes' pairing of cyclic cohomology with K-theory. As an application, we prove...

  20. Second-generation speed limit map updating applications

    DEFF Research Database (Denmark)

    Tradisauskas, Nerius; Agerholm, Niels; Juhl, Jens

    2011-01-01

    Intelligent Speed Adaptation is an Intelligent Transport System developed to significantly improve road safety by helping car drivers maintain appropriate driving behaviour. The system works in connection with the speed limits on the road network. It is thus essential to keep the speed limit map used in the Intelligent Speed Adaptation scheme updated. The traditional method of updating speed limit maps on the basis of long-interval observations needed to be replaced by a more efficient speed limit updating tool. In a Danish Intelligent Speed Adaptation trial a web-based tool was therefore ... for map updating should preferably be made on the basis of a commercial map provider, such as Google Maps, and that the real challenge is to oblige road authorities to carry out updates.

  1. First application of quantum annealing to IMRT beamlet intensity optimization

    International Nuclear Information System (INIS)

    Nazareth, Daryl P; Spaans, Jason D

    2015-01-01

    Optimization methods are critical to radiation therapy. A new technology, quantum annealing (QA), employs novel hardware and software techniques to address various discrete optimization problems in many fields. We report on the first application of quantum annealing to the process of beamlet intensity optimization for IMRT. We apply recently-developed hardware which natively exploits quantum mechanical effects for improved optimization. The new algorithm, called QA, is most similar to simulated annealing, but relies on natural processes to directly minimize a system’s free energy. A simple quantum system is slowly evolved into a classical system representing the objective function. If the evolution is sufficiently slow, there are probabilistic guarantees that a global minimum will be located. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitations. The beamlet dose matrices were computed using CERR and an objective function was defined based on typical clinical constraints, including dose-volume objectives, which result in a complex non-convex search space. The objective function was discretized and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the simulated annealing (SA) method. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. In this first application of hardware-enabled QA to IMRT optimization, its performance is comparable to Tabu search, but less effective than the SA in terms of final objective function values. However, its speed was 3–4 times faster than the other two methods.
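
    The abstract benchmarks QA against simulated annealing on a discretized objective. For reference, a minimal simulated-annealing loop over a toy discrete search space might look like the following; the objective, neighbour move and cooling schedule are illustrative assumptions, not the paper's clinical setup:

```python
import math
import random

def simulated_annealing(objective, state, neighbor, t0=10.0, cooling=0.95,
                        steps=2000, seed=0):
    """Minimize `objective`, accepting worse moves with prob. exp(-delta/T)."""
    rng = random.Random(seed)
    best = cur = state
    best_val = cur_val = objective(cur)
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        cand_val = objective(cand)
        delta = cand_val - cur_val
        # always accept improvements; accept worse moves with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_val = cand, cand_val
            if cur_val < best_val:
                best, best_val = cur, cur_val
        t *= cooling  # geometric cooling schedule
    return best, best_val

# Toy discretized objective over integer "beamlet weights" in 0..10 (invented).
obj = lambda x: sum((xi - 7) ** 2 for xi in x)
nbr = lambda x, rng: tuple(min(10, max(0, xi + rng.choice((-1, 0, 1)))) for xi in x)
sol, val = simulated_annealing(obj, (0, 0, 0), nbr)
```

    The same loop structure underlies the SA baseline in most such comparisons; only the objective and the neighbourhood move change per application.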

  2. Big Data Components for Business Process Optimization

    Directory of Open Access Journals (Sweden)

    Mircea Raducu TRIFU

    2016-01-01

    Full Text Available In these days, more and more people talk about Big Data, Hadoop, noSQL and so on, but very few technical people have the necessary expertise and knowledge to work with those concepts and technologies. The present issue explains one of the concepts that stands behind two of those keywords: the MapReduce concept. The MapReduce model is the one that makes Big Data and Hadoop so powerful, fast, and diverse for business process optimization. MapReduce is a programming model with an implementation built to process and generate large data sets. In addition, the benefits of integrating Hadoop in the context of Business Intelligence and Data Warehousing applications are presented. The concepts and technologies behind big data let organizations reach a variety of objectives. Like other new information technologies, the most important objective of big data technology is to bring dramatic cost reduction.
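
    The MapReduce model described above (a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group) can be sketched in-process; the word-count example below is illustrative and is not taken from the article:

```python
from collections import defaultdict
from itertools import chain

# Map phase: each document emits (word, 1) pairs.
def map_doc(doc):
    return [(w.lower(), 1) for w in doc.split()]

# Shuffle: group emitted values by key.
def shuffle(pairs):
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

# Reduce phase: aggregate each group (here, sum the counts per word).
def reduce_counts(groups):
    return {k: sum(vs) for k, vs in groups.items()}

docs = ["big data big hadoop", "hadoop map reduce"]
counts = reduce_counts(shuffle(chain.from_iterable(map_doc(d) for d in docs)))
# counts["big"] == 2, counts["hadoop"] == 2
```

    In Hadoop the map and reduce functions run distributed across a cluster and the shuffle happens over the network, but the contract between the phases is the same.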

  3. Application of Topic Map on Knowledge Organization

    Directory of Open Access Journals (Sweden)

    Sou-shan Wu

    2003-06-01

    Full Text Available Knowledge management (KM) has received much attention from both academics and practitioners in the past few years. Following the KM trend, many organizations have built their own knowledge repositories or data warehouses. However, information or knowledge is still scattered everywhere without being properly managed. The rapid growth of the Internet accelerates the creation of unstructured and unclassified information and causes an explosion of information overload. The effort of browsing information through general-purpose search engines turns out to be tedious and painstaking. Hence, an effective technology to solve this information retrieval problem is much needed. The purpose of this research is to explore the application of text mining techniques in organizing knowledge stored in unstructured natural language text documents. Major components of the text mining techniques required for topic maps in particular are presented in detail. Two sets of unstructured documents are utilized to demonstrate the usage of the self-organizing map (SOM) for topic categorization. The first set of documents is a collection of speeches given by Y.C. Wang, Chairman of the Taiwan Plastics Group, and the other is the collection of all laws and regulations related to securities and futures markets in Taiwan. We also apply text mining to these two sets of documents to generate their respective topic maps, thus revealing the differences between organizing explicit and tacit knowledge as well as the difficulties associated with tacit knowledge. [Article content in Chinese]

  4. The Crisis Map of the Czech Republic: the nationwide deployment of an Ushahidi application for disasters.

    Science.gov (United States)

    Pánek, Jiří; Marek, Lukáš; Pászto, Vít; Valůch, Jaroslav

    2017-10-01

    Crisis mapping is a legitimate component of both crisis informatics and disaster risk management. It has become an effective tool for humanitarian workers, especially after the earthquake in Haiti in 2010. Ushahidi is among the many mapping platforms on offer in the growing field of crisis mapping, and involves the application of crowdsourcing to create online and interactive maps of areas in turmoil. This paper presents the Crisis Map of the Czech Republic, which is the first such instrument to be deployed nationwide in Central Europe. It describes the methodologies used in the preparatory work phase and details some practices identified during the creation and actual employment of the map. In addition, the paper assesses its structure and technological architecture, as well as its possible future development. Lastly, it evaluates the utilisation of the Crisis Map during the floods in the Czech Republic in 2013. © 2017 The Author(s). Disasters © Overseas Development Institute, 2017.

  5. An engineering optimization method with application to STOL-aircraft approach and landing trajectories

    Science.gov (United States)

    Jacob, H. G.

    1972-01-01

    An optimization method has been developed that computes the optimal open loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and the application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.

  6. ON THE QUESTION OF THE CONSTRUCTION OF COGNITIVE MAPS FOR DATA MINING

    Directory of Open Access Journals (Sweden)

    Zhilov R. A.

    2016-11-01

    Full Text Available A method of constructing an optimal cognitive map consists in optimizing the input data and the dimensionality of the cognitive map's data structure. The optimization problem arises when the amount of input data is large. The input data are optimized by clustering, using the hierarchical agglomerative method to form the clusters. Cluster analysis allows the data set to be divided into a finite number of homogeneous groups. The structure of the cognitive map is optimized by automatically tuning the mutual influence of the concepts using machine learning methods, in particular neural network training.
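
    The hierarchical agglomerative step the abstract relies on (repeatedly merging the two closest clusters until the desired number remains) can be sketched as follows; the 1-D data and the single-linkage distance are illustrative assumptions:

```python
def agglomerative(points, k):
    """Single-linkage agglomerative clustering of 1-D points into k clusters."""
    clusters = [[p] for p in points]

    def dist(a, b):  # single linkage: distance between the closest members
        return min(abs(x - y) for x in a for y in b)

    while len(clusters) > k:
        # find the two closest clusters and merge them
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

groups = agglomerative([1.0, 1.2, 1.1, 9.0, 9.2], k=2)  # hypothetical input
# groups: [1.0, 1.1, 1.2] and [9.0, 9.2]
```

    Real agglomerative implementations (e.g. in SciPy) offer other linkage criteria (complete, average, Ward) and work on multi-dimensional feature vectors, but the merge loop is the same.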

  7. Control and optimal control theories with applications

    CERN Document Server

    Burghes, D N

    2004-01-01

    This sound introduction to classical and modern control theory concentrates on fundamental concepts. Employing the minimum of mathematical elaboration, it investigates the many applications of control theory to varied and important present-day problems, e.g. economic growth, resource depletion, disease epidemics, exploited population, and rocket trajectories. An original feature is the amount of space devoted to the important and fascinating subject of optimal control. The work is divided into two parts. Part one deals with the control of linear time-continuous systems, using both transfer fun

  8. Maps in video games – range of applications

    Directory of Open Access Journals (Sweden)

    Chądzyńska Dominika

    2015-09-01

    Full Text Available The authors discuss the role of the map in various game genres, specifically video games. Presented examples illustrate widespread map usage in various ways and forms by the authors of games, both classic and video. The article takes a closer look at the classification and development of video games within the last few decades. Presently, video games use advanced geospatial models and data resources. Users are keen on a detailed representation of the real world. Game authors use advanced visualization technologies, which often are innovative and very attractive. Joint efforts of cartographers, geo-information specialists and game producers can bring interesting effects in the future. Although games are mainly made for entertainment, they are more frequently used for other purposes. There is a growing need for data reliability as well as for effective means of transmitting cartographic content. This opens up a new area of both scientific and implementation activity for cartographers. There is no universally accessible data on the role of cartographers in game production, but apparently it is quite limited at the moment. However, a wider application of cartographic methodology would have a positive effect on the development of games and, conversely, methods and technologies applied by game makers can influence the development of cartography.

  9. Personalized 2D color maps

    KAUST Repository

    Waldin, Nicholas; Bernhard, Matthias; Rautek, Peter; Viola, Ivan

    2016-01-01

    In this paper we present a novel method to measure a user's ability to distinguish colors of a two-dimensional color map on a given monitor. We show how to adapt the color map to the user and display to optimally compensate for the measured deficiencies.

  10. iMAR: An Interactive Web-Based Application for Mapping Herbicide Resistant Weeds.

    Directory of Open Access Journals (Sweden)

    Silvia Panozzo

    Full Text Available Herbicides are the major weed control tool in most cropping systems worldwide. However, the high reliance on herbicides has led to environmental issues as well as to the evolution of herbicide-resistant biotypes. Resistance is a major concern in modern agriculture and early detection of resistant biotypes is therefore crucial for its management and prevention. In this context, a timely update of resistant biotypes' distribution is fundamental to devise and implement efficient resistance management strategies. Here we present an innovative web-based application called iMAR (interactive MApping of Resistance) for the mapping of herbicide resistant biotypes. It is based on open source software tools and translates into maps the data reported in the GIRE (Italian herbicide resistance working group) database of herbicide resistance at national level. iMAR allows an automatic, easy and cost-effective updating of the maps and provides two different systems, "static" and "dynamic". In the first one, the user's choices are guided by a hierarchical tree menu, whereas the latter is more flexible and includes multiple choice criteria (type of resistance, weed species, region, cropping systems) that permit customized maps to be created. The generated information can be useful to various stakeholders who are involved in weed resistance management: farmers, advisors, national and local decision makers as well as the agrochemical industry. iMAR is freely available, and the system has the potential to handle large datasets and to be used for other purposes with geographical implications, such as the mapping of invasive plants or pests.

  11. Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM

    OpenAIRE

    Park, Chanoh; Moghadam, Peyman; Kim, Soohwan; Elfes, Alberto; Fookes, Clinton; Sridharan, Sridha

    2017-01-01

    The concept of continuous-time trajectory representation has brought increased accuracy and efficiency to multi-modal sensor fusion in modern SLAM. However, regardless of these advantages, its offline property caused by the requirement of global batch optimization is critically hindering its relevance for real-time and life-long applications. In this paper, we present a dense map-centric SLAM method based on a continuous-time trajectory to cope with this problem. The proposed system locally f...

  12. Optimization of pump parameters for gain flattened Raman fiber amplifiers based on artificial fish school algorithm

    Science.gov (United States)

    Jiang, Hai Ming; Xie, Kang; Wang, Ya Fei

    2011-11-01

    In this work, a novel metaheuristic named artificial fish school algorithm is introduced into the optimization of pump parameters for the design of gain flattened Raman fiber amplifiers for the first time. Artificial fish school algorithm emulates three simple social behaviors of a fish in a school, namely, preying, swarming and following, to optimize a target function. In this algorithm the pump wavelengths and power levels are mapped respectively to the state of a fish in a school, and the gain of a Raman fiber amplifier is mapped to the concentration of a food source for the fish school to search. Application of this algorithm to the design of a C-band gain flattened Raman fiber amplifier leads to an optimized amplifier that produces a flat gain spectrum with 0.63 dB in band ripple for given conditions. This result demonstrates that the artificial fish school algorithm is efficient for the optimization of pump parameters of gain flattened Raman fiber amplifiers.
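
    The behaviours named in the abstract can be caricatured in a few lines. This sketch implements only simplified prey and follow moves on a toy quadratic; all parameter values are assumed for illustration, and the paper's full swarming behaviour and Raman-gain objective are omitted:

```python
import random

def afsa_minimize(f, dim, n_fish=20, visual=1.0, step=0.3, iters=200, seed=1):
    """Toy artificial-fish-school search minimizing f over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    school = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_fish)]

    def move_toward(x, y):  # take a partial random step from x toward y
        return [xi + step * rng.random() * (yi - xi) for xi, yi in zip(x, y)]

    for _ in range(iters):
        for i, x in enumerate(school):
            # follow: move toward the best fish within visual range, if better
            nbrs = [y for y in school
                    if 0 < sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5 <= visual]
            if nbrs:
                best_n = min(nbrs, key=f)
                if f(best_n) < f(x):
                    school[i] = move_toward(x, best_n)
                    continue
            # prey: try a few random points inside the visual range
            for _ in range(5):
                trial = [xi + visual * rng.uniform(-1, 1) for xi in x]
                if f(trial) < f(x):
                    school[i] = move_toward(x, trial)
                    break
    return min(school, key=f)

best = afsa_minimize(lambda x: sum(xi * xi for xi in x), dim=2)
```

    In the paper's setting, a fish's state would be the vector of pump wavelengths and power levels, and f would be derived from the flatness of the simulated Raman gain spectrum.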

  13. Mapping Iterative Medical Imaging Algorithm on Cell Accelerator

    Directory of Open Access Journals (Sweden)

    Meilian Xu

    2011-01-01

    architectures that exploit data parallel applications, medical imaging algorithms such as OS-SART can be studied to produce increased performance. In this paper, we map OS-SART on the Cell Broadband Engine (Cell BE). We effectively use the architectural features of the Cell BE to provide an efficient mapping. The Cell BE consists of one PowerPC processor element (PPE) and eight SIMD coprocessors known as synergistic processor elements (SPEs). The limited memory storage on each of the SPEs makes the mapping challenging. Therefore, we present optimization techniques to efficiently map the algorithm on the Cell BE for improved performance over the CPU version. We compare the performance of our proposed algorithm on the Cell BE to that of the Sun Fire X4600, a shared memory machine. The Cell BE is five times faster than the AMD Opteron dual-core processor. The speedup of the algorithm on the Cell BE increases with the increase in the number of SPEs. We also experiment with various parameters, such as the number of subsets, number of processing elements, and number of DMA transfers between main memory and local memory, that impact the performance of the algorithm.

  14. Global Optimization using Interval Analysis : Interval Optimization for Aerospace Applications

    NARCIS (Netherlands)

    Van Kampen, E.

    2010-01-01

    Optimization is an important element in aerospace related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in

  15. THE APPLICATION OF DIGITAL LINE GRAPHS AND MAP IN THE NETWORK ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    X. Guo

    2012-07-01

    Full Text Available WebGIS is an important research field in GIS. The W3C established the SVG standard, which laid a foundation for WebGIS based on vector data. In China, Digital Line Graphs (DLG) is a significant GIS product and it has been used in many medium and large WebGIS systems. Geographic information portrayal is the common method of DLG visualization. However, the inherent characteristics of geographic information portrayal may lead to a relatively high data production cost, and still the visualization effect is not ideal. We put forward a new product named Digital Line Graphs and Map (DLGM), which consists of DLG and the DLG's cartographic presentation data. It provides visualization data based on cartographic standards. Because DLGM data are produced and managed independently of software and platform, the data can be used in many fields. Network application is one of them. This paper applies DLGM in the network environment. First it reveals the connotation and characteristics of DLGM, then analyses the model by which DLGM organizes and manages DLG and map symbol data. After that, combined with the SVG standard, we put forward DLGM's SVG encoding method without any information loss. Finally we provide a web map system based on a local area network using 1:10000 DLGM data of a certain area. Based on this study, we conclude that DLGM can be used in the network environment, providing high quality DLG and cartographic data for WebGIS.

  16. Aircraft route planning based on digital map pre-treatment

    Directory of Open Access Journals (Sweden)

    Ran ZHEN

    2015-04-01

    Full Text Available Aiming at flight path planning in complex low-altitude airspace, the influence of terrain conditions and surface threats on aircraft flight is studied. Through the analysis of the digital map and static threats, the paper explores a processing method for the digital map and uses Hermite functions to smooth the map, reducing the search range for the optimal trajectory. By designing terrain following, terrain avoidance and threat avoidance behaviours, the safety of the aircraft can be guaranteed. An in-depth analysis of the particle swarm optimization (PSO) algorithm realizes three-dimensional path planning before the aircraft performs a task. Through simulation, the difference between the maps before and after processing is shown, and offline planning of the three-dimensional optimal path is achieved.
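
    Particle swarm optimization, the search method applied in this record, maintains a swarm of candidate solutions that are pulled toward each particle's own best position and the swarm's global best. A minimal 2-D sketch follows; the cost surface is an invented stand-in for the paper's terrain- and threat-based cost, not its actual model:

```python
import random

def pso_minimize(f, dim, n=30, iters=150, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Minimal particle swarm optimization over the box [-10, 10]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                # each particle's best position
    gbest = min(pbest, key=f)[:]               # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Hypothetical cost: squared distance to a waypoint at (3, 4).
cost = lambda p: (p[0] - 3) ** 2 + (p[1] - 4) ** 2
wp = pso_minimize(cost, dim=2)
```

    For path planning, each particle would encode a whole candidate trajectory (e.g. a sequence of waypoints) rather than a single point, with the cost combining terrain clearance, threat exposure and path length.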

  17. Fixed point theorems for mappings satisfying contractive conditions of integral type and applications

    Directory of Open Access Journals (Sweden)

    Kang Shin

    2011-01-01

    Full Text Available In this paper, the existence, uniqueness and iterative approximations of fixed points for contractive mappings of integral type in complete metric spaces are established. As applications, the existence, uniqueness and iterative approximations of solutions for a class of functional equations arising in dynamic programming are discussed. The results presented in this paper extend and improve essentially the results of Branciari (A fixed point theorem for mappings satisfying a general contractive condition of integral type. Int. J. Math. Math. Sci. 29, 531-536, 2002), Kannan (Some results on fixed points. Bull. Calcutta Math. Soc. 60, 71-76, 1968) and several known results. Four concrete examples involving contractive mappings of integral type with uncountably many points are constructed. 2010 Mathematics Subject Classification: 54H25, 47H10, 49L20, 49L99, 90C39
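
    For reference, the contractive condition of integral type due to Branciari, which this paper extends, requires a self-map $f$ of a complete metric space $(X,d)$ to satisfy (standard statement, paraphrased here):

```latex
\int_0^{d(fx,\,fy)} \varphi(t)\,dt \;\le\; c \int_0^{d(x,\,y)} \varphi(t)\,dt
\qquad \text{for all } x, y \in X,
```

    where $c \in [0,1)$ and $\varphi \colon [0,\infty) \to [0,\infty)$ is Lebesgue-integrable, summable on each compact interval, and satisfies $\int_0^{\varepsilon} \varphi(t)\,dt > 0$ for every $\varepsilon > 0$; under these hypotheses $f$ has a unique fixed point. Taking $\varphi \equiv 1$ recovers the classical Banach contraction condition.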

  18. ePRO-MP: A Tool for Profiling and Optimizing Energy and Performance of Mobile Multiprocessor Applications

    Directory of Open Access Journals (Sweden)

    Wonil Choi

    2009-01-01

    Full Text Available For mobile multiprocessor applications, achieving high performance with low energy consumption is a challenging task. In order to help programmers to meet these design requirements, system development tools play an important role. In this paper, we describe one such development tool, ePRO-MP, which profiles and optimizes both the performance and energy consumption of multi-threaded applications running on top of Linux for ARM11 MPCore-based embedded systems. One of the key features of ePRO-MP is that it can accurately estimate the energy consumption of multi-threaded applications without requiring power measurement equipment, using a regression-based energy model. We also describe another key benefit of ePRO-MP, an automatic optimization function, using two example problems. Using the automatic optimization function, ePRO-MP can achieve high performance and low power consumption without programmer intervention. Our experimental results show that ePRO-MP can improve the performance and energy consumption by 6.1% and 4.1%, respectively, over a baseline version for the co-running applications optimization example. For the producer-consumer application optimization example, ePRO-MP improves the performance and energy consumption by 60.5% and 43.3%, respectively, over a baseline version.
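
    A regression-based energy model of the kind ePRO-MP is described as using can be illustrated with a two-counter least-squares fit. The counters, units and coefficients below are invented for the sketch and are not ePRO-MP's actual model:

```python
def fit_energy_model(samples):
    """Least-squares fit of E ~ a*cycles + b*mem_accesses (no intercept),
    solving the 2x2 normal equations directly.

    samples: iterable of (cycles, mem_accesses, measured_energy)."""
    sxx = sum(c * c for c, m, e in samples)
    sxy = sum(c * m for c, m, e in samples)
    syy = sum(m * m for c, m, e in samples)
    sxe = sum(c * e for c, m, e in samples)
    sye = sum(m * e for c, m, e in samples)
    det = sxx * syy - sxy * sxy
    a = (sxe * syy - sye * sxy) / det
    b = (sxx * sye - sxy * sxe) / det
    return a, b

# Synthetic counter readings generated from assumed coefficients
# a = 0.5 nJ/cycle and b = 2.0 nJ/access.
data = [(c, m, 0.5 * c + 2.0 * m)
        for c, m in [(100, 10), (200, 50), (300, 20), (150, 80)]]
a, b = fit_energy_model(data)
# the fit recovers a = 0.5, b = 2.0 exactly on noise-free data
```

    In practice such a model is calibrated once against measured power on training workloads, after which energy is estimated from performance counters alone.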

  19. Development of a reconstruction software of elemental maps by micro X-ray fluorescence

    International Nuclear Information System (INIS)

    Almeida, Andre Pereira de; Braz, Delson; Mota, Carla Lemos; Oliveira, Luis Fernando de; Barroso, Regina Cely; Pinto, Nivia Graciele Villela; Cardoso, Simone Coutinho; Moreira, Silvana

    2009-01-01

    The technique of X-ray fluorescence (XRF) using SR microbeams is a powerful analysis tool for studying elemental composition in several samples. One application of this technique is the analysis done through the mapping of chemical elements forming a matrix of data. The aim of this work is the presentation of the program MapXRF, an in-house software designed to optimize the processing and mapping of fluorescence intensity data. This program uses spectra generated by QXAS as input data and separates the intensities of each chemical element found in the fluorescence spectra into individual files. From these files, the program generates the intensity maps, which can be visualized in any image-processing program. The proposed software was tested using fluorescence data obtained at the XRF beamline of the National Synchrotron Light Laboratory (LNLS), Brazil. Automatic 2D scans were performed and element distribution maps were obtained in the form of a matrix of data. (author)

  20. Development of a reconstruction software of elemental maps by micro X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Andre Pereira de; Braz, Delson; Mota, Carla Lemos, E-mail: apalmeid@gmail.co, E-mail: delson@lin.ufrj.b, E-mail: clemos@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Energia Nuclear; Oliveira, Luis Fernando de; Barroso, Regina Cely; Pinto, Nivia Graciele Villela, E-mail: cely@uerj.b, E-mail: lfolive@uerj.b, E-mail: nitatag@gmail.co [Universidade do Estado do Rio de Janeiro (IF/UERJ), RJ (Brazil). Inst. de Fisica; Cardoso, Simone Coutinho [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica; Moreira, Silvana [Universidade Estadual de Campinas (FEC/UNICAMP), SP (Brazil) Faculdade de Engenharia Civil, Arquitetura e Urbanismo

    2009-07-01

    The technique of X-ray fluorescence (XRF) using SR microbeams is a powerful analysis tool for studying elemental composition in several samples. One application of this technique is the analysis done through the mapping of chemical elements forming a matrix of data. The aim of this work is the presentation of the program MapXRF, an in-house software designed to optimize the processing and mapping of fluorescence intensity data. This program uses spectra generated by QXAS as input data and separates the intensities of each chemical element found in the fluorescence spectra into individual files. From these files, the program generates the intensity maps, which can be visualized in any image-processing program. The proposed software was tested using fluorescence data obtained at the XRF beamline of the National Synchrotron Light Laboratory (LNLS), Brazil. Automatic 2D scans were performed and element distribution maps were obtained in the form of a matrix of data. (author)

  1. A MAP MASH-UP APPLICATION: INVESTIGATION THE TEMPORAL EFFECTS OF CLIMATE CHANGE ON SALT LAKE BASIN

    Directory of Open Access Journals (Sweden)

    O. S. Kirtiloglu

    2016-06-01

    Full Text Available The main purpose of this paper is to investigate climate change effects that have occurred at the beginning of the twenty-first century in the Konya Closed Basin (KCB), located in the semi-arid central Anatolian region of Turkey, and particularly in the Salt Lake region, where many major wetlands are located, and to share the analysis results online in a Web Geographical Information System (GIS) environment. 71 Landsat 5-TM, 7-ETM+ and 8-OLI images and meteorological data obtained from 10 meteorological stations have been used within the scope of this work. 56 of the Landsat images have been used for extraction of the Salt Lake surface area through multi-temporal Landsat imagery collected from 2000 to 2014 in the Salt Lake basin. 15 of the Landsat images have been used to make thematic maps of the Normalised Difference Vegetation Index (NDVI) in the KCB, and data from the 10 meteorological stations have been used to generate the Standardized Precipitation Index (SPI), which was used in drought studies. For the purpose of visualizing and sharing the results, a Web GIS-like environment has been established by using Google Maps and its data storage and manipulation product Fusion Tables, which are all Google’s free-of-charge Web service elements. The infrastructure of the web application includes HTML5, CSS3, JavaScript, Google Maps API V3 and Google Fusion Tables API technologies. These technologies make it possible to make effective "Map Mash-Ups" involving an embedded Google Map in a Web page, storing the spatial or tabular data in Fusion Tables and adding this data as a map layer on the embedded map. The analysing process and the map mash-up application are discussed in detail in the main sections of this paper.

  2. a Map Mash-Up Application: Investigation the Temporal Effects of Climate Change on Salt Lake Basin

    Science.gov (United States)

    Kirtiloglu, O. S.; Orhan, O.; Ekercin, S.

    2016-06-01

    The main purpose of this paper is to investigate climate change effects that have occurred at the beginning of the twenty-first century in the Konya Closed Basin (KCB), located in the semi-arid central Anatolian region of Turkey, and particularly in the Salt Lake region, where many major wetlands are located, and to share the analysis results online in a Web Geographical Information System (GIS) environment. 71 Landsat 5-TM, 7-ETM+ and 8-OLI images and meteorological data obtained from 10 meteorological stations have been used within the scope of this work. 56 of the Landsat images have been used for extraction of the Salt Lake surface area through multi-temporal Landsat imagery collected from 2000 to 2014 in the Salt Lake basin. 15 of the Landsat images have been used to make thematic maps of the Normalised Difference Vegetation Index (NDVI) in the KCB, and data from the 10 meteorological stations have been used to generate the Standardized Precipitation Index (SPI), which was used in drought studies. For the purpose of visualizing and sharing the results, a Web GIS-like environment has been established by using Google Maps and its data storage and manipulation product Fusion Tables, which are all Google's free-of-charge Web service elements. The infrastructure of the web application includes HTML5, CSS3, JavaScript, Google Maps API V3 and Google Fusion Tables API technologies. These technologies make it possible to make effective "Map Mash-Ups" involving an embedded Google Map in a Web page, storing the spatial or tabular data in Fusion Tables and adding this data as a map layer on the embedded map. The analysing process and the map mash-up application are discussed in detail in the main sections of this paper.
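
    NDVI, the index mapped in both versions of this record, is computed per pixel from the near-infrared and red reflectances as (NIR - Red) / (NIR + Red). A minimal sketch with made-up reflectance values:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index for paired band values.

    Guards against a zero denominator (NIR + Red == 0) by returning 0.0."""
    return [(n - r) / (n + r) if (n + r) else 0.0 for n, r in zip(nir, red)]

# Hypothetical per-pixel reflectances: NDVI near 1 indicates dense vegetation,
# values near 0 indicate bare ground or water.
values = ndvi([0.5, 0.4, 0.3], [0.1, 0.2, 0.3])
```

    On real Landsat scenes the same formula is applied array-wise to the NIR and red bands (e.g. bands 4 and 3 on Landsat 5-TM); only the band indices differ between sensors.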

  3. A novel linear programming approach to fluence map optimization for intensity modulated radiation therapy treatment planning

    International Nuclear Information System (INIS)

    Romeijn, H Edwin; Ahuja, Ravindra K; Dempsey, James F; Kumar, Arvind; Li, Jonathan G

    2003-01-01

    We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This approach retains the flexibility offered by general convex objective functions while allowing the FMO problem to be formulated as an LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed, while retaining linearity, as an alternative approach to improving dose homogeneity in the target volumes and sparing as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high-quality dose distributions. Implementation of this model has demonstrated excellent results: we found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 min of computational time on a single-processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3 but improved the sparing of critical structures. All cases demonstrated excellent target coverage (>95%), target homogeneity (<10% overdosing and <7% underdosing) and organ sparing using at least one of the two models.

  4. Calibration and Industrial Application of Instrument for Surface Mapping based on AFM

    DEFF Research Database (Denmark)

    Hansen, Hans Nørgaard; Kofod, Niels; De Chiffre, Leonardo

    2002-01-01

    The paper describes the calibration and application of an integrated system for topographic characterisation of fine surfaces on large workpieces. The system, consisting of an atomic force microscope mounted on a coordinate measuring machine, was especially designed for surface mapping, i.e., mea...... consisting of a steel sphere with a polished surface having 3 nm roughness....

  5. Random projections and the optimization of an algorithm for phase retrieval

    International Nuclear Information System (INIS)

    Elser, Veit

    2003-01-01

    Iterative phase retrieval algorithms typically employ projections onto constraint subspaces to recover the unknown phases in the Fourier transform of an image, or, in the case of x-ray crystallography, the electron density of a molecule. For a general class of algorithms, where the basic iteration is specified by the difference map, solutions are associated with fixed points of the map, the attractive character of which determines the effectiveness of the algorithm. The behaviour of the difference map near fixed points is controlled by the relative orientation of the tangent spaces of the two constraint subspaces employed by the map. Since the dimensionalities involved are always large in practical applications, it is appropriate to use random matrix theory ideas to analyse the average-case convergence at fixed points. Optimal values of the γ parameters of the difference map are found which differ somewhat from the values previously obtained on the assumption of orthogonal tangent spaces

  6. Reliability-redundancy optimization by means of a chaotic differential evolution approach

    International Nuclear Information System (INIS)

    Coelho, Leandro dos Santos

    2009-01-01

    The reliability design is related to the performance analysis of many engineering systems. Reliability-redundancy optimization problems involve selecting components with multiple choices and redundancy levels that produce maximum benefit subject to cost, weight, and volume constraints. Classical mathematical methods have failed to handle the nonconvexities and nonsmoothness of these optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have received much attention from researchers due to their ability to find near-globally optimal solutions to reliability-redundancy optimization problems. Evolutionary algorithms (EAs) - paradigms of the evolutionary computation field - are stochastic, robust meta-heuristics useful for solving reliability-redundancy optimization problems. EAs such as genetic algorithms, evolutionary programming, evolution strategies and differential evolution are used to find global or near-global optimal solutions. A differential evolution approach based on chaotic sequences generated by Lozi's map is proposed in this paper for reliability-redundancy optimization problems. The proposed method has a fast convergence rate yet maintains the diversity of the population so as to escape from local optima. An application example in reliability-redundancy optimization, based on the overspeed protection system of a gas turbine, is given to show its usefulness and efficiency. Simulation results show that using deterministic chaotic sequences instead of random sequences is a viable strategy for improving the performance of differential evolution.
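The core idea can be sketched in a few lines: DE's control parameters F and CR are driven by a normalized Lozi-map sequence instead of a pseudo-random generator. This is a minimal illustration under stated assumptions — the sphere function below is a stand-in for the paper's constrained reliability-redundancy objective, and the normalization scheme (min-max over a warmed-up sequence) is an implementation choice, not necessarily the paper's exact one:

```python
import random

def lozi_sequence(n, a=1.7, b=0.5, x=0.1, y=0.1):
    """Generate n values of the chaotic Lozi map, min-max normalized into (0, 1)."""
    xs = []
    for _ in range(n + 100):                     # 100 warm-up iterations
        x, y = 1.0 - a * abs(x) + y, b * x
        xs.append(x)
    xs = xs[100:]
    lo, hi = min(xs), max(xs)
    return [(v - lo) / (hi - lo) * 0.98 + 0.01 for v in xs]  # keep strictly inside (0, 1)

def chaotic_de(f, bounds, pop=20, gens=200, seed=1):
    """DE/rand/1/bin where F and CR vary each generation via the Lozi sequence."""
    rng = random.Random(seed)
    dim = len(bounds)
    chaos = lozi_sequence(2 * gens)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fit = [f(ind) for ind in P]
    for g in range(gens):
        F = 0.4 + 0.5 * chaos[2 * g]             # mutation factor in [0.4, 0.9]
        CR = chaos[2 * g + 1]                    # crossover rate in (0, 1)
        for i in range(pop):
            r1, r2, r3 = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if j == jr or rng.random() < CR:
                    v = P[r1][j] + F * (P[r2][j] - P[r3][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)      # clamp to the feasible box
                else:
                    v = P[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                     # greedy one-to-one selection
                P[i], fit[i] = trial, ft
    best = min(range(pop), key=lambda i: fit[i])
    return P[best], fit[best]

best_x, best_f = chaotic_de(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

The only change from canonical DE is where F and CR come from; the chaotic sequence's ergodicity is what the paper credits for preserving population diversity.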

  7. A Novel Distributed Quantum-Behaved Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yangyang Li

    2017-01-01

    Full Text Available Quantum-behaved particle swarm optimization (QPSO) is an improved version of particle swarm optimization (PSO) and has shown superior performance on many optimization problems. Even so, it cannot cope with every situation: problems are becoming larger and more complex, and most serial optimization algorithms either cannot handle them or require excessive computing cost. Fortunately, as an effective model for big-data problems that demand heavy computation, MapReduce has been widely used in many areas. In this paper, we implement QPSO on the MapReduce model and propose MapReduce quantum-behaved particle swarm optimization (MRQPSO), a parallel and distributed QPSO. Comparisons are made between MRQPSO and QPSO on test problems and nonlinear equation systems. The results show that MRQPSO completes the computing task in less time. Meanwhile, from the viewpoint of optimization performance, MRQPSO outperforms QPSO in many cases.
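For orientation, here is a minimal serial sketch of the QPSO update rule that MRQPSO parallelizes (the MapReduce variant distributes the fitness evaluations across mappers). The test function, population size, and linear contraction-expansion schedule are illustrative assumptions, not the paper's exact configuration:

```python
import math
import random

def qpso(f, bounds, pop=30, iters=300, seed=2):
    """Serial QPSO: particles are attracted to a point between their personal
    best and the global best, with a step scaled by distance to the mean best."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    pbest = [x[:] for x in X]
    pfit = [f(x) for x in X]
    g = min(range(pop), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters             # contraction-expansion coefficient
        mbest = [sum(p[j] for p in pbest) / pop for j in range(dim)]
        for i in range(pop):
            for j in range(dim):
                phi = rng.random()
                u = 1.0 - rng.random()           # u in (0, 1], avoids log(1/0)
                p = phi * pbest[i][j] + (1 - phi) * gbest[j]
                step = beta * abs(mbest[j] - X[i][j]) * math.log(1.0 / u)
                X[i][j] = p + step if rng.random() < 0.5 else p - step
                lo, hi = bounds[j]
                X[i][j] = min(max(X[i][j], lo), hi)
            ft = f(X[i])                         # this evaluation is what MapReduce distributes
            if ft < pfit[i]:
                pbest[i], pfit[i] = X[i][:], ft
                if ft < gfit:
                    gbest, gfit = X[i][:], ft
    return gbest, gfit

best, val = qpso(lambda x: sum(v * v for v in x), [(-10, 10)] * 4)
```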

  8. Design and Optimization of Tube Type Interior Permanent Magnets Generator for Free Piston Applications

    Directory of Open Access Journals (Sweden)

    Serdal ARSLAN

    2017-05-01

    Full Text Available This study presents the design and optimization of a generator for free piston applications. In order to supply the required initial force, an IPM (interior permanent magnet) cavity tube type linear generator was selected. Basic dimensioning of the generator was carried out using analytical equations; dimensioning, analysis and optimization of the generator were then performed in Ansys Maxwell. The effects of the basic design variables (pole step ratio, cavity step ratio, inner-to-outer diameter ratio, primary final length, air gap) on the cogging force were also examined through parametric analyses. Among these variables, the cavity step ratio, inner-to-outer diameter ratio and primary final length were determined optimally by an optimization algorithm and by sequential nonlinear programming, and the two methods were compared on the cogging-force calculation problem. A preliminary application of the linear generator was performed for a free piston application.

  9. Application of Genetic Algorithm and Particle Swarm Optimization techniques for improved image steganography systems

    Directory of Open Access Journals (Sweden)

    Jude Hemanth Duraisamy

    2016-01-01

    Full Text Available Image steganography is an ever-growing computational approach that has found application in many fields. Frequency domain techniques are highly preferred for image steganography applications; however, they have significant drawbacks. In transform based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image, and these coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
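As a hedged illustration of the evolutionary side of such a system, the following is a minimal binary GA (tournament selection, one-point crossover, bit-flip mutation). The onemax fitness is purely a stand-in: in the actual application the bit string would encode a choice of transform coefficients and the fitness would score stego quality (e.g. a PSNR-based measure), which is not reproduced here:

```python
import random

def ga_maximize(fitness, length=24, pop_size=40, gens=120, seed=3):
    """Minimal binary GA: tournament selection, one-point crossover,
    per-bit mutation with rate 1/length. Returns the best final individual."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament of 3
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, length)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            for j in range(length):                      # bit-flip mutation
                if rng.random() < 1.0 / length:
                    child[j] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness: count of set bits ("onemax"); a real stego system would
# score the embedding defined by the selected coefficients instead.
best = ga_maximize(lambda chrom: sum(chrom))
```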

  10. The application of optimization of protection to regulation and operational practice

    International Nuclear Information System (INIS)

    Ilari, O.

    1989-01-01

    Optimization of protection and the problems of its practical application have been of concern for several years to the NEA Committee on Radiation Protection and Public Health. The present paper summarizes the principal conclusions of a meeting on this topic organized by the NEA in March 1988, with the participation of radiation protection, nuclear safety and radioactive waste management experts. From the results of the meeting it appears that there is now an increasingly solid background of knowledge and common understanding of the conceptual aspects of optimization of protection. However, its degree of implementation in regulatory and operational practice is very uneven. The areas of plant design and operation appear the most promising in terms of examples of concrete application, whilst severe reservations exist in the nuclear safety community about the possibility of applying this approach to the prevention of nuclear accidents. There is also consensus that optimization of protection can play only a partial and minor role in decisions concerning the choice of radioactive waste disposal options.

  11. Optimal perturbations for nonlinear systems using graph-based optimal transport

    Science.gov (United States)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
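The Monge-Kantorovich step with quadratic cost can be illustrated on a toy case: when the two measures are uniform over equal-size point sets, an optimal plan is a permutation, so an exhaustive search over permutations suffices. This is only a brute-force sketch for intuition — the paper's actual solver works with a convex graph-based formulation, and the point sets below are made-up examples:

```python
import itertools

def optimal_transport_assignment(src, dst):
    """Brute-force Monge-Kantorovich transport with quadratic cost between two
    equal-size 2D point sets carrying uniform mass. Tractable only for tiny n."""
    n = len(src)
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n)):
        # Average squared Euclidean displacement under this pairing.
        cost = sum((src[i][0] - dst[perm[i]][0]) ** 2 +
                   (src[i][1] - dst[perm[i]][1]) ** 2 for i in range(n)) / n
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(0.1, 0.1), (1.1, 0.0), (0.0, 1.2)]
perm, cost = optimal_transport_assignment(src, dst)
```

Here each source point is matched to its nearby displaced copy, minimizing the mean quadratic transport cost; at scale this combinatorial search is replaced by LP or network-flow solvers.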

  12. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Jiasen, E-mail: ma.jiasen@mayo.edu; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G. [Department of Radiation Oncology, Division of Medical Physics, Mayo Clinic, 200 First Street Southwest, Rochester, Minnesota 55905 (United States)

    2014-12-15

    Purpose: Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. Methods: An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. Results: For relatively large and complex three-field head and neck cases, i.e., >100 000 spots with a target volume of ∼1000 cm{sup 3} and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. Conclusions: An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars.

  13. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    Science.gov (United States)

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼ 1000 cm(3) and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization makes the system easily expandable to robust and multi-criteria optimization.

  14. Automatically Annotated Mapping for Indoor Mobile Robot Applications

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Howard, Thomas J.

    2012-01-01

    This paper presents a new and practical method for mapping and annotating indoor environments for mobile robot use. The method makes use of 2D occupancy grid maps for metric representation, and topology maps to indicate the connectivity of the ‘places of interest’ in the environment. Novel use...... localization and mapping in topology space, and fuses camera and robot pose estimations to build an automatically annotated global topo-metric map. It is developed as a framework for a hospital service robot and tested in a real hospital. Experiments show that the method is capable of producing globally...... consistent, automatically annotated hybrid metric-topological maps that are needed by mobile service robots....

  15. A regularized, model-based approach to phase-based conductivity mapping using MRI.

    Science.gov (United States)

    Ropella, Kathleen M; Noll, Douglas C

    2017-11-01

    To develop a novel regularized, model-based approach to phase-based conductivity mapping that uses structural information to improve the accuracy of conductivity maps. The inverse of the three-dimensional Laplacian operator is used to model the relationship between measured phase maps and the object conductivity in a penalized weighted least-squares optimization problem. Spatial masks based on structural information are incorporated into the problem to preserve data near boundaries. The proposed Inverse Laplacian method was compared against a restricted Gaussian filter in simulation, phantom, and human experiments. The Inverse Laplacian method resulted in lower reconstruction bias and error due to noise in simulations than the Gaussian filter. The Inverse Laplacian method also produced conductivity maps closer to the measured values in a phantom and with reduced noise in the human brain, as compared to the Gaussian filter. The Inverse Laplacian method calculates conductivity maps with less noise and more accurate values near boundaries. Improving the accuracy of conductivity maps is integral to advancing the applications of conductivity mapping. Magn Reson Med 78:2011-2021, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  16. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    Science.gov (United States)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
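One of the efficiency ideas above, parallelizing the objective evaluations within each generation, can be sketched as follows. The "expensive" objective here is a cheap hypothetical stand-in for a Navier-Stokes evaluation (a least-squares misfit against a made-up target shape vector), and the thread pool stands in for the paper's distributed-computer parallelization:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def expensive_objective(x):
    """Stand-in for a CFD-based inverse-design misfit: squared error against a
    hypothetical target parameter vector (a real evaluation would run a solver)."""
    target = [0.5, -1.0, 2.0]
    return sum((a - b) ** 2 for a, b in zip(x, target))

def parallel_de(f, bounds, pop=24, gens=80, F=0.7, CR=0.9, seed=4, workers=4):
    """DE/rand/1/bin where each generation's trial evaluations run concurrently."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        fit = list(pool.map(f, P))
        for _ in range(gens):
            trials = []
            for i in range(pop):
                r1, r2, r3 = rng.sample([j for j in range(pop) if j != i], 3)
                jr = rng.randrange(dim)
                t = [P[r1][j] + F * (P[r2][j] - P[r3][j])
                     if (j == jr or rng.random() < CR) else P[i][j]
                     for j in range(dim)]
                t = [min(max(v, lo), hi) for v, (lo, hi) in zip(t, bounds)]
                trials.append(t)
            tfit = list(pool.map(f, trials))     # the parallel, expensive step
            for i in range(pop):
                if tfit[i] <= fit[i]:
                    P[i], fit[i] = trials[i], tfit[i]
    best = min(range(pop), key=lambda i: fit[i])
    return P[best], fit[best]

best_x, best_f = parallel_de(expensive_objective, [(-5, 5)] * 3)
```

Because DE evaluates a whole generation of trial vectors before selection, the evaluations are embarrassingly parallel, which is what makes the wall-clock savings on distributed hardware possible.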

  17. Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis

    KAUST Repository

    Chikalov, Igor

    2017-10-19

    This paper is devoted to the study of bi-criteria optimization problems for decision trees. We consider different cost functions such as depth, average depth, and number of nodes. We design algorithms that allow us to construct the set of Pareto optimal points (POPs) for a given decision table and the corresponding bi-criteria optimization problem. These algorithms are suitable for the investigation of medium-sized decision tables. We discuss three examples of applications of the created tools: the study of relationships among depth, average depth and number of nodes for decision trees used for corner point detection (such trees are used in computer vision for object tracking), the study of systems of decision rules derived from decision trees, and the comparison of different greedy algorithms for decision tree construction as single- and bi-criteria optimization algorithms.

  18. Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array

    Directory of Open Access Journals (Sweden)

    Lihua Wang

    2014-01-01

    Full Text Available In order to deliver the maximum available power to the load under varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among MPPT schemes, the chaos method has been one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the proposed chaos search, an improved logistic map with better ergodicity is used as the first carrier process. After the current optimal solution is found with a certain guarantee, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method tracks changes quickly and accurately and also yields better optimization results. The proposed method provides a new, efficient way to track the maximum power point of a PV array.
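The two-stage idea can be roughed out as follows, with assumptions stated plainly: the PV power curve is a simplified single-diode stand-in with made-up parameters, and the paper's power-function second carrier is approximated here by a second logistic carrier restricted to the shrunken interval around the stage-1 best:

```python
import math

def pv_power(v, isc=5.0, i0=1e-6, a=2.0):
    """Stand-in single-diode PV curve: P = V * I(V); parameters are illustrative."""
    return v * (isc - i0 * (math.exp(v / a) - 1.0))

def two_stage_chaos_mppt(p, v_max, n1=60, n2=60, shrink=0.1):
    """Stage 1: coarse chaotic search over [0, v_max] with a logistic carrier.
    Stage 2: refined chaotic search on a shrunken interval around the stage-1
    best (the paper uses a power-function carrier here; a logistic carrier is
    substituted for simplicity)."""
    z = 0.37
    best_v, best_p = 0.0, p(0.0)
    for _ in range(n1):
        z = 4.0 * z * (1.0 - z)                  # logistic map, fully chaotic at r = 4
        v = v_max * z
        if p(v) > best_p:
            best_v, best_p = v, p(v)
    lo = max(0.0, best_v - shrink * v_max)       # reduce the search space
    hi = min(v_max, best_v + shrink * v_max)
    z = 0.61
    for _ in range(n2):
        z = 4.0 * z * (1.0 - z)
        v = lo + (hi - lo) * z
        if p(v) > best_p:
            best_v, best_p = v, p(v)
    return best_v, best_p

v_mpp, p_mpp = two_stage_chaos_mppt(pv_power, v_max=30.0)
```

The first stage exploits the ergodicity of the chaotic carrier to cover the whole operating range; the second stage spends its samples only where the maximum power point was bracketed.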

  19. TH-A-19A-12: A GPU-Accelerated and Monte Carlo-Based Intensity Modulated Proton Therapy Optimization System

    Energy Technology Data Exchange (ETDEWEB)

    Ma, J; Wan Chan Tseung, H; Beltran, C [Mayo Clinic, Rochester, MN (United States)

    2014-06-15

    Purpose: To develop a clinically applicable intensity modulated proton therapy (IMPT) optimization system that utilizes more accurate Monte Carlo (MC) dose calculation, rather than analytical dose calculation. Methods: A very fast in-house graphics processing unit (GPU) based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified gradient based optimization method was used to achieve the desired dose volume histograms (DVH). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve the spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that result from maintaining the intrinsic CT resolution and the large number of proton spots. The dose effects were studied particularly in cases with heterogeneous materials in comparison with the commercial treatment planning system (TPS). Results: For a relatively large and complex three-field bi-lateral head and neck case (i.e. >100K spots with a target volume of ∼1000 cc and multiple surrounding critical structures), the optimization together with the initial MC dose influence map calculation can be done in a clinically viable time frame (i.e. less than 15 minutes) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The DVHs of the MC TPS plan compare favorably with those of a commercial treatment planning system. Conclusion: A GPU accelerated and MC-based IMPT optimization system was developed. The dose calculation and plan optimization can be performed in less than 15 minutes on a hardware system costing less than 45,000 dollars. The fast calculation and optimization makes the system easily expandable to robust and multi-criteria optimization. This work was funded in part by a grant from Varian Medical Systems, Inc.

  20. TH-A-19A-12: A GPU-Accelerated and Monte Carlo-Based Intensity Modulated Proton Therapy Optimization System

    International Nuclear Information System (INIS)

    Ma, J; Wan Chan Tseung, H; Beltran, C

    2014-01-01

    Purpose: To develop a clinically applicable intensity modulated proton therapy (IMPT) optimization system that utilizes more accurate Monte Carlo (MC) dose calculation, rather than analytical dose calculation. Methods: A very fast in-house graphics processing unit (GPU) based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified gradient based optimization method was used to achieve the desired dose volume histograms (DVH). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve the spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that result from maintaining the intrinsic CT resolution and the large number of proton spots. The dose effects were studied particularly in cases with heterogeneous materials in comparison with the commercial treatment planning system (TPS). Results: For a relatively large and complex three-field bi-lateral head and neck case (i.e. >100K spots with a target volume of ∼1000 cc and multiple surrounding critical structures), the optimization together with the initial MC dose influence map calculation can be done in a clinically viable time frame (i.e. less than 15 minutes) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The DVHs of the MC TPS plan compare favorably with those of a commercial treatment planning system. Conclusion: A GPU accelerated and MC-based IMPT optimization system was developed. The dose calculation and plan optimization can be performed in less than 15 minutes on a hardware system costing less than 45,000 dollars. The fast calculation and optimization makes the system easily expandable to robust and multi-criteria optimization. This work was funded in part by a grant from Varian Medical Systems, Inc.

  1. The Arctic Observing Viewer: A Web-mapping Application for U.S. Arctic Observing Activities

    Science.gov (United States)

    Kassin, A.; Gaylord, A. G.; Manley, W. F.; Villarreal, S.; Tweedie, C. E.; Cody, R. P.; Copenhaver, W.; Dover, M.; Score, R.; Habermann, T.

    2014-12-01

    Although a great deal of progress has been made with various arctic observing efforts, it can be difficult to assess this progress when so many agencies, organizations, research groups and others are moving so quickly. To help meet the strategic needs of the U.S. SEARCH-AON program and facilitate the development of SAON and related initiatives, the Arctic Observing Viewer (AOV; http://ArcticObservingViewer.org) has been developed. This web mapping application compiles detailed information pertaining to U.S. Arctic Observing efforts. Contributing partners include the U.S. NSF, USGS, ACADIS, ADIwg, AOOS, a2dc, AON, ARMAP, BAID, IASOA, INTERACT, and others. Over 6100 sites are currently in the AOV database, and the application allows users to visualize, navigate, select, run advanced searches, draw, print, and more. AOV is founded on principles of software and data interoperability and includes an emerging "Project" metadata standard, which uses ISO 19115-1 and compatible web services. In the last year, substantial efforts have focused on maintaining and centralizing all database information. To keep up with emerging technologies and demand for the application, the AOV data set has been restructured and centralized within a relational database, and the application front-end has been ported to HTML5, giving mobile users on tablets and cell phones access to the application. Other enhancements include an embedded Apache Solr search platform, which lets users perform advanced searches across the AOV dataset, and a web-based administrative data management system that allows administrators to add, update, and delete data in real time. We encourage all collaborators to use AOV tools and services for their own purposes and to help us extend the impact of our efforts and ensure AOV complements other cyber-resources.
Reinforcing dispersed but interoperable resources in this

  2. Development of a scanning proton microprobe - computer-control, elemental mapping and applications

    International Nuclear Information System (INIS)

    Loevestam, Goeran.

    1989-08-01

    A scanning proton microprobe set-up has been developed at the Pelletron accelerator in Lund. A magnetic beam scanning system and a computer-control system for beam scanning and data acquisition are described. The computer system consists of a VMEbus front-end computer and a μVax-II host computer, interfaced by means of a high-speed data link. The VMEbus computer controls data acquisition, beam charge monitoring and beam scanning, while the more sophisticated work of elemental mapping and spectrum evaluation is left to the μVax-II. The calibration of the set-up is described, as well as several applications. Elemental micro-patterns in tree rings and bark have been investigated by means of elemental mapping and quantitative analysis. Large variations in concentration have been found for several elements within a single tree ring. An external beam set-up has been developed in addition to the proton microprobe set-up. The external beam has been used for the analysis of antique papyrus documents. Using a scanning sample procedure and particle induced X-ray emission (PIXE) analysis, damaged and missing characters of the text could be made visible by means of multivariate statistical data evaluation and elemental mapping. Aspects of elemental mapping by means of scanning μPIXE analysis are also discussed. Spectrum background, target thickness variations and pile-up are shown to influence the structure of elemental maps considerably. In addition, a semi-quantification procedure has been developed. (author)

  3. Meso(topo)climatic maps and mapping

    Directory of Open Access Journals (Sweden)

    Ladislav Plánka

    2007-06-01

    Full Text Available Atmospheric characteristics can be studied from many points of view, most often the temporal and the spatial. The temporal standpoint leads either to the production of various kinds of synoptic and prognostic maps, which present the actual state of the atmosphere over a short interval in the past or the near future, or to the production of climatic maps, which present the long-term weather regime. The spatial standpoint distinguishes map works according to the proportions of the natural phenomena, while the scale of their graphic presentation can differ according to the purpose of each work. The paper analyses methods of mapping and of producing climatic maps that display the long-term regime of chosen atmospheric features. These features are formed in interaction with the land surface, have a direct influence on people and their activities throughout the country, and are at the same time affected by anthropogenic interventions in the landscape.

  4. Reflectance diffuse optical tomography. Its application to human brain mapping

    International Nuclear Information System (INIS)

    Ueda, Yukio; Yamanaka, Takeshi; Yamashita, Daisuke; Suzuki, Toshihiko; Ohmae, Etsuko; Oda, Motoki; Yamashita, Yutaka

    2005-01-01

    We report the successful application of reflectance diffuse optical tomography (DOT) using near-infrared light with the new reconstruction algorithm that we developed to the observation of regional hemodynamic changes in the brain under specific mental tasks. Our results reveal the heterogeneous distribution of oxyhemoglobin and deoxyhemoglobin in the brain, showing complementary images of oxyhemoglobin and deoxyhemoglobin changes in certain regions. We conclude that our reflectance DOT has practical potential for human brain mapping, as well as in the diagnostic imaging of brain diseases. (author)

  5. SiGe HBTs Optimization for Wireless Power Amplifier Applications

    Directory of Open Access Journals (Sweden)

    Pierre-Marie Mans

    2010-01-01

    Full Text Available This paper deals with SiGe HBTs optimization for power amplifier applications dedicated to wireless communications. In this work, we investigate the fT-BVCEO tradeoff by various collector optimization schemes such as epilayer thickness and dopant concentration, and SIC and CAP characteristics. Furthermore, a new trapezoidal base Germanium (Ge profile is proposed. Thanks to this profile, precise control of Ge content at the metallurgical emitter-base junction is obtained. Gain stability is obtained for a wide range of temperatures through tuning the emitter-base junction Ge percent. Finally, a comprehensive investigation of Ge introduction into the collector (backside Ge profile is conducted in order to improve the fT values at high injection levels.

  6. The Logistic Map and the Route to Chaos From The Beginnings to Modern Applications

    CERN Document Server

    Ausloos, Marcel

    2006-01-01

    Pierre-Francois Verhulst, with his seminal work using the logistic map to describe population growth and saturation, paved the way for the many applications of this tool in modern mathematics, physics, chemistry, biology, economics and sociology. Indeed nowadays the logistic map is considered a useful and paradigmatic showcase for the route leading to chaos. This volume gathers contributions from some of the leading specialists in the field to present a state-of-the art view of the many ramifications of the developments initiated by Verhulst over a century ago.
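
    The route to chaos described here can be seen numerically in a few lines of code. The sketch below is an illustration, not material from the volume: it iterates the logistic map x_{n+1} = r·x_n·(1 − x_n) and prints the long-run orbit for three growth rates.

```python
# Iterate the logistic map and inspect the long-run behavior for several
# growth rates r: fixed point -> period-2 cycle -> chaos.
def logistic_orbit(r, x0=0.2, n_transient=500, n_keep=8):
    """Return the last n_keep iterates after discarding transients."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

# r = 2.8: the orbit settles on a single fixed point (saturation).
# r = 3.2: period-2 oscillation; r = 4.0: chaotic, never repeating.
for r in (2.8, 3.2, 4.0):
    print(r, logistic_orbit(r))
```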

  7. Optimization of an X-ray diffraction imaging system for medical and security applications

    International Nuclear Information System (INIS)

    Marticke, Fanny

    2016-01-01

    X-ray diffraction imaging is a powerful noninvasive technique for identifying or characterizing different materials. Compared to traditional techniques using X-ray transmission, it allows the extraction of more material-characteristic information, such as the Bragg peak positions for crystalline materials and the molecular form factor for amorphous materials. The potential of this technique has been recognized by many researchers, and numerous applications such as luggage inspection, nondestructive testing, drug detection and biological tissue characterization have been proposed. The method of energy dispersive X-ray diffraction (EDXRD) is particularly suited to this type of application, as it allows the use of a conventional X-ray tube, the acquisition of the whole spectrum at the same time, and parallelized architectures to inspect an entire object in a reasonable time. The purpose of the present work is to optimize the whole material characterization chain. Optimization comprises two aspects: optimization of the acquisition system and of the data processing. The latter concerns especially the correction of diffraction patterns degraded by the acquisition process. Reconstruction methods are proposed and validated on simulated and experimental spectra. System optimization is carried out using figures of merit such as detective quantum efficiency (DQE), contrast-to-noise ratio (CNR) and receiver operating characteristic (ROC) curves. The first chosen application is XRD-based breast imaging, which aims to distinguish cancerous tissues from healthy tissues. Two non-multiplexed collimation configurations combining EDXRD and ADXRD are proposed after an optimization procedure. A simulation study of the whole system and a breast phantom was carried out to determine the dose required to detect a 4 mm carcinoma nodule. The second application concerns the detection of illicit materials during security checks. The possible benefit of a multiplexed collimation system was examined. (author) [fr

  8. The application of particle swarm optimization to identify gamma spectrum with neural network

    International Nuclear Information System (INIS)

    Shi Dongsheng; Di Yuming; Zhou Chunlin

    2006-01-01

    In the application of neural networks to gamma-spectrum identification, the BP algorithm is easily trapped in a local optimum and converges slowly. Exploiting the global search capability of particle swarm optimization, this paper puts forward a new training algorithm that combines the BP algorithm with particle swarm optimization: the mixed PSO-BP algorithm. Applied to gamma-spectrum identification, the new algorithm overcomes the tendency of BP training to stall in local optima, and the resulting neural network generalizes well, with an identification accuracy of one hundred percent. A practical example shows that the mixed PSO-BP algorithm can be used effectively and reliably to identify gamma spectra. (authors)
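
    The record gives no implementation details, so the sketch below is only a generic illustration of the PSO half of such a scheme: a minimal particle swarm minimizing a stand-in multimodal objective. In the mixed PSO-BP algorithm the objective would be the network's training error, and BP gradient descent would then refine the swarm's best solution. All names and parameter values here are hypothetical.

```python
import math
import random

random.seed(0)  # reproducible illustration

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Global search: every particle is pulled toward its personal best
    and the swarm's global best, discouraging settling in one basin."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f

def objective(x):
    """Stand-in for a network's training error: many local minima, global minimum 0."""
    return sum(xi * xi - 3.0 * math.cos(2.0 * math.pi * xi) + 3.0 for xi in x)

best_x, best_f = pso_minimize(objective, dim=2)
```

    In the full scheme, `best_x` would seed the network weights for a final BP refinement pass, which is where the hybrid avoids BP's local-optimum trap.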

  9. Simplified Occupancy Grid Indoor Mapping Optimized for Low-Cost Robots

    Directory of Open Access Journals (Sweden)

    Javier Garrido

    2013-10-01

    Full Text Available This paper presents a mapping system that is suitable for small mobile robots. An ad hoc algorithm for mapping based on the Occupancy Grid method has been developed. The algorithm includes some simplifications in order to be used with low-cost hardware resources. The proposed mapping system has been built in order to be completely autonomous and unassisted. The proposal has been tested with a mobile robot that uses infrared sensors to measure distances to obstacles and uses an ultrasonic beacon system for localization, besides wheel encoders. Finally, experimental results are presented.
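
    The paper's specific simplifications are not spelled out in the abstract, but the occupancy-grid core such systems build on can be sketched as follows; the log-odds increments, grid size and sensor geometry below are illustrative, not the authors' values.

```python
import math

# Minimal log-odds occupancy grid: cells along a range-sensor ray up to the
# measured distance become "freer", the cell at the distance becomes "more
# occupied". Log-odds 0.0 corresponds to p(occupied) = 0.5 (unknown).
L_FREE, L_OCC = -0.4, 0.85   # illustrative log-odds increments
GRID = 20
grid = [[0.0] * GRID for _ in range(GRID)]

def update_ray(grid, x0, y0, angle, dist, cell_size=1.0):
    """March along the ray in cell-sized steps and update log-odds."""
    steps = int(dist / cell_size)
    for k in range(steps + 1):
        cx = int(x0 + k * cell_size * math.cos(angle))
        cy = int(y0 + k * cell_size * math.sin(angle))
        if not (0 <= cx < GRID and 0 <= cy < GRID):
            return
        grid[cy][cx] += L_OCC if k == steps else L_FREE

def occupancy(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Robot at cell (2, 2) sees an obstacle 5 cells away along the +x axis;
# repeated readings reinforce the belief.
for _ in range(3):
    update_ray(grid, 2, 2, 0.0, 5.0)
print(occupancy(grid[2][7]), occupancy(grid[2][4]))
```

    Additive log-odds updates keep the per-cell state a single float, which is one reason this formulation suits low-cost hardware.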

  10. Optimal neural networks for protein-structure prediction

    International Nuclear Information System (INIS)

    Head-Gordon, T.; Stillinger, F.H.

    1993-01-01

    The successful application of neural-network algorithms for prediction of protein structure is stymied by three problem areas: the sparsity of the database of known protein structures, poorly devised network architectures which make the input-output mapping opaque, and a global optimization problem in the multiple-minima space of the network variables. We present a simplified polypeptide model residing in two dimensions with only two amino-acid types, A and B, which allows the determination of the global energy structure for all possible sequences of pentamer, hexamer, and heptamer lengths. This model simplicity allows us to compile a complete structural database and to devise neural networks that reproduce the tertiary structure of all sequences with absolute accuracy and with the smallest number of network variables. These optimal networks reveal that the three problem areas are convoluted, but that thoughtful network designs can actually deconvolute these detrimental traits to provide network algorithms that genuinely impact the ability of the network to generalize or learn the desired mappings. Furthermore, the two-dimensional polypeptide model shows sufficient chemical complexity so that transfer of neural-network technology to more realistic three-dimensional proteins is evident.

  11. THE USE OF LAPTOP COMPUTERS, TABLETS AND GOOGLE EARTH/GOOGLE MAPS APPLICATIONS DURING GEOGRAPHY CLUB SEMINARS

    Directory of Open Access Journals (Sweden)

    FLORIN GALBIN

    2015-01-01

    Full Text Available In the current study, we aim to investigate the use of Google Earth and Google Maps Applications on tablet and laptop computers. The research was carried out during the Geography Club seminars organized at “Radu Petrescu” High School in the 2013-2014 school year. The research involved 13 students in various gymnasium and high school grades. The activities included: navigation with Google Earth/Maps, image capturing techniques, virtual tours, measuring distances or river lengths, identifying relief forms, and locating geographical components of the environment. In order to retrieve students’ opinions regarding the use of tablets and laptop computers with these two applications, they were asked to respond to a questionnaire after the activities took place. Conclusions revealed that students enjoyed using these applications with laptops and tablets and that the learning process during Geography classes became more interesting.

  12. Optimizing a gap conductance model applicable to VVER-1000 thermal–hydraulic model

    International Nuclear Information System (INIS)

    Rahgoshay, M.; Hashemi-Tilehnoee, M.

    2012-01-01

    Highlights: ► Two known conductance models for application in VVER-1000 thermal–hydraulic code are examined. ► An optimized gap conductance model is developed which can predict the gap conductance in good agreement with FSAR data. ► The licensed thermal–hydraulic code is coupled with the gap conductance model predictor externally. -- Abstract: The modeling of gap conductance for application in VVER-1000 thermal–hydraulic codes is addressed. Two known models, namely the CALZA-BINI and RELAP5 gap conductance models, are examined. By externally linking the gap conductance models to the COBRA-EN thermal–hydraulic code, the acceptable range of each model is specified. The result of each gap conductance model versus linear heat rate has been compared with FSAR data. A linear heat rate of about 9 kW/m is the boundary for the optimization process. Since each gap conductance model has its advantages and limitations, the optimized gap conductance model can predict the gap conductance better than either of the two models individually.

  13. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    Science.gov (United States)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
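
    The registration setting is involved, but the Chambolle-Pock machinery underneath can be shown on a toy problem. The sketch below is illustrative only (the 1D setting, step sizes and the λ value are hand-picked, not the paper's): it applies the primal-dual iteration to 1D total-variation denoising, min_x λ‖Dx‖₁ + ½‖x − b‖², with D the forward-difference operator.

```python
def D(x):                      # forward differences: the operator K
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def Dt(y):                     # adjoint of D (negative divergence)
    n = len(y) + 1
    out = [0.0] * n
    for i, yi in enumerate(y):
        out[i] -= yi
        out[i + 1] += yi
    return out

def tv_denoise(b, lam=1.0, iters=300):
    """Chambolle-Pock iteration for min_x lam*||Dx||_1 + 0.5*||x - b||^2."""
    tau = sigma = 0.4          # tau*sigma*||D||^2 <= 1, since ||D||^2 <= 4
    x, xbar = b[:], b[:]
    y = [0.0] * (len(b) - 1)
    for _ in range(iters):
        # Dual step: ascent, then projection onto [-lam, lam]
        # (the prox of the conjugate of the l1 norm).
        y = [max(-lam, min(lam, yi + sigma * di))
             for yi, di in zip(y, D(xbar))]
        # Primal step: prox of 0.5*||x - b||^2 after a gradient move.
        x_new = [(xi - tau * g + tau * bi) / (1 + tau)
                 for xi, g, bi in zip(x, Dt(y), b)]
        # Extrapolation with theta = 1.
        xbar = [2 * xn - xo for xn, xo in zip(x_new, x)]
        x = x_new
    return x

# Noisy step signal: denoising should flatten the wiggles but keep the jump,
# which is exactly the discontinuity-preserving behavior the paper exploits.
b = [0.1, -0.1, 0.2, 0.0, 5.1, 4.9, 5.2, 5.0]
x = tv_denoise(b, lam=0.3)
```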

  14. Dynamic Signal Strength Mapping and Analysis by Means of Mobile Geographic Information System

    Directory of Open Access Journals (Sweden)

    Kulawiak Marcin

    2017-12-01

    Full Text Available Bluetooth beacons are becoming increasingly popular for various applications such as marketing or indoor navigation. However, designing a proper beacon installation requires knowledge of the possible sources of interference in the target environment. While theoretically beacon signal strength should decay linearly with log distance, on-site measurements usually reveal that noise from objects such as Wi-Fi networks operating in the vicinity significantly alters the expected signal range. The paper presents a novel mobile Geographic Information System for measurement, mapping and local as well as online storage of Bluetooth beacon signal strength in semi-real time. For the purpose of on-site geovisual analysis of the signal, the application integrates a dedicated interpolation algorithm optimized for low-power devices. The paper discusses the performance and quality of the mapping algorithms in several different test environments.
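
    The log-distance decay the authors refer to is the standard path-loss model, RSSI(d) = RSSI(d₀) − 10·n·log₁₀(d/d₀). A sketch (with illustrative reference values and measurements, not the paper's calibration) of predicting RSSI and estimating the path-loss exponent n from distance/RSSI pairs:

```python
import math

def rssi_at(d, rssi_ref=-59.0, n=2.0, d_ref=1.0):
    """Log-distance path loss: RSSI falls linearly with log10 of distance."""
    return rssi_ref - 10.0 * n * math.log10(d / d_ref)

def fit_exponent(samples, rssi_ref=-59.0, d_ref=1.0):
    """Least-squares estimate of the exponent n from (distance, rssi) pairs."""
    num = den = 0.0
    for d, r in samples:
        x = -10.0 * math.log10(d / d_ref)
        num += x * (r - rssi_ref)
        den += x * x
    return num / den

# In free space n is about 2; interference from nearby Wi-Fi shows up as a
# larger fitted exponent, i.e. a shorter-than-expected signal range.
samples = [(1.0, -59.0), (2.0, -66.0), (4.0, -73.5), (8.0, -80.0)]
n_hat = fit_exponent(samples)
```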

  15. Optimization and Characterization of CMOS for Ultra Low Power Applications

    International Nuclear Information System (INIS)

    Ajmal Kafeel, M.; Hasan, M.; Shah Alalm, M; Pable, S.D.

    2015-01-01

    Aggressive voltage scaling into the subthreshold operating region holds great promise for applications with a strict energy budget. However, it has been established that a higher-speed superthreshold device is not suitable for moderate-performance subthreshold circuits. The design constraint for selecting V_th and T_ox is much more flexible for subthreshold circuits at low voltage levels than for superthreshold circuits. In order to obtain better performance from a device under subthreshold conditions, it is necessary to investigate and optimize the process and geometry parameters of a Si MOSFET at the nanometer technology node. This paper calibrates the fabrication process parameters and electrical characteristics for n- and p-MOSFETs with 35 nm physical gate length. Thereafter, the calibrated device for superthreshold application is optimized for better performance under subthreshold conditions using TCAD simulation. The device simulated in this work shows 9.89% improvement in subthreshold slope and 34% advantage in I_on/I_off ratio for the same drive current.

  16. Optimization and Characterization of CMOS for Ultra Low Power Applications

    Directory of Open Access Journals (Sweden)

    Mohd. Ajmal Kafeel

    2015-01-01

    Full Text Available Aggressive voltage scaling into the subthreshold operating region holds great promise for applications with strict energy budget. However, it has been established that higher speed superthreshold device is not suitable for moderate performance subthreshold circuits. The design constraint for selecting Vth and TOX is much more flexible for subthreshold circuits at low voltage level than superthreshold circuits. In order to obtain better performance from a device under subthreshold conditions, it is necessary to investigate and optimize the process and geometry parameters of a Si MOSFET at nanometer technology node. This paper calibrates the fabrication process parameters and electrical characteristics for n- and p-MOSFETs with 35 nm physical gate length. Thereafter, the calibrated device for superthreshold application is optimized for better performance under subthreshold conditions using TCAD simulation. The device simulated in this work shows 9.89% improvement in subthreshold slope and 34% advantage in ION/IOFF ratio for the same drive current.

  17. [Application Progress of Three-dimensional Laser Scanning Technology in Medical Surface Mapping].

    Science.gov (United States)

    Zhang, Yonghong; Hou, He; Han, Yuchuan; Wang, Ning; Zhang, Ying; Zhu, Xianfeng; Wang, Mingshi

    2016-04-01

    The booming three-dimensional laser scanning technology can efficiently and effectively acquire the spatial three-dimensional coordinates of a detected object's surface and reconstruct the image at high speed, with high precision and a large capacity of information. Being non-radiative, non-contact and amenable to visualization, it is increasingly popular in three-dimensional medical surface mapping. This paper reviews the applications and developments of three-dimensional laser scanning technology in the medical field, especially in stomatology, plastic surgery and orthopedics. Furthermore, the paper discusses future application prospects as well as the biomedical engineering problems the technology may encounter.

  18. Boundary maps for C*-crossed products with R with an application to the quantum Hall effect

    CERN Document Server

    Kellendonk, J

    2003-01-01

    The boundary map in K-theory arising from the Wiener-Hopf extension of a crossed product algebra with $\mathbb{R}$ is the Connes-Thom isomorphism. In this article, the Wiener-Hopf extension is combined with the Heisenberg group algebra to provide an elementary construction of a corresponding map in cyclic cohomology. It then follows directly from a non-commutative Stokes theorem that this map is dual with respect to Connes' pairing of cyclic cohomology with K-theory. As an application, we prove equality of quantized bulk and edge conductivities for the integer quantum Hall effect described by continuous magnetic Schrödinger operators.

  19. Optimization of design parameters for bulk micromachined silicon membranes for piezoresistive pressure sensing application

    International Nuclear Information System (INIS)

    Belwanshi, Vinod; Topkar, Anita

    2016-01-01

    A finite element analysis study has been carried out to optimize the design parameters for bulk micro-machined silicon membranes for piezoresistive pressure sensing applications. The design is targeted for measurement of pressure up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk micro-machined silicon membranes in terms of deflection and stress generation has been simulated. Based on the simulation results, optimization of the membrane design parameters in terms of length, width and thickness has been carried out. Subsequent to optimization of membrane geometrical parameters, the dimensions and location of the high stress concentration region for implantation of piezoresistors have been obtained for sensing of pressure using the piezoresistive sensing technique.

  20. Optimization of design parameters for bulk micromachined silicon membranes for piezoresistive pressure sensing application

    Science.gov (United States)

    Belwanshi, Vinod; Topkar, Anita

    2016-05-01

    A finite element analysis study has been carried out to optimize the design parameters for bulk micro-machined silicon membranes for piezoresistive pressure sensing applications. The design is targeted for measurement of pressure up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk micro-machined silicon membranes in terms of deflection and stress generation has been simulated. Based on the simulation results, optimization of the membrane design parameters in terms of length, width and thickness has been carried out. Subsequent to optimization of membrane geometrical parameters, the dimensions and location of the high stress concentration region for implantation of piezoresistors have been obtained for sensing of pressure using the piezoresistive sensing technique.

  1. Note: optical optimization for ultrasensitive photon mapping with submolecular resolution by scanning tunneling microscope induced luminescence.

    Science.gov (United States)

    Chen, L G; Zhang, C; Zhang, R; Zhang, X L; Dong, Z C

    2013-06-01

    We report the development of a custom scanning tunneling microscope equipped with photon collection and detection systems. The optical optimization includes the comprehensive design of aspherical lens for light collimation and condensing, the sophisticated piezo stages for in situ lens adjustment inside ultrahigh vacuum, and the fiber-free coupling of collected photons directly onto the ultrasensitive single-photon detectors. We also demonstrate submolecular photon mapping for the molecular islands of porphyrin on Ag(111) under small tunneling currents down to 10 pA and short exposure time down to 1.2 ms/pixel. A high quantum efficiency up to 10^(-2) was also observed.

  2. Low Cost Multi-Sensor Robot Laser Scanning System and its Accuracy Investigations for Indoor Mapping Application

    Science.gov (United States)

    Chen, C.; Zou, X.; Tian, M.; Li, J.; Wu, W.; Song, Y.; Dai, W.; Yang, B.

    2017-11-01

    In order to address the automation of 3D indoor mapping tasks, a low-cost multi-sensor robot laser scanning system is proposed in this paper. The multi-sensor robot laser scanning system includes a panorama camera, a laser scanner, and an inertial measurement unit, among other sensors, which are calibrated and synchronized together to achieve simultaneous collection of 3D indoor data. Experiments are undertaken in a typical indoor scene and the data generated by the proposed system are compared with ground truth data collected by a TLS scanner, showing an accuracy of 99.2% below 0.25 meter, which demonstrates the applicability and precision of the system in indoor mapping applications.

  3. Electromagnetic Optimization Exploiting Aggressive Space Mapping

    DEFF Research Database (Denmark)

    Bandler, J. W.; Biernacki, R.; Chen, S.

    1995-01-01

    emerges after only six EM simulations with sparse frequency sweeps. Furthermore, less CPU effort is required to optimize the filter than is required by one single detailed frequency sweep. We also extend the SM concept to the parameter extraction phase, overcoming severely misaligned responses induced...

  4. Genome Maps, a new generation genome browser.

    Science.gov (United States)

    Medina, Ignacio; Salavert, Francisco; Sanchez, Rubén; de Maria, Alejandro; Alonso, Roberto; Escobar, Pablo; Bleda, Marta; Dopazo, Joaquín

    2013-07-01

    Genome browsers have gained importance as more genomes and related genomic information become available. However, the increase of information brought about by new generation sequencing technologies is, at the same time, causing a subtle but continuous decrease in the efficiency of conventional genome browsers. Here, we present Genome Maps, a genome browser that implements an innovative model of data transfer and management. The program uses highly efficient technologies from the new HTML5 standard, such as scalable vector graphics, that optimize workloads at both server and client sides and ensure future scalability. Thus, data management and representation are entirely carried out by the browser, without the need of any Java Applet, Flash or other plug-in technology installation. Relevant biological data on genes, transcripts, exons, regulatory features, single-nucleotide polymorphisms, karyotype and so forth, are imported from web services and are available as tracks. In addition, several DAS servers are already included in Genome Maps. As a novelty, this web-based genome browser allows the local upload of huge genomic data files (e.g. VCF or BAM) that can be dynamically visualized in real time at the client side, thus facilitating the management of medical data affected by privacy restrictions. Finally, Genome Maps can easily be integrated in any web application by including only a few lines of code. Genome Maps is an open source collaborative initiative available in the GitHub repository (https://github.com/compbio-bigdata-viz/genome-maps). Genome Maps is available at: http://www.genomemaps.org.

  5. The Enterprise Derivative Application: Flexible Software for Optimizing Manufacturing Processes

    Energy Technology Data Exchange (ETDEWEB)

    Ward, Richard C [ORNL; Allgood, Glenn O [ORNL; Knox, John R [ORNL

    2008-11-01

    The Enterprise Derivative Application (EDA) implements the enterprise-derivative analysis for optimization of an industrial process (Allgood and Manges, 2001). It is a tool to help industry planners choose the most productive way of manufacturing their products while minimizing their cost. Developed in MS Access, the application allows users to input initial data ranging from raw material to variable costs and enables the tracking of specific information as material is passed from one process to another. Energy-derivative analysis is based on calculation of sensitivity parameters. For the specific application to a steel production process these include: the cost to product sensitivity, the product to energy sensitivity, the energy to efficiency sensitivity, and the efficiency to cost sensitivity. Using the EDA, for all processes the user can display a particular sensitivity or all sensitivities can be compared for all processes. Although energy-derivative analysis was originally designed for use by the steel industry, it is flexible enough to be applied to many other industrial processes. Examples of processes where energy-derivative analysis would prove useful are wireless monitoring of processes in the petroleum cracking industry and wireless monitoring of motor failure for determining the optimum time to replace motor parts. One advantage of the MS Access-based application is its flexibility in defining the process flow and establishing the relationships between parent and child process and products resulting from a process. Due to the general design of the program, a process can be anything that occurs over time with resulting output (products). So the application can be easily modified to many different industrial and organizational environments. Another advantage is the flexibility of defining sensitivity parameters. Sensitivities can be determined between all possible variables in the process flow as a function of time. Thus the dynamic development of the

  6. Relaxations to Sparse Optimization Problems and Applications

    Science.gov (United States)

    Skau, Erik West

    Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents. In this case study analysis we

  7. Comparative evaluation of atom mapping algorithms for balanced metabolic reactions: application to Recon 3D.

    Science.gov (United States)

    Preciat Gonzalez, German A; El Assal, Lemmer R P; Noronha, Alberto; Thiele, Ines; Haraldsdóttir, Hulda S; Fleming, Ronan M T

    2017-06-14

    The mechanism of each chemical reaction in a metabolic network can be represented as a set of atom mappings, each of which relates an atom in a substrate metabolite to an atom of the same element in a product metabolite. Genome-scale metabolic network reconstructions typically represent biochemistry at the level of reaction stoichiometry. However, a more detailed representation at the underlying level of atom mappings opens the possibility for a broader range of biological, biomedical and biotechnological applications than with stoichiometry alone. Complete manual acquisition of atom mapping data for a genome-scale metabolic network is a laborious process. However, many algorithms exist to predict atom mappings. How do their predictions compare to each other and to manually curated atom mappings? For more than four thousand metabolic reactions in the latest human metabolic reconstruction, Recon 3D, we compared the atom mappings predicted by six atom mapping algorithms. We also compared these predictions to those obtained by manual curation of atom mappings for over five hundred reactions distributed among all top level Enzyme Commission number classes. Five of the evaluated algorithms had similarly high prediction accuracy of over 91% when compared to manually curated atom mapped reactions. On average, the accuracy of the prediction was highest for reactions catalysed by oxidoreductases and lowest for reactions catalysed by ligases. In addition to prediction accuracy, the algorithms were evaluated on their accessibility, their advanced features, such as the ability to identify equivalent atoms, and their ability to map hydrogen atoms. In addition to prediction accuracy, we found that software accessibility and advanced features were fundamental to the selection of an atom mapping algorithm in practice.

  8. Dynamic mobility applications policy analysis : policy and institutional issues for intelligent network flow optimization (INFLO).

    Science.gov (United States)

    2014-12-01

    The report documents policy considerations for the Intelligent Network Flow Optimization (INFLO) connected vehicle applications : bundle. INFLO aims to optimize network flow on freeways and arterials by informing motorists of existing and impen...

  9. Blocked edges on Eulerian maps and mobiles: application to spanning trees, hard particles and the Ising model

    International Nuclear Information System (INIS)

    Bouttier, J; Francesco, P Di; Guitter, E

    2007-01-01

    We introduce Eulerian maps with blocked edges as a general way to implement statistical matter models on random maps by a modification of intrinsic distances. We show how to code these dressed maps by means of mobiles, i.e. decorated trees with labelled vertices, leading to a closed system of recursion relations for their generating functions. We discuss particular solvable cases in detail, as well as various applications of our method to several statistical systems such as spanning trees on quadrangulations, mutually excluding particles on Eulerian triangulations or the Ising model on quadrangulations

  10. Industrial Application of Topology Optimization for Combined Conductive and Convective Heat Transfer Problems

    DEFF Research Database (Denmark)

    Zhou, Mingdong; Alexandersen, Joe; Sigmund, Ole

    2016-01-01

    This paper presents an industrial application of topology optimization for combined conductive and convective heat transfer problems. The solution is based on a synergy of computer aided design and engineering software tools from Dassault Systemes. The considered physical problem of steady......-state heat transfer under convection is simulated using SIMULIA-Abaqus. A corresponding topology optimization feature is provided by SIMULIA-Tosca. By following a standard workflow of design optimization, the proposed solution is able to accommodate practical design scenarios and results in efficient...

  11. Security Optimization for Distributed Applications Oriented on Very Large Data Sets

    Directory of Open Access Journals (Sweden)

    Mihai DOINEA

    2010-01-01

Full Text Available The paper presents the main characteristics of applications that work with very large data sets and the issues related to their security. The first section addresses the optimization process and how it is approached when dealing with security. The second section describes the concept of very large dataset management, while in the third section the related risks are identified and classified. Finally, a security optimization schema is presented together with a cost-efficiency analysis of its feasibility. Conclusions are drawn and future approaches are identified.

  12. Optimal Network QoS over the Internet of Vehicles for E-Health Applications

    Directory of Open Access Journals (Sweden)

    Di Lin

    2016-01-01

    Full Text Available Wireless technologies are pervasive to support ubiquitous healthcare applications. However, a critical issue of using wireless communications under a healthcare scenario is the electromagnetic interference (EMI caused by RF transmission, and a high level of EMI may lead to a critical malfunction of medical sensors. In consideration of EMI on medical sensors, we study the optimization of quality of service (QoS within the whole Internet of vehicles for E-health and propose a novel model to optimize the QoS by allocating the transmit power of each user. Our results show that the optimal power control policy depends on the objective of optimization problems: a greedy policy is optimal to maximize the summation of QoS of each user, whereas a fair policy is optimal to maximize the product of QoS of each user. Algorithms are taken to derive the optimal policies, and numerical results of optimizing QoS are presented for both objectives and QoS constraints.

  13. Implementation of fast macromolecular proton fraction mapping on 1.5 and 3 Tesla clinical MRI scanners: preliminary experience

    Science.gov (United States)

    Yarnykh, V.; Korostyshevskaya, A.

    2017-08-01

    Macromolecular proton fraction (MPF) is a biophysical parameter describing the amount of macromolecular protons involved into magnetization exchange with water protons in tissues. MPF represents a significant interest as a magnetic resonance imaging (MRI) biomarker of myelin for clinical applications. A recent fast MPF mapping method enabled clinical translation of MPF measurements due to time-efficient acquisition based on the single-point constrained fit algorithm. However, previous MPF mapping applications utilized only 3 Tesla MRI scanners and modified pulse sequences, which are not commonly available. This study aimed to test the feasibility of MPF mapping implementation on a 1.5 Tesla clinical scanner using standard manufacturer’s sequences and compare the performance of this method between 1.5 and 3 Tesla scanners. MPF mapping was implemented on 1.5 and 3 Tesla MRI units of one manufacturer with either optimized custom-written or standard product pulse sequences. Whole-brain three-dimensional MPF maps obtained from a single volunteer were compared between field strengths and implementation options. MPF maps demonstrated similar quality at both field strengths. MPF values in segmented brain tissues and specific anatomic regions appeared in close agreement. This experiment demonstrates the feasibility of fast MPF mapping using standard sequences on 1.5 T and 3 T clinical scanners.

  14. Application of mapped plots for single-owner forest surveys

    Science.gov (United States)

    Paul C. Van Deusen; Francis Roesch

    2009-01-01

Mapped plots are used for the national forest inventory conducted by the U.S. Forest Service. Mapped plots are also useful for single-ownership inventories. Mapped plots can handle boundary overlap and can provide less variable estimates for specified forest conditions. Mapping is a good fit for fixed plot inventories where the fixed area plot is used for both mapping...

  15. Applications of sub-optimality in dynamic programming to location and construction of nuclear fuel processing plant

    International Nuclear Information System (INIS)

    Thiriet, L.; Deledicq, A.

    1968-09-01

First, the advantages of applying Dynamic Programming to optimization and Operational Research problems in chemical industries are recalled, as well as the conditions under which a dynamic program is illustrated by a sequential graph. A new algorithm for the determination of sub-optimal policies in a sequential graph is then developed. Finally, the application of the sub-optimality concept is shown when taking into account the indirect effects related to possible strategies, or in the case of stochastic choices and of problems of the siting of plants... application examples are given. (authors) [fr
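The sequential-graph formulation mentioned in this record lends itself to backward dynamic programming. The following is a minimal Python sketch of the generic textbook recursion, not the authors' sub-optimality algorithm, and the staged costs are hypothetical:

```python
def best_policy(stages):
    """Backward dynamic programming over a sequential (staged) graph.

    stages[k][i][j] is the cost of the arc from node i at stage k
    to node j at stage k+1.  Returns (optimal cost from node 0, path).
    """
    # value[i] = best cost-to-go from node i at the current stage
    value = [0.0] * len(stages[-1][0])
    choice = []
    for costs in reversed(stages):
        new_value, stage_choice = [], []
        for row in costs:
            j = min(range(len(row)), key=lambda j: row[j] + value[j])
            new_value.append(row[j] + value[j])
            stage_choice.append(j)
        value = new_value
        choice.append(stage_choice)
    choice.reverse()
    # Recover the optimal path starting from node 0.
    path, node = [0], 0
    for stage_choice in choice:
        node = stage_choice[node]
        path.append(node)
    return value[0], path

# Stage 0: one source node; stage 1: two nodes; stage 2: two nodes.
stages = [[[1, 4]],           # costs from the source to stage-1 nodes
          [[3, 1], [2, 5]]]   # costs from stage-1 nodes to stage-2 nodes
cost, path = best_policy(stages)
```

Sub-optimal (second-best, third-best, ...) policies of the kind studied in the record would be obtained by keeping more than one cost-to-go per node instead of only the minimum.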

  16. Development and Application of a Tool for Optimizing Composite Matrix Viscoplastic Material Parameters

    Science.gov (United States)

    Murthy, Pappu L. N.; Naghipour Ghezeljeh, Paria; Bednarcyk, Brett A.

    2018-01-01

This document describes a recently developed analysis tool that enhances the resident capabilities of the Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) and its application. MAC/GMC is a composite material and laminate analysis software package developed at NASA Glenn Research Center. The primary focus of the current effort is to provide a graphical user interface (GUI) capability that helps users optimize highly nonlinear viscoplastic constitutive law parameters by fitting experimentally observed/measured stress-strain responses under various thermo-mechanical conditions for braided composites. The tool has been developed utilizing the MATrix LABoratory (MATLAB) (The Mathworks, Inc., Natick, MA) programming language. Illustrative examples shown are for a specific braided composite system wherein the matrix viscoplastic behavior is represented by a constitutive law described by seven parameters. The tool is general enough to fit any number of experimentally observed stress-strain responses of the material. The number of parameters to be optimized, as well as the importance given to each stress-strain response, are user choices. Three different optimization algorithms are included: (1) optimization based on the gradient method, (2) genetic algorithm (GA) based optimization and (3) Particle Swarm Optimization (PSO). The user can mix and match the three algorithms. For example, one can start optimization with either 2 or 3 and then use the optimized solution to further fine-tune with approach 1. The secondary focus of this paper is to demonstrate the application of this tool to optimize/calibrate parameters for a nonlinear viscoplastic matrix to predict stress-strain curves (for constituent and composite levels) at different rates, temperatures and/or loading conditions utilizing the Generalized Method of Cells. After preliminary validation of the tool through comparison with experimental results, a detailed virtual parametric study is

  17. Disparity Map Generation from Illumination Variant Stereo Images Using Efficient Hierarchical Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Viral H. Borisagar

    2014-01-01

    Full Text Available A novel hierarchical stereo matching algorithm is presented which gives disparity map as output from illumination variant stereo pair. Illumination difference between two stereo images can lead to undesirable output. Stereo image pair often experience illumination variations due to many factors like real and practical situation, spatially and temporally separated camera positions, environmental illumination fluctuation, and the change in the strength or position of the light sources. Window matching and dynamic programming techniques are employed for disparity map estimation. Good quality disparity map is obtained with the optimized path. Homomorphic filtering is used as a preprocessing step to lessen illumination variation between the stereo images. Anisotropic diffusion is used to refine disparity map to give high quality disparity map as a final output. The robust performance of the proposed approach is suitable for real life circumstances where there will be always illumination variation between the images. The matching is carried out in a sequence of images representing the same scene, however in different resolutions. The hierarchical approach adopted decreases the computation time of the stereo matching problem. This algorithm can be helpful in applications like robot navigation, extraction of information from aerial surveys, 3D scene reconstruction, and military and security applications. Similarity measure SAD is often sensitive to illumination variation. It produces unacceptable disparity map results for illumination variant left and right images. Experimental results show that our proposed algorithm produces quality disparity maps for both wide range of illumination variant and invariant stereo image pair.
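As a generic illustration of the window-matching step with the SAD similarity measure mentioned in this record (a toy one-scanline sketch, not the paper's hierarchical dynamic-programming algorithm):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity_row(left, right, window=3, max_disp=4):
    """Winner-take-all SAD matching along one rectified scanline.

    For each window in the left image, search the right image up to
    max_disp pixels to the left and keep the shift with the lowest SAD.
    """
    half = window // 2
    disp = []
    for x in range(half, len(left) - half):
        block = left[x - half : x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_disp + 1):
            if x - half - d < 0:
                break  # candidate window would run off the image
            cand = right[x - half - d : x + half + 1 - d]
            cost = sad(block, cand)
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

# Right scanline equals the left one shifted by 2 px (true disparity = 2
# on the textured feature); flat regions are ambiguous for SAD.
left = [10, 10, 80, 90, 70, 10, 10, 10, 10]
right = left[2:] + [10, 10]
disp = disparity_row(left, right)
```

The textured span recovers the true shift, while textureless pixels default to 0, which is exactly the ambiguity (aggravated by illumination variation) that the paper's homomorphic filtering and dynamic-programming optimization are meant to address.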

  18. A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2015-01-01

    Full Text Available Particle swarm optimization (PSO is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO, population topology (as fully connected, von Neumann, ring, star, random, etc., hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization, extensions (to multiobjective, constrained, discrete, and binary optimization, theoretical analysis (parameter selection and tuning, and convergence analysis, and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms. On the other hand, we offered a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey would be beneficial for the researchers studying PSO algorithms.
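The canonical global-best PSO update surveyed here (inertia weight w, cognitive coefficient c1, social coefficient c2) fits in a few lines of Python. This is a minimal illustrative sketch on a sphere test function, not code from the survey:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over [-5, 5]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

The variants covered by the survey (bare-bones, quantum-behaved, chaotic PSO, alternative population topologies) modify or replace this velocity update; the topology determines which neighbors contribute the social term in place of the single global best.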

  19. The application of artificial intelligence in the optimal design of mechanical systems

    Science.gov (United States)

    Poteralski, A.; Szczepanik, M.

    2016-11-01

The paper is devoted to new computational techniques in mechanical optimization where one tries to study, model, analyze and optimize very complex phenomena, for which more precise scientific tools of the past were incapable of giving a low-cost and complete solution. Soft computing methods differ from conventional (hard) computing in that, unlike hard computing, they are tolerant of imprecision, uncertainty, partial truth and approximation. The paper deals with an application of the bio-inspired methods, like the evolutionary algorithms (EA), the artificial immune systems (AIS) and the particle swarm optimizers (PSO), to optimization problems. Structures considered in this work are analyzed by the finite element method (FEM), the boundary element method (BEM) and by the method of fundamental solutions (MFS). The bio-inspired methods are applied to optimize shape, topology and material properties of 2D, 3D and coupled 2D/3D structures, to optimize the thermomechanical structures, to optimize parameters of composite structures modeled by the FEM, to optimize the elastic vibrating systems, to identify the material constants for piezoelectric materials modeled by the BEM and to identify parameters in an acoustics problem modeled by the MFS.

  20. Convergent Cross Mapping: Basic concept, influence of estimation parameters and practical application.

    Science.gov (United States)

    Schiecke, Karin; Pester, Britta; Feucht, Martha; Leistritz, Lutz; Witte, Herbert

    2015-01-01

    In neuroscience, data are typically generated from neural network activity. Complex interactions between measured time series are involved, and nothing or only little is known about the underlying dynamic system. Convergent Cross Mapping (CCM) provides the possibility to investigate nonlinear causal interactions between time series by using nonlinear state space reconstruction. Aim of this study is to investigate the general applicability, and to show potentials and limitation of CCM. Influence of estimation parameters could be demonstrated by means of simulated data, whereas interval-based application of CCM on real data could be adapted for the investigation of interactions between heart rate and specific EEG components of children with temporal lobe epilepsy.

  1. Airfoil optimization for unsteady flows with application to high-lift noise reduction

    Science.gov (United States)

    Rumpfkeil, Markus Peer

The use of steady-state aerodynamic optimization methods in the computational fluid dynamic (CFD) community is fairly well established. In particular, the use of adjoint methods has proven to be very beneficial because their cost is independent of the number of design variables. The application of numerical optimization to airframe-generated noise, however, has not received as much attention, but with the significant quieting of modern engines, airframe noise now competes with engine noise. Optimal control techniques for unsteady flows are needed in order to be able to reduce airframe-generated noise. In this thesis, a general framework is formulated to calculate the gradient of a cost function in a nonlinear unsteady flow environment via the discrete adjoint method. The unsteady optimization algorithm developed in this work utilizes a Newton-Krylov approach: the gradient-based optimizer uses the quasi-Newton method BFGS, Newton's method is applied to the nonlinear flow problem, GMRES is used to solve the resulting linear problem inexactly, and the linear adjoint problem is solved using Bi-CGSTAB. The flow is governed by the unsteady two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized using structured grids and a finite difference approach. The effectiveness of the unsteady optimization algorithm is demonstrated by applying it to several problems of interest including shocktubes, pulses in converging-diverging nozzles, rotating cylinders, transonic buffeting, and an unsteady trailing-edge flow. In order to address radiated far-field noise, an acoustic wave propagation program based on the Ffowcs Williams and Hawkings (FW-H) formulation is implemented and validated.
The general framework is then used to derive the adjoint equations for a novel hybrid URANS/FW-H optimization algorithm in order to be able to optimize the shape of airfoils based on their calculated far

  2. Multivariate constrained shape optimization: Application to extrusion bell shape for pasta production

    Science.gov (United States)

    Sarghini, Fabrizio; De Vivo, Angela; Marra, Francesco

    2017-10-01

Computational science and engineering methods have allowed a major change in the way products and processes are designed, as validated virtual models - capable of simulating physical, chemical and bio changes occurring during production processes - can be realized and used in place of real prototypes and experiments, which are often time- and money-consuming. Among such techniques, Optimal Shape Design (OSD) (Mohammadi & Pironneau, 2004) represents an interesting approach. While most classical numerical simulations consider fixed geometrical configurations, in OSD a certain number of geometrical degrees of freedom is considered as a part of the unknowns: this implies that the geometry is not completely defined, but part of it is allowed to move dynamically in order to minimize or maximize the objective function. The applications of optimal shape design (OSD) are innumerable. For systems governed by partial differential equations, they range from structural mechanics to electromagnetism and fluid mechanics or to a combination of the three. This paper presents one possible application of OSD, namely how an extrusion bell shape for pasta production can be designed by applying a multivariate constrained shape optimization.

  3. USGS US topo maps for Alaska

    Science.gov (United States)

    Anderson, Becci; Fuller, Tracy

    2014-01-01

    In July 2013, the USGS National Geospatial Program began producing new topographic maps for Alaska, providing a new map series for the state known as US Topo. Prior to the start of US Topo map production in Alaska, the most detailed statewide USGS topographic maps were 15-minute 1:63,360-scale maps, with their original production often dating back nearly fifty years. The new 7.5-minute digital maps are created at 1:25,000 map scale, and show greatly increased topographic detail when compared to the older maps. The map scale and data specifications were selected based on significant outreach to various map user groups in Alaska. This multi-year mapping initiative will vastly enhance the base topographic maps for Alaska and is possible because of improvements to key digital map datasets in the state. The new maps and data are beneficial in high priority applications such as safety, planning, research and resource management. New mapping will support science applications throughout the state and provide updated maps for parks, recreation lands and villages.

  4. AN APPLICATION OF MULTICRITERIA OPTIMIZATION TO THE TWO-CARRIER TWO-SPEED PLANETARY GEAR TRAINS

    Directory of Open Access Journals (Sweden)

    Jelena Stefanović-Marinović

    2017-04-01

    Full Text Available The objective of this study is the application of multi-criteria optimization to the two-carrier two-speed planetary gear trains. In order to determine mathematical model of multi-criteria optimization, variables, objective functions and conditions should be determined. The subject of the paper is two-carrier two-speed planetary gears with brakes on single shafts. Apart from the determination of the set of the Pareto optimal solutions, the weighted coefficient method for choosing an optimal solution from this set is also included in the mathematical model.
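The two-step procedure described here (determine the Pareto-optimal set, then pick one solution by the weighted coefficient method) can be sketched as follows. The objective values are hypothetical stand-ins for gear-train criteria, not data from the paper:

```python
def pareto_front(points):
    """Keep points not dominated by any other (minimization in all objectives)."""
    return [p for p in points
            if not any(q != p and all(q[i] <= p[i] for i in range(len(p)))
                       for q in points)]

def weighted_choice(front, weights):
    """Weighted coefficient method: minimize the weighted sum of objectives."""
    return min(front, key=lambda p: sum(w * v for w, v in zip(weights, p)))

# Hypothetical (objective 1, objective 2) values for candidate gear trains.
designs = [(2.0, 5.0), (3.0, 3.0), (4.0, 2.5), (5.0, 1.0), (4.5, 4.0)]
front = pareto_front(designs)            # (4.5, 4.0) is dominated by (3.0, 3.0)
best = weighted_choice(front, (0.7, 0.3))
```

Changing the weight vector expresses a different trade-off between the two objectives and generally selects a different member of the Pareto set.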

  5. Convergence of Hybrid Space Mapping Algorithms

    DEFF Research Database (Denmark)

    Madsen, Kaj; Søndergaard, Jacob

    2004-01-01

may be poor, or the method may even fail to converge to a stationary point. We consider a convex combination of the space mapping technique with a classical optimization technique. The function to be optimized has the form $H \circ f$ where $H: \mathbb{R}^m \mapsto \mathbb{R}$ is convex and $f: \mathbb{R}^n \mapsto \mathbb{R}^m$ is smooth. Experience indicates that the combined method maintains the initial efficiency of the space mapping technique. We prove that the global convergence property of the classical technique is also...

  6. Optimal control penalty finite elements - Applications to integrodifferential equations

    Science.gov (United States)

    Chung, T. J.

    The application of the optimal-control/penalty finite-element method to the solution of integrodifferential equations in radiative-heat-transfer problems (Chung et al.; Chung and Kim, 1982) is discussed and illustrated. The nonself-adjointness of the convective terms in the governing equations is treated by utilizing optimal-control cost functions and employing penalty functions to constrain auxiliary equations which permit the reduction of second-order derivatives to first order. The OCPFE method is applied to combined-mode heat transfer by conduction, convection, and radiation, both without and with scattering and viscous dissipation; the results are presented graphically and compared to those obtained by other methods. The OCPFE method is shown to give good results in cases where standard Galerkin FE fail, and to facilitate the investigation of scattering and dissipation effects.

  7. Scalability of Parallel Scientific Applications on the Cloud

    Directory of Open Access Journals (Sweden)

    Satish Narayana Srirama

    2011-01-01

Full Text Available Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. To study the effects of moving parallel scientific applications onto the cloud, we deployed several benchmark applications like matrix-vector operations and NAS parallel benchmarks, and DOUG (Domain decomposition On Unstructured Grids) on the cloud. DOUG is an open source software package for parallel iterative solution of very large sparse systems of linear equations. The detailed analysis of DOUG on the cloud showed that parallel applications benefit a lot and scale reasonably on the cloud. We could also observe the limitations of the cloud and compare its performance with that of a cluster. However, for efficiently running the scientific applications on the cloud infrastructure, the applications must be reduced to frameworks that can successfully exploit the cloud resources, like the MapReduce framework. Several iterative and embarrassingly parallel algorithms are reduced to the MapReduce model and their performance is measured and analyzed. The analysis showed that Hadoop MapReduce has significant problems with iterative methods, while it suits embarrassingly parallel algorithms well. Scientific computing often uses iterative methods to solve large problems. Thus, for scientific computing on the cloud, this paper raises the necessity for better frameworks or optimizations for MapReduce.
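An embarrassingly parallel computation of the kind that suits the MapReduce model can be expressed as an independent map phase plus an associative reduce. A minimal in-process Python sketch of word counting (illustrative only, not the paper's Hadoop experiments):

```python
from collections import Counter
from functools import reduce

def map_phase(chunk):
    """Map: emit per-word counts for one chunk of text (independent work)."""
    return Counter(chunk.split())

def reduce_phase(a, b):
    """Reduce: merge two partial count tables (associative, so parallelizable)."""
    a.update(b)
    return a

chunks = ["the cloud scales out", "the cloud runs map reduce", "map the work"]
counts = reduce(reduce_phase, (map_phase(c) for c in chunks), Counter())
```

Each chunk is processed with no shared state, which is why such jobs scale well on Hadoop; an iterative solver, by contrast, would need the merged result back before every new map phase, which is the overhead the paper highlights.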

  8. Designing area optimized application-specific network-on-chip architectures while providing hard QoS guarantees.

    Directory of Open Access Journals (Sweden)

    Sajid Gul Khawaja

Full Text Available With the increase in transistor density, the popularity of System on Chip (SoC) designs has increased exponentially. As the communication module for SoC, the Network on Chip (NoC) framework has been adopted as its backbone. In this paper, we propose a methodology for designing area-optimized application-specific NoC while providing hard Quality of Service (QoS) guarantees for real-time flows. The novelty of the proposed system lies in the derivation of a Mixed Integer Linear Programming model which is then used to generate a resource-optimal Network on Chip (NoC) topology and architecture while considering traffic and QoS requirements. We also present the micro-architectural design features used for enabling traffic and latency guarantees and discuss how the solution adapts for dynamic variations in the application traffic. The paper highlights the effectiveness of the proposed method by generating resource-efficient NoC solutions for both industrial and benchmark applications. The area-optimized results are generated in a few seconds by the proposed technique, without resorting to heuristics, even for an application with 48 traffic flows.

  9. A Map/INS/Wi-Fi Integrated System for Indoor Location-Based Service Applications.

    Science.gov (United States)

    Yu, Chunyang; Lan, Haiyu; Gu, Fuqiang; Yu, Fei; El-Sheimy, Naser

    2017-06-02

In this research, a new Map/INS/Wi-Fi integrated system for indoor location-based service (LBS) applications based on a cascaded Particle/Kalman filter framework structure is proposed. Two-dimensional indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values, are integrated for estimating positioning information. The main challenge of this research is how to make effective use of various measurements that complement each other in order to obtain an accurate, continuous, and low-cost position solution without increasing the computational burden of the system. Therefore, to eliminate the cumulative drift caused by low-cost IMU sensor errors, the ubiquitous Wi-Fi signal and non-holonomic constraints are rationally used to correct the IMU-derived navigation solution through the extended Kalman Filter (EKF). Moreover, the map-aiding method and map-matching method are innovatively combined to constrain the primary Wi-Fi/IMU-derived position through an Auxiliary Value Particle Filter (AVPF). Different sources of information are incorporated through a cascaded-structure EKF/AVPF filter algorithm. Indoor tests show that the proposed method can effectively reduce the accumulation of positioning errors of a stand-alone Inertial Navigation System (INS), and provide a stable, continuous and reliable indoor location service.
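The drift-correction idea behind such INS/Wi-Fi fusion is a predict/update cycle. A scalar Kalman filter sketch with hypothetical noise values q and r (one dimension, as a stand-in for the paper's full EKF/AVPF cascade):

```python
def kalman_1d(x, p, u, z, q=0.5, r=4.0):
    """One predict/update cycle of a scalar Kalman filter.

    x, p : position estimate and its variance
    u    : dead-reckoned displacement (IMU-style prediction)
    z    : noisy absolute fix (Wi-Fi-style measurement)
    """
    x, p = x + u, p + q                    # predict: drift grows the variance
    k = p / (p + r)                        # Kalman gain
    return x + k * (z - x), (1 - k) * p    # update: blend in the absolute fix

# Dead reckoning alone drifts (biased step); the filter stays near the truth.
truth, est, p, dead_reckoning = 0.0, 0.0, 1.0, 0.0
for _ in range(20):
    truth += 1.1             # actual displacement per step
    dead_reckoning += 1.0    # biased IMU-only integration
    est, p = kalman_1d(est, p, 1.0, truth)
```

The EKF in the paper plays the same role with a full navigation state (position, velocity, attitude, sensor biases), and the particle filter layer adds the map constraint that a scalar filter cannot express.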

  10. Application of evolution strategy algorithm for optimization of a single-layer sound absorber

    Directory of Open Access Journals (Sweden)

    Morteza Gholamipoor

    2014-12-01

    Full Text Available Depending on different design parameters and limitations, optimization of sound absorbers has always been a challenge in the field of acoustic engineering. Various methods of optimization have evolved in the past decades with innovative method of evolution strategy gaining more attention in the recent years. Based on their simplicity and straightforward mathematical representations, single-layer absorbers have been widely used in both engineering and industrial applications and an optimized design for these absorbers has become vital. In the present study, the method of evolution strategy algorithm is used for optimization of a single-layer absorber at both a particular frequency and an arbitrary frequency band. Results of the optimization have been compared against different methods of genetic algorithm and penalty functions which are proved to be favorable in both effectiveness and accuracy. Finally, a single-layer absorber is optimized in a desired range of frequencies that is the main goal of an industrial and engineering optimization process.
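As a generic illustration of an evolution strategy (not the authors' implementation, and with hypothetical parameter values), a (1+1)-ES with the classic 1/5th success rule looks like this in Python:

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=3):
    """(1+1) evolution strategy minimizing f, with 1/5th-rule step control."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        # Mutate the parent with isotropic Gaussian noise.
        child = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fc = f(child)
        if fc <= fx:
            x, fx = child, fc
            sigma *= 1.22   # success: widen the search radius
        else:
            sigma *= 0.82   # failure: shrink it
    return x, fx

# Toy quadratic objective standing in for the absorber's acoustic cost.
x, val = one_plus_one_es(lambda v: sum(vi * vi for vi in v), [3.0, -2.0])
```

A real absorber design would replace the toy objective with the absorption coefficient over the target frequency band and add bounds on the layer parameters (thickness, flow resistivity, and so on).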

  11. Digi Island: A Serious Game for Teaching and Learning Digital Circuit Optimization

    Science.gov (United States)

    Harper, Michael; Miller, Joseph; Shen, Yuzhong

    2011-01-01

Karnaugh maps, also known as K-maps, are a tool used to optimize or simplify digital logic circuits. A K-map is a graphical display of a logic circuit. K-map optimization is essentially the process of finding a minimum number of maximal aggregations of K-map cells with values of 1 according to a set of rules. The Digi Island is a serious game designed for aiding students to learn K-map optimization. The game takes place on an exotic island (called Digi Island) in the Pacific Ocean. The player is an adventurer to the Digi Island and will transform it into a tourist attraction by developing real estate, such as amusement parks and hotels. The Digi Island game elegantly converts boring 1s and 0s in digital circuits into usable and unusable spaces on a beautiful island and transforms K-map optimization into real estate development, an activity with which many students are familiar and in which they are interested. This paper discusses the design, development, and some preliminary results of the Digi Island game.

  12. The optimal version of Hua's fundamental theorem of geometry of rectangular matrices

    CERN Document Server

    Semrl, Peter

    2014-01-01

Hua's fundamental theorem of geometry of matrices describes the general form of bijective maps on the space of all $m \times n$ matrices over a division ring $\mathbb{D}$ which preserve adjacency in both directions. Motivated by several applications, the author studies a long-standing open problem of possible improvements. There are three natural questions. Can we replace the assumption of preserving adjacency in both directions by the weaker assumption of preserving adjacency in one direction only and still get the same conclusion? Can we relax the bijectivity assumption? Can we obtain an analogous result for maps acting between the spaces of rectangular matrices of different sizes? A division ring is said to be EAS if it is not isomorphic to any proper subring. For matrices over EAS division rings the author solves all three problems simultaneously, thus obtaining the optimal version of Hua's theorem. In the case of general division rings he gets such an optimal result only for square matrices and gives examples ...

  13. Optimization and Control of Bilinear Systems Theory, Algorithms, and Applications

    CERN Document Server

    Pardalos, Panos M

    2008-01-01

    Covers developments in bilinear systems theory Focuses on the control of open physical processes functioning in a non-equilibrium mode Emphasis is on three primary disciplines: modern differential geometry, control of dynamical systems, and optimization theory Includes applications to the fields of quantum and molecular computing, control of physical processes, biophysics, superconducting magnetism, and physical information science

  14. Aerostructural optimization of a morphing wing for airborne wind energy applications

    Science.gov (United States)

    Fasel, U.; Keidel, D.; Molinari, G.; Ermanni, P.

    2017-09-01

Airborne wind energy (AWE) vehicles maximize energy production by constantly operating at extreme wing loading, permitted by high flight speeds. Additionally, the wide range of wind speeds and the presence of flow inhomogeneities and gusts create a complex and demanding flight environment for AWE systems. Adaptation to different flow conditions is normally achieved by conventional wing control surfaces and, in case of ground generator-based systems, by varying the reel-out speed. These control degrees of freedom enable the system to remain within the operational envelope, but cause significant penalties in terms of energy output. A significantly greater adaptability is offered by shape-morphing wings, which have the potential to achieve optimal performance at different flight conditions by tailoring their airfoil shape and lift distribution at different levels along the wingspan. Hence, the application of compliant structures for AWE wings is very promising. Furthermore, active gust load alleviation can be achieved through morphing, which leads to a lower weight and an expanded flight envelope, thus increasing the power production of the AWE system. This work presents a procedure to concurrently optimize the aerodynamic shape, compliant structure, and composite layup of a morphing wing for AWE applications. The morphing concept is based on distributed compliance ribs, actuated by electromechanical linear actuators, guiding the deformation of the flexible yet load-carrying composite skin. The goal of the aerostructural optimization is formulated as a high-level requirement, namely to maximize the average annual power production per wing area of an AWE system by tailoring the shape of the wing, and to extend the flight envelope of the wing by actively alleviating gust loads. The results of the concurrent multidisciplinary optimization show a 50.7% increase of extracted power with respect to a sequentially optimized design, highlighting the benefits of morphing and the

  15. Multi-Modal, Multi-Touch Interaction with Maps in Disaster Management Applications

    Directory of Open Access Journals (Sweden)

    V. Paelke

    2012-07-01

    Full Text Available Multi-touch interaction has become popular in recent years and impressive advances in technology have been demonstrated, with the presentation of digital maps as a common presentation scenario. However, most existing systems are really technology demonstrators and have not been designed with real applications in mind. A critical factor in the management of disaster situations is the access to current and reliable data. New sensors and data acquisition platforms (e.g. satellites, UAVs, mobile sensor networks have improved the supply of spatial data tremendously. However, in many cases this data is not well integrated into current crisis management systems and the capabilities to analyze and use it lag behind sensor capabilities. Therefore, it is essential to develop techniques that allow the effective organization, use and management of heterogeneous data from a wide variety of data sources. Standard user interfaces are not well suited to provide this information to crisis managers. Especially in dynamic situations conventional cartographic displays and mouse based interaction techniques fail to address the need to review a situation rapidly and act on it as a team. The development of novel interaction techniques like multi-touch and tangible interaction in combination with large displays provides a promising base technology to provide crisis managers with an adequate overview of the situation and to share relevant information with other stakeholders in a collaborative setting. However, design expertise on the use of such techniques in interfaces for real-world applications is still very sparse. In this paper we report on interdisciplinary research with a user and application centric focus to establish real-world requirements, to design new multi-modal mapping interfaces, and to validate them in disaster management applications. 
Initial results show that tangible and pen-based interaction are well suited to provide an intuitive and visible way to

  16. Enhanced Map-Matching Algorithm with a Hidden Markov Model for Mobile Phone Positioning

    Directory of Open Access Journals (Sweden)

    An Luo

    2017-10-01

    Full Text Available Numerous map-matching techniques have been developed to improve positioning using Global Positioning System (GPS) data and other sensors. However, most existing map-matching algorithms process GPS data with high sampling rates to achieve a high correct-match rate and broad applicability. This paper introduces a novel map-matching algorithm based on a hidden Markov model (HMM) for GPS positioning and mobile phone positioning with a low sampling rate. The HMM is a statistical model well known for providing solutions to temporal recognition applications such as text and speech recognition. In this work, a hidden Markov chain model was built to establish the map-matching process, using geometric data, the topology matrix of road links in the road network, and a refined quad-tree data structure. The HMM-based map-matching exploits the Viterbi algorithm to find the optimal sequence of road links, which constitute the hidden states of the model. The algorithm is validated on a vehicle trajectory using GPS and mobile phone data. The results show a significant improvement for mobile phone positioning and for GPS data at both high and low sampling rates.
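
The Viterbi step at the core of such HMM-based map-matching can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three candidate road links, the emission probabilities (in practice derived from, e.g., a Gaussian of the GPS-fix-to-link distance) and the topology-derived transition probabilities are all hypothetical.

```python
import math

def viterbi(emission, transition, initial):
    """Most likely sequence of road links (hidden states) for a GPS trace."""
    n = len(initial)
    # work in the log domain to avoid underflow on long traces
    score = [math.log(initial[s]) + math.log(emission[0][s]) for s in range(n)]
    back = []
    for t in range(1, len(emission)):
        new_score, pointers = [], []
        for s in range(n):
            best = max(range(n), key=lambda p: score[p] + math.log(transition[p][s]))
            pointers.append(best)
            new_score.append(score[best] + math.log(transition[best][s])
                             + math.log(emission[t][s]))
        score = new_score
        back.append(pointers)
    path = [max(range(n), key=lambda s: score[s])]
    for pointers in reversed(back):        # backtrack from the best final state
        path.append(pointers[path[-1]])
    return list(reversed(path))

emission = [[0.7, 0.2, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]]   # P(fix t | link s)
transition = [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]]  # from topology
print(viterbi(emission, transition, [1 / 3] * 3))  # → [0, 1, 2]
```

With these toy values the most likely link sequence simply follows the dominant emission at each fix, but the transition term is what lets the real algorithm reject geometrically close yet topologically implausible links.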

  17. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion...

  18. A review of optimization techniques used in the design of fibre composite structures for civil engineering applications

    International Nuclear Information System (INIS)

    Awad, Ziad K.; Aravinthan, Thiru; Zhuge, Yan; Gonzalez, Felipe

    2012-01-01

    Highlights: → We reviewed existing optimization techniques for fibre composite structures. → An improved methodology for design optimization is proposed. → Comparison showed that MRDO is the most suitable. -- Abstract: Fibre composite structures have become the most attractive candidate for civil engineering applications. Fibre reinforced polymer (FRP) composite materials have been used in the rehabilitation and replacement of old, degrading traditional structures and in the construction of new structures. However, the lack of design standards for civil infrastructure limits their structural applications. The majority of existing applications have been designed based on research and guidelines provided by fibre composite manufacturers, or on the designer's experience, with the result that the final structure is generally over-designed. This paper provides a review of the available studies related to the design optimization of fibre composite structures used in civil engineering, such as plates, beams, box beams, sandwich panels, bridge girders, and bridge decks. Various optimization methods are presented and compared, and the importance of using the appropriate optimization technique is discussed. An improved methodology for the design optimization of composite structures, which considers experimental testing, numerical modelling, and design constraints, is proposed.

  19. Optimization methods and applications in honor of Ivan V. Sergienko's 80th birthday

    CERN Document Server

    Pardalos, Panos; Shylo, Volodymyr

    2017-01-01

    Researchers and practitioners in computer science, optimization, operations research and mathematics will find this book useful as it illustrates optimization models and solution methods in discrete, non-differentiable, stochastic, and nonlinear optimization. Contributions from experts in optimization showcase a broad range of applications and topics, including pattern and image recognition, computer vision, robust network design, and process control in nonlinear distributed systems. This book is dedicated to the 80th birthday of Ivan V. Sergienko, who is a member of the National Academy of Sciences (NAS) of Ukraine and the director of the V.M. Glushkov Institute of Cybernetics. His work has had a significant impact on several theoretical and applied aspects of discrete optimization, computational mathematics, systems analysis and mathematical modeling.

  20. Application of Modern Fortran to Spacecraft Trajectory Design and Optimization

    Science.gov (United States)

    Williams, Jacob; Falck, Robert D.; Beekman, Izaak B.

    2018-01-01

    In this paper, applications of the modern Fortran programming language to the field of spacecraft trajectory optimization and design are examined. Modern object-oriented Fortran has many advantages for scientific programming, although many legacy Fortran aerospace codes have not been upgraded to use the newer standards (or have been rewritten in other languages perceived to be more modern). NASA's Copernicus spacecraft trajectory optimization program, originally a combination of Fortran 77 and Fortran 95, has attempted to keep up with modern standards and makes significant use of the new language features. Various algorithms and methods are presented from trajectory tools such as Copernicus, as well as modern Fortran open source libraries and other projects.

  1. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    Science.gov (United States)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
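
The Sandwich refinement can be illustrated on a known convex trade-off curve. This is a simplified sketch, not the paper's algorithm: the curve y = 1/x stands in for the Pareto efficient frontier, and each call to `frontier` stands in for solving one constrained plan-optimization problem. Chords between known Pareto points form a piecewise-linear upper bound, and the next "plan" is generated where that bound deviates most from the curve.

```python
def frontier(x):
    # stand-in for solving one constrained optimization for a Pareto plan
    return 1.0 / x

def refine(xs, rounds):
    xs = sorted(xs)
    for _ in range(rounds):
        gaps = []
        for a, b in zip(xs, xs[1:]):
            mid = 0.5 * (a + b)
            chord = 0.5 * (frontier(a) + frontier(b))  # upper bound at midpoint
            gaps.append((chord - frontier(mid), mid))  # segment uncertainty
        _, new_x = max(gaps)       # refine where the bound is loosest
        xs = sorted(xs + [new_x])
    return xs

pts = refine([1.0, 4.0], rounds=3)
worst = max(0.5 * (frontier(a) + frontier(b)) - frontier(0.5 * (a + b))
            for a, b in zip(pts, pts[1:]))
print(pts, round(worst, 4))
```

Each round shrinks the worst-case gap between the piecewise-linear bound and the true frontier, mirroring how the algorithm reduces the uncertainty in the estimated PEF with every new Pareto optimal plan.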

  3. Interactive Topology Optimization

    DEFF Research Database (Denmark)

    Nobel-Jørgensen, Morten

    Interactivity is the continuous interaction between the user and the application to solve a task. Topology optimization is the optimization of structures in order to improve stiffness or other objectives. The goal of the thesis is to explore how topology optimization can be used in applications...... on theory from human-computer interaction, which is described in Chapter 2, followed by a description of the foundations of topology optimization in Chapter 3. Our applications for topology optimization in 2D and 3D are described in Chapter 4, and a game which trains the human intuition of topology...... optimization is presented in Chapter 5. Topology optimization can also be used as an interactive modeling tool with local control, which is presented in Chapter 6. Finally, Chapter 7 contains a summary of the findings and concludes the dissertation. Most of the presented applications of the thesis are available...

  4. Abstract [NOMA-15: International workshop on nonlinear maps and their applications

    International Nuclear Information System (INIS)

    2016-01-01

    The International Workshop on Nonlinear Maps and their Applications (NOMA) is a series of international conferences. Previous NOMA editions were held in Toulouse (NOMA'07), Urbino (NOMA'09), Évora (NOMA'11) and Zaragoza (NOMA'13). The fifth edition of NOMA was organised and hosted by the School of Electrical and Electronic Engineering of University College Dublin. This workshop brings together researchers from theoretical and application areas (mathematics, physics, engineering and economics) who study nonlinear discrete systems. Nonlinear iterative processes play an important role in physical, biological and social phenomena. Nonlinear mappings can directly model various systems, or can be obtained using numerical methods permitting the solution of nonlinear differential equations. In both cases, the understanding of the specific behaviours and bifurcations of these types of systems is of the greatest interest. This workshop is open to theoretical studies as well as applied ones in the fields of physics, electronics, biology, computational methods, engineering, telecommunications and others. The scientific programme of NOMA'15 included 28 invited and regular lectures, with 12 selected talks published in this special issue. Prof Vassili Gelfreich (University of Warwick), Prof Daniele Fournier-Prunaret (INSA Toulouse), Prof Ricardo Lopez-Ruiz (University of Zaragoza), Prof Sergio Callegari (University of Bologna), Prof Yoshifumi Nishio (Tokushima University) and Dr Elena Blokhina (University College Dublin) served as the editors of NOMA'15 and selected the papers. On behalf of the scientific committee of NOMA, we would like to thank the editors and Eoghan O'Riordan and Panagiotis Giounanlis for their help in preparing this special issue. We are very grateful for the support of University College Dublin and of the School of Electrical and Electronic Engineering. (paper)
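
Perhaps the simplest nonlinear discrete map of the kind studied at NOMA is the logistic map, x_{n+1} = r·x_n·(1 − x_n), which bifurcates from a fixed point through periodic orbits into chaos as r grows. The parameter and starting value below are illustrative.

```python
def iterate_logistic(r, x0, n):
    """Iterate the logistic map n times from x0."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# For r = 2.5 the orbit is attracted to the fixed point 1 - 1/r = 0.6;
# for r = 4.0 the same map is chaotic on [0, 1].
print(round(iterate_logistic(2.5, 0.2, 1000), 6))  # → 0.6
```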

  5. Arctic Research Mapping Application (ARMAP): visualize project-level information for U.S. funded research in the Arctic

    Science.gov (United States)

    Kassin, A.; Cody, R. P.; Barba, M.; Escarzaga, S. M.; Score, R.; Dover, M.; Gaylord, A. G.; Manley, W. F.; Habermann, T.; Tweedie, C. E.

    2015-12-01

    The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who is doing what, when and where in the region) for United States Government funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information. The mapping application includes new reference data layers and an updated ship tracks layer. Visual enhancements are achieved by redeveloping the front-end from Flex to HTML5 and JavaScript, which now provides access to mobile users on tablets and phones. New tools allow users to navigate, select, draw, measure, print, use a time slider, and more. Other additions include a back-end Apache Solr search platform that gives users the capability to perform advanced searches throughout the ARMAP database, and a new query builder interface that provides more intuitive controls for generating complex queries. These improvements have been made to increase awareness of projects funded by numerous entities in the Arctic, enhance coordination of logistics support, help identify geographic gaps in research efforts, and potentially foster more collaboration amongst researchers working in the region. Additionally, ARMAP can be used to demonstrate past, present, and future research efforts supported by the U.S. Government.

  6. Application of energies of optimal frequency bands for fault diagnosis based on modified distance function

    Energy Technology Data Exchange (ETDEWEB)

    Zamanian, Amir Hosein [Southern Methodist University, Dallas (United States); Ohadi, Abdolreza [Amirkabir University of Technology (Tehran Polytechnic), Tehran (Iran, Islamic Republic of)

    2017-06-15

    Low-dimensional relevant feature sets are ideal to avoid extra data mining for classification. The current work investigates the feasibility of utilizing the energies of vibration signals in optimal frequency bands as features for machine fault diagnosis applications. Energies in different frequency bands were derived based on Parseval's theorem. The optimal feature sets were extracted by optimizing the related frequency bands using a genetic algorithm and a modified distance function (MDF). The frequency bands and the number of bands were optimized based on the MDF, which is designed to (a) maximize the distance between the centers of classes, (b) minimize the dispersion of features within each class, and (c) minimize the dimension of the extracted feature sets. Experimental signals from two different gearboxes were used to demonstrate the efficiency of the presented technique. The results show its effectiveness in gear fault diagnosis.
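
The feature-extraction step can be sketched as follows: band energies are read off the DFT, whose squared magnitudes sum, by Parseval's theorem, to the time-domain signal energy. The two-tone test signal and the candidate bands below are synthetic; in the paper the bands themselves are tuned by a genetic algorithm against the MDF.

```python
import cmath, math

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def band_energies(signal, bands, fs):
    """Energy of `signal` inside each (lo, hi) frequency band in Hz."""
    n = len(signal)
    energy = [abs(c) ** 2 / n for c in dft(signal)]  # Parseval normalization
    freqs = [k * fs / n for k in range(n)]
    return [sum(e for f, e in zip(freqs, energy) if lo <= f < hi)
            for lo, hi in bands]

fs = n = 256
# synthetic vibration signal: a 10 Hz tone plus a weaker 60 Hz tone
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.5 * math.sin(2 * math.pi * 60 * t / fs)
       for t in range(n)]
e10, e60 = band_energies(sig, [(5, 15), (50, 70)], fs)
print(round(e10, 3), round(e60, 3))  # ≈ 64.0 and 16.0 (one-sided tone energies)
```

Each band energy is one feature; a low-dimensional feature vector then consists of the energies of the few bands the optimizer retains.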

  7. Mapping in the cloud

    CERN Document Server

    Peterson, Michael P

    2014-01-01

    This engaging text provides a solid introduction to mapmaking in the era of cloud computing. It takes students through both the concepts and technology of modern cartography, geographic information systems (GIS), and Web-based mapping. Conceptual chapters delve into the meaning of maps and how they are developed, covering such topics as map layers, GIS tools, mobile mapping, and map animation. Methods chapters take a learn-by-doing approach to help students master application programming interfaces and build other technical skills for creating maps and making them available on the Internet.

  8. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China.

    Science.gov (United States)

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-05-11

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is first used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model achieves an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than that of the traditional SVM-based models. In addition, the landslide susceptibility map obtained by our model demonstrates a strong correlation between the classified very-high-susceptibility zone and the previously investigated landslides.

  9. Optimal refueling principle of research and test reactors and its application

    International Nuclear Information System (INIS)

    Peng Feng; Sun Shouhua; Bu Yongxi

    1993-01-01

    Based on the basic formula for core refueling, an optimal refueling principle for cores with fuel assemblies of different burnup is suggested, and some conclusions derived from this principle are given. Calculation formulae for different refueling schemes and a computation programme are derived and applied to typical HFETR core loadings with different refueling schemes. Using the suggested core fuel consumption index, the core fuel management of 24 cycles over 10 years of HFETR operation was analyzed. The results show that applying the optimal refueling principle can greatly reduce fuel consumption. The direction of HFETR core fuel management research is also discussed.

  10. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set

    Directory of Open Access Journals (Sweden)

    Jinshui Zhang

    2017-04-01

    Full Text Available This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set includes target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructs a tightened hypersphere because of the compact constraint imposed by the outlier pixels, which are located neighbouring the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the trade-off coefficient (C) and the kernel width (s), for mapping homogeneous specific land cover.
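
The geometric idea behind SVDD is to enclose the target class in the smallest hypersphere and reject pixels that fall outside it. The sketch below is a rough stand-in: the real SVDD solves a quadratic program with the trade-off coefficient C and kernel width s, both omitted here, while this version uses Ritter's approximate minimum enclosing ball on hypothetical 2-D "pixel" coordinates.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def enclosing_ball(points):
    """Ritter's approximate minimum enclosing ball in 2-D."""
    y = max(points, key=lambda p: dist(points[0], p))
    z = max(points, key=lambda p: dist(y, p))
    center = [(y[0] + z[0]) / 2, (y[1] + z[1]) / 2]
    radius = dist(y, z) / 2
    for p in points:                 # grow the ball over any stragglers
        d = dist(center, p)
        if d > radius:
            radius = (radius + d) / 2
            shift = (d - radius) / d
            center = [c + (q - c) * shift for c, q in zip(center, p)]
    return center, radius

def is_target(p, center, radius):
    """SVDD-style decision: inside the hypersphere = target class."""
    return dist(p, center) <= radius

c, r = enclosing_ball([(0, 0), (1, 0), (0, 1), (1, 1)])
print(is_target((0.5, 0.5), c, r), is_target((3, 3), c, r))  # → True False
```

In WVS-SVDD the window-derived outlier pixels effectively tighten this sphere, which is why they produce a more compact description of the target class.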

  11. T2* mapping for articular cartilage assessment: principles, current applications, and future prospects

    Energy Technology Data Exchange (ETDEWEB)

    Hesper, Tobias; Bittersohl, Daniela; Krauspe, Ruediger; Zilkens, Christoph [University Duesseldorf, Department of Orthopaedics Medical Faculty, Duesseldorf (Germany); Hosalkar, Harish S. [Center of Hip Preservation and Children' s Orthopaedics, San Diego, CA (United States); Welsch, Goetz H. [Medical University of Vienna, MR Center, Department of Radiology, Vienna (Austria); Bittersohl, Bernd [University Duesseldorf, Department of Orthopaedics Medical Faculty, Duesseldorf (Germany); Heinrich-Heine University, Medical School, Department of Orthopaedics, Duesseldorf (Germany)

    2014-10-15

    With advances in joint preservation surgery that are intended to alter the course of osteoarthritis by early intervention, accurate and reliable assessment of the cartilage status is critical. Biochemically sensitive MRI techniques can add robust biomarkers for disease onset and progression, and therefore, could be meaningful assessment tools for the diagnosis and follow-up of cartilage abnormalities. T2* mapping could be a good alternative because it would combine the benefits of biochemical cartilage evaluation with remarkable features including short imaging time and the ability of high-resolution three-dimensional cartilage evaluation - without the need for contrast media administration or special hardware. Several in vitro and in vivo studies, which have elaborated on the potential of cartilage T2* assessment in various cartilage disease patterns and grades of degeneration, have been reported. However, much remains to be understood and certain unresolved questions have become apparent with these studies that are crucial to the further application of this technique. This review summarizes the principles of the technique and current applications of T2* mapping for articular cartilage assessment. Limitations of recent studies are discussed and the potential implications for patient care are presented. (orig.)

  12. Thermo-economic design optimization of parabolic trough solar plants for industrial process heat applications with memetic algorithms

    International Nuclear Information System (INIS)

    Silva, R.; Berenguel, M.; Pérez, M.; Fernández-Garcia, A.

    2014-01-01

    Highlights: • A thermo-economic optimization of a parabolic-trough solar plant for industrial process heat applications is developed. • An analysis of the influence of economic cost functions on optimal design point location is presented. • A multi-objective optimization approach to the design routine is proposed. • A sensitivity analysis of the optimal point location to economic, operational, and ambient conditions is developed. • Design optimization of a parabolic trough plant for a reference industrial application is developed. - Abstract: A thermo-economic design optimization of a parabolic trough solar plant for industrial processes with memetic algorithms is developed. The design domain variables considered in the optimization routine are the number of collectors in series, the number of collector rows, the row spacing, and the storage volume. Life cycle savings, levelized cost of energy, and payback time objective functions are compared to study their influence on optimal design point location. Furthermore, a multi-objective optimization approach is proposed to analyze the design problem from a multi-criteria economic point of view. An extensive set of optimization cases is performed to estimate the influence of fuel price trend, plant location, demand profile, operation conditions, solar field orientation, and radiation uncertainty on the optimal design. The results show that thermo-economic design optimization based on short-term criteria such as payback time leads to smaller plants with higher solar field efficiencies and smaller solar fractions, while optimization criteria based on the long-term performance of the plants, such as life cycle savings, lead to the opposite conclusion. The role of plant location and the future evolution of gas prices in the thermo-economic performance of the solar plant has also been analyzed. Thermo-economic optimization of a parabolic trough solar plant design for the reference industrial
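
The tension between short- and long-term criteria can be reproduced with a toy calculation, under assumed (hypothetical) capital costs, annual fuel savings, plant lifetime and discount rate: the small plant pays back sooner, yet the large plant achieves higher discounted life cycle savings.

```python
def payback_years(capex, annual_savings):
    """Simple (undiscounted) payback time: the short-term criterion."""
    return capex / annual_savings

def life_cycle_savings(capex, annual_savings, years, rate):
    """Discounted savings over plant life minus capital: long-term criterion."""
    present_value = sum(annual_savings / (1 + rate) ** t
                        for t in range(1, years + 1))
    return present_value - capex

small = (100_000, 30_000)   # capex, annual fuel savings (hypothetical)
large = (300_000, 70_000)
print(payback_years(*small), payback_years(*large))   # ≈3.33 vs ≈4.29 years
print(life_cycle_savings(*small, 20, 0.05),
      life_cycle_savings(*large, 20, 0.05))           # larger plant wins on LCS
```

Optimizing the payback criterion favours the smaller plant, while optimizing life cycle savings favours the larger one, which is precisely the reversal the abstract reports.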

  13. Design and Optimization of UWB for Air Coupled GPR Applications

    Science.gov (United States)

    2014-10-01

    [Only fragments of this report survive extraction: figure captions (Figure 36: measured (dashed) versus simulated (solid) radiation pattern at 1.9 GHz; Figure 37: measured radiation pattern), reference fragments (Evans, S., "Radio techniques for the measurement of ice thickness," Polar Record 11.73 (1963): 406-410; Federal Communications Commission (FCC)), phrases on directed gain and radiation efficiency, and the report title: Design and Optimization of UWB Antenna for Air-Coupled GPR Applications.]

  14. Optimization (Alara) and probabilistic exposures: the application of optimization criteria to the control of risks due to exposures of a probabilistic nature

    International Nuclear Information System (INIS)

    Gonzalez, A.J.

    1989-01-01

    The paper describes the application of the principles of optimization recommended by the International Commission on Radiological Protection (ICRP) to the restraint of radiation risks due to exposures that may or may not be incurred and to which a probability of occurrence can be assigned. After describing the concept of probabilistic exposures, it proposes a basis for a converging policy of control for both certain and probabilistic exposures, namely the dose-risk relationship adopted for radiation protection purposes. On that basis, some coherent approaches for dealing with probabilistic exposures, such as the limitation of individual risks, are discussed. The optimization of safety for reducing all risks from probabilistic exposures to as-low-as-reasonably-achievable (ALARA) levels is reviewed in full. The principles of optimization of protection are used as a basic framework, and the relevant factors to be taken into account when moving to probabilistic exposures are presented. The paper also reviews the decision-aiding techniques suitable for performing optimization, with particular emphasis on the multi-attribute utility analysis technique. Finally, there is a discussion of some practical applications of decision-aiding multi-attribute utility analysis to probabilistic exposures, including the use of probabilistic utilities. In its final outlook, the paper emphasizes the need for standardization and solutions to generic problems if optimization of safety is to be successful.

  15. Application of optimization numerical methods in calculation of the two-particle nuclear reactions

    International Nuclear Information System (INIS)

    Titarenko, N.N.

    1987-01-01

    The PEAK-OPT package of applied optimization programs, intended for the solution of problems of absolute minimization of functions of many variables in calculations of cross sections of binary nuclear reactions, is described. The main algorithms for the computerized numerical solution of systems of nonlinear equations by the least squares method are presented. The principles of construction and operation of the optimization software, as well as results of its practical application, are given.

  16. Soft Computing Optimizer For Intelligent Control Systems Design: The Structure And Applications

    Directory of Open Access Journals (Sweden)

    Sergey A. Panfilov

    2003-10-01

    Full Text Available The Soft Computing Optimizer (SCO), a new software tool for the design of robust intelligent control systems, is described. It is based on the hybrid methodology of soft computing and stochastic simulation, and takes as input measured or simulated data about the modeled system. SCO is used to design an optimal fuzzy inference system that approximates the random behavior of the control object with a given accuracy. The task of constructing the fuzzy inference system is reduced to subtasks: forming the linguistic variables for each input and output variable, creating the rule database, optimizing the rule database, and refining the parameters of the membership functions. Each subtask is solved by a corresponding genetic algorithm with an appropriate fitness function. The result of applying SCO is the knowledge base of a fuzzy controller, which contains the value information about the developed fuzzy inference system. This information can be downloaded into an actual fuzzy controller to perform online fuzzy control. Simulation results for robust fuzzy control of nonlinear dynamic systems and experimental results of an application to automotive semi-active suspension control are demonstrated.

  17. CALIBRATION, OPTIMIZATION, AND SENSITIVITY AND UNCERTAINTY ALGORITHMS APPLICATION PROGRAMMING INTERFACE (COSU-API)

    Science.gov (United States)

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...

  18. Particle Swarm Optimization

    Science.gov (United States)

    Venter, Gerhard; Sobieszczanski-Sobieski Jaroslaw

    2002-01-01

    The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization lies primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential for efficient computation with very large numbers of concurrently operating processors.
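
A minimal particle swarm optimizer in the spirit described above can be sketched as follows. The inertia (w) and acceleration coefficients (c1, c2) are common textbook defaults rather than the paper's tuned values, and the quadratic test objective is purely illustrative.

```python
import random

random.seed(42)  # reproducible runs

def pso(objective, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over box `bounds` with a basic particle swarm."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [objective(p) for p in pos]
    gbest = min(pbest, key=objective)[:]          # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward own best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest, objective(gbest)

# quadratic bowl with its minimum at (3, -2)
best, val = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
                [(-10.0, 10.0), (-10.0, 10.0)])
print([round(x, 3) for x in best], round(val, 6))
```

Nothing here requires gradients, which is why the method extends so naturally to the discrete and discontinuous problems the paper highlights.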

  19. Model-Based Analysis and Optimization of the Mapping of Cortical Sources in the Spontaneous Scalp EEG

    Directory of Open Access Journals (Sweden)

    Andrei V. Sazonov

    2007-01-01

    Full Text Available The mapping of brain sources into the scalp electroencephalogram (EEG depends on volume conduction properties of the head and on an electrode montage involving a reference. Mathematically, this source mapping (SM is fully determined by an observation function (OF matrix. This paper analyses the OF-matrix for a generation model for the desynchronized spontaneous EEG. The model involves a four-shell spherical volume conductor containing dipolar sources that are mutually uncorrelated so as to reflect the desynchronized EEG. The reference is optimized in order to minimize the impact in the SM of the sources located distant from the electrodes. The resulting reference is called the localized reference (LR. The OF-matrix is analyzed in terms of the relative power contribution of the sources and the cross-channel correlation coefficient for five existing references as well as for the LR. It is found that the Hjorth Laplacian reference is a fair approximation of the LR, and thus is close to optimum for practical intents and purposes. The other references have a significantly poorer performance. Furthermore, the OF-matrix is analyzed for limits to the spatial resolution for the EEG. These are estimated to be around 2 cm.
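
Changing the EEG reference is a linear operation on the measured potentials. The sketch below applies the common average reference to a toy set of channel values (hypothetical microvolts); the paper's localized reference is a more elaborate, optimized variant of the same linear idea, not what is implemented here.

```python
def average_reference(channels):
    """Re-reference every channel to the mean over all channels."""
    mean = sum(channels) / len(channels)
    return [v - mean for v in channels]

raw = [12.0, -3.0, 5.0, -2.0]          # hypothetical scalp potentials
print(average_reference(raw))          # → [9.0, -6.0, 2.0, -5.0], sums to zero
```

The choice of reference changes every entry of the OF-matrix in the same way: it subtracts one fixed linear combination of the channels from each row.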

  20. Optimal control with aerospace applications

    CERN Document Server

    Longuski, James M; Prussing, John E

    2014-01-01

    Want to know not just what makes rockets go up but how to do it optimally? Optimal control theory has become such an important field in aerospace engineering that no graduate student or practicing engineer can afford to be without a working knowledge of it. This is the first book that begins from scratch to teach the reader the basic principles of the calculus of variations, develop the necessary conditions step-by-step, and introduce the elementary computational techniques of optimal control. This book, with problems and an online solution manual, provides the graduate-level reader with enough introductory knowledge so that he or she can not only read the literature and study the next level textbook but can also apply the theory to find optimal solutions in practice. No more is needed than the usual background of an undergraduate engineering, science, or mathematics program: namely calculus, differential equations, and numerical integration. Although finding optimal solutions for these problems is a...

  1. Online maps with APIs and webservices

    CERN Document Server

    Peterson, Michael P

    2014-01-01

    With the Internet now the primary method of accessing maps, this volume examines developments in the world of online map delivery, focusing in particular on application programmer interfaces such as the Google Maps API, and their utility in thematic mapping.

  2. On Dirichlet-to-Neumann Maps and Some Applications to Modified Fredholm Determinants

    OpenAIRE

    Gesztesy, Fritz; Mitrea, Marius; Zinchenko, Maxim

    2010-01-01

    We consider Dirichlet-to-Neumann maps associated with (not necessarily self-adjoint) Schrödinger operators in $L^2(\Omega; d^n x)$, $n=2,3$, where $\Omega$ is an open set with a compact, nonempty boundary satisfying certain regularity conditions. As an application we describe a reduction of a certain ratio of modified Fredholm perturbation determinants associated with operators in $L^2(\Omega; d^n x)$ to modified Fredholm perturbation determinants associated with operators in $L^2(\partial\Om...

  3. Formal genetic maps

    African Journals Online (AJOL)

    Mohammad Saad Zaghloul Salem

    2014-12-24

    Dec 24, 2014 ... ome/transcriptome/proteome, experimental induced maps that are intentionally designed and con- ... genetic maps imposed their application in nearly all fields of medical genetics including ... or genes located adjacent to, or near, them. ... types of markers, e.g., clinical markers (eye color), genomic.

  4. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    Science.gov (United States)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal

  5. Optimization of selective inversion recovery magnetization transfer imaging for macromolecular content mapping in the human brain.

    Science.gov (United States)

    Dortch, Richard D; Bagnato, Francesca; Gochberg, Daniel F; Gore, John C; Smith, Seth A

    2018-03-24

    To optimize a selective inversion recovery (SIR) sequence for macromolecular content mapping in the human brain at 3.0T. SIR is a quantitative method for measuring magnetization transfer (qMT) that uses a low-power, on-resonance inversion pulse. This results in a biexponential recovery of the free water signal that can be sampled at various inversion/predelay times (tI/tD) to estimate a subset of qMT parameters, including the macromolecular-to-free pool-size-ratio (PSR), the R1 of free water (R1f), and the rate of MT exchange (kmf). The adoption of SIR has been limited by long acquisition times (≈4 min/slice). Here, we use Cramér-Rao lower bound theory and data reduction strategies to select optimal tI/tD combinations to reduce imaging times. The schemes were experimentally validated in phantoms, and tested in healthy volunteers (N = 4) and a multiple sclerosis patient. Two optimal sampling schemes were determined: (i) a 5-point scheme (kmf estimated) and (ii) a 4-point scheme (kmf assumed). In phantoms, the 5/4-point schemes yielded parameter estimates with similar SNRs as our previous 16-point scheme, but with 4.1/6.1-fold shorter scan times. Pair-wise comparisons between schemes did not detect significant differences for any scheme/parameter. In humans, parameter values were consistent with published values, and similar levels of precision were obtained from all schemes. Furthermore, fixing kmf reduced the sensitivity of PSR to partial-volume averaging, yielding more consistent estimates throughout the brain. qMT parameters can be robustly estimated in ≤1 min/slice (without independent measures of ΔB0, B1+, and T1) when optimized tI/tD combinations are selected. © 2018 International Society for Magnetic Resonance in Medicine.
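The Cramér-Rao-based selection of sampling times can be illustrated on a generic biexponential model. The signal equation, parameter values, noise level, and candidate schemes below are illustrative assumptions, not the authors' actual SIR model or optimized schemes.

```python
import numpy as np

def signal(t, theta):
    # Generic biexponential: two pools with amplitudes A1, A2 and rates R1, R2.
    A1, R1, A2, R2 = theta
    return A1 * np.exp(-R1 * t) + A2 * np.exp(-R2 * t)

def crlb(t, theta, sigma=0.01, eps=1e-6):
    """Cramér-Rao lower bounds (per-parameter std) from a central-difference
    Jacobian, assuming independent Gaussian noise of standard deviation sigma."""
    J = np.empty((len(t), len(theta)))
    for j in range(len(theta)):
        d = np.zeros(len(theta))
        d[j] = eps
        J[:, j] = (signal(t, theta + d) - signal(t, theta - d)) / (2 * eps)
    fisher = J.T @ J / sigma**2          # Fisher information matrix
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

theta = np.array([0.6, 10.0, 0.4, 1.0])         # illustrative parameter values
dense = np.linspace(0.01, 4.0, 16)              # a long 16-point scheme
sparse = np.array([0.02, 0.1, 0.3, 1.0, 3.0])   # a candidate reduced 5-point scheme
# Comparing crlb(dense, theta) with crlb(sparse, theta) shows how much precision
# each parameter loses when the sampling scheme is reduced.
```

A data-reduction search in this spirit would score many candidate time sets by such bounds and keep the smallest set whose precision remains acceptable.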

  6. Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control

    Science.gov (United States)

    Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.

    2015-01-01

    The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics with system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.
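The Pareto-optimal solutions such a multi-objective run produces are simply the non-dominated points of the sampled design space. A minimal non-dominated filter (a sketch, not the paper's multi-objective genetic algorithm) makes the idea concrete:

```python
def pareto_front(points):
    """Return the non-dominated points, with all objectives minimized."""
    def dominates(q, p):
        # q dominates p if it is no worse in every objective and better in one.
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy two-objective trade-off (e.g. propellant mass vs. flight time);
# the values are made up for illustration.
pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
front = pareto_front(pts)
```

A genetic-algorithm outer loop would apply a filter like this each generation to rank candidate hardware/trajectory combinations.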

  7. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. This requires exploiting the data parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of these applications employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependency between steps, thus making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application where the distribution of work and memory access pattern at each time step is irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address

  8. A molecular marker map for roses

    NARCIS (Netherlands)

    Debener, T.; Mattiesch, L.; Vosman, B.

    2001-01-01

    In addition to an existing core map for diploid roses, which comprised 305 molecular markers, 60 additional markers were mapped to extend the map. As a first application of the information contained in the map, the map position of a resistance gene from roses, Rdr1, was determined by identifying

  9. Constellation labeling optimization for bit-interleaved coded APSK

    Science.gov (United States)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
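A toy sketch of the binary switching idea: starting from a natural labeling, pairwise label swaps are accepted whenever they reduce a cost that penalizes close symbol pairs whose labels differ in many bits. The 8-point ring and the cost function below are illustrative stand-ins, not the 32-APSK constellation or the paper's combined Euclidean-distance/mutual-information cost.

```python
import itertools
import math

def cost(points, labels):
    """Penalize pairs of nearby symbols whose labels differ in many bits."""
    c = 0.0
    for i, j in itertools.combinations(range(len(points)), 2):
        d2 = abs(points[i] - points[j]) ** 2          # squared Euclidean distance
        ham = bin(labels[i] ^ labels[j]).count("1")   # Hamming distance of labels
        c += ham / d2
    return c

def binary_switching(points, labels, sweeps=20):
    """Greedy pairwise label swaps, accepted only when the cost decreases."""
    labels = labels[:]
    best = cost(points, labels)
    for _ in range(sweeps):
        improved = False
        for i, j in itertools.combinations(range(len(labels)), 2):
            labels[i], labels[j] = labels[j], labels[i]
            c = cost(points, labels)
            if c < best:
                best, improved = c, True
            else:
                labels[i], labels[j] = labels[j], labels[i]  # revert the swap
        if not improved:
            break
    return labels, best

# Toy 8-point ring (a stand-in for one APSK ring), natural binary labels to start.
pts = [complex(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
labs, final_cost = binary_switching(pts, list(range(8)))
```

On this ring the procedure drives the labeling toward a Gray-like mapping, where geometric neighbors differ in few bits.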

  10. Applications of an alternative formulation for one-layer real time optimization

    Directory of Open Access Journals (Sweden)

    Schiavon Júnior A.L.

    2000-01-01

    Full Text Available This paper presents two applications of an alternative formulation for a one-layer real-time structure for control and optimization. This new formulation has arisen from the predictive controller QDMC (Quadratic Dynamic Matrix Control), a type of Model Predictive Control (MPC). At each sampling time, the values of the process outputs are fed into the optimization-control structure, which supplies the new values of the manipulated variables already considering the best process conditions. The optimization variables are both set-point changes and control actions. The future stationary outputs and the future stationary control actions both have a formulation different from the conventional one-layer structure, and they are calculated from the inverse gain matrix of the process. This alternative formulation generates a convex problem, which can be solved by less sophisticated optimization algorithms. Linear and nonlinear economic objective functions were considered. The proposed approach was applied to two linear models, one SISO (single-input/single-output) and the other MIMO (multiple-input/multiple-output). The results showed excellent performance.

  11. A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.

    Science.gov (United States)

    Rong, Xing; Frey, Eric C

    2013-08-01

    Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different than typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). 
To make the optimization results for quantitative 90Y bremsstrahlung SPECT more
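The figure of merit described above can be sketched as a mass-weighted RMSE that folds together the bias and variance of the activity estimates. The array shapes and weighting below are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def rmse_fom(estimates, truth, mass):
    """Inverse-mass-weighted RMSE over volumes of interest (VOIs).

    estimates: shape (n_noise_realizations, n_vois) activity estimates.
    Per VOI, RMSE^2 = bias^2 + variance; inverse-mass weighting reflects
    the dosimetry application (dose scales with activity per unit mass).
    """
    est = np.asarray(estimates, dtype=float)
    truth = np.asarray(truth, dtype=float)
    bias2 = (est.mean(axis=0) - truth) ** 2   # squared bias per VOI
    var = est.var(axis=0)                     # variance per VOI
    rmse = np.sqrt(bias2 + var)
    w = 1.0 / np.asarray(mass, dtype=float)   # inverse-mass weights
    return float(np.sum(w * rmse) / np.sum(w))
```

A collimator search would evaluate this scalar for each candidate parameter pair and keep the design with the smallest value.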

  12. Participatory mapping new data, new cartography

    CERN Document Server

    Plantin, Jean-Christophe

    2014-01-01

    This book examines applications of online digital mapping, called mashups (or composite applications), and analyzes mapping practices in online socio-technical controversies. The hypothesis put forward is that the ability to create an online map accompanies the formation of an online audience and provides support for a position in a debate on the Web. The first part provides a study of the map: a combination of map and statistical reasoning; crossings between map theories and GIS theories; and recent developments in digitizing the map, from Geographic Information Systems (GIS) to Web maps. The second part is based on a corpus of twenty "mashup" maps and offers a techno-semiotic analysis highlighting the "thickness of the mediation" they represent in a process of communication on the Web. The map as a device to "make do" is thus traced through these stages of creation, from digital data to their visualization, before describing the construction of the map as a tool for visual evidence in public debates, and ending wit...

  13. Deriving optimal exploration target zones on mineral prospectivity maps

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available into an objective function in simulated annealing in order to derive a set of optimal exploration focal points. Each optimal exploration focal point represents a pixel or location within a circular neighborhood of pixels with high posterior probability of mineral...
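The simulated-annealing selection of focal points can be sketched generically; the "posterior probability" grid, neighborhood move, and cooling schedule below are illustrative assumptions, not the paper's actual objective function over prospectivity maps.

```python
import math
import random

def simulated_annealing(score, neighbor, state, t0=1.0, cooling=0.95, steps=500):
    """Generic SA maximizer: accepts worse moves with Boltzmann probability,
    which shrinks as the temperature cools."""
    best, best_s = state, score(state)
    cur, cur_s, temp = state, best_s, t0
    for _ in range(steps):
        cand = neighbor(cur)
        s = score(cand)
        # Always accept improvements; sometimes accept worse moves early on.
        if s >= cur_s or random.random() < math.exp((s - cur_s) / temp):
            cur, cur_s = cand, s
            if s > best_s:
                best, best_s = cand, s
        temp *= cooling
    return best, best_s

# Toy 1-D "posterior probability" grid; a focal point is a grid index,
# and the neighborhood move shifts it by one cell.
grid = [0.1, 0.3, 0.2, 0.8, 0.4, 0.9, 0.5, 0.2, 0.6, 0.1]
move = lambda i: max(0, min(len(grid) - 1, i + random.choice([-1, 1])))
best_i, best_p = simulated_annealing(lambda i: grid[i], move, 0)
```

Selecting a set of focal points, as in the paper, would extend the state to several indices and add terms discouraging overlapping neighborhoods.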

  14. Conference on "Optimization, Control and Applications in the Information Age" : in Honor of Panos M. Pardalos’s 60th Birthday

    CERN Document Server

    Karakitsiou, Athanasia

    2015-01-01

    Recent developments in theory, algorithms, and applications in optimization and control are discussed in these proceedings, based on selected talks from the ‘Optimization, Control, and Applications in the Information Age’ conference, organized in honor of Panos Pardalos’s 60th birthday. This volume contains numerous applications to optimal decision making in energy production and fuel management, data mining, logistics, supply chain management, market network analysis, risk analysis, and community network analysis.  In addition, a short biography is included describing Dr. Pardalos’s path from a shepherd village on the high mountains of Thessaly to academic success. Due to the wide range of topics contained in this publication, such as global optimization, combinatorial optimization, game theory, stochastics and programming, scientists, researchers, and students in optimization, operations research, analytics, mathematics and computer science will be interested in this volume.

  15. [Optimized application of nested PCR method for detection of malaria].

    Science.gov (United States)

    Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C

    2017-04-28

    Objective To optimize the application of the nested PCR method for the detection of malaria according to working practice, so as to improve the efficiency of malaria detection. Methods PCR premixing solution, internal primers for further amplification, and newly designed primers targeting two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions, and specific primers of P. ovale on the basis of routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and examination samples of malaria were detected by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were used to detect the same positive malarial blood samples simultaneously, the PCR products of the two methods showed no significant difference, but non-specific amplification was obviously reduced, the detection rate of P. ovale subspecies improved, and the total specificity also increased with the optimized method. The actual detection results of 111 cases of malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, and those of the optimized method were both 93.48%; there was no statistically significant difference between the two methods in sensitivity (P > 0.05), but there was a statistically significant difference in specificity (P < 0.05). The optimized nested PCR can improve the specificity without reducing the sensitivity relative to the routine nested PCR, and it also saves cost and increases the efficiency of malaria detection by involving fewer experimental steps.

  16. Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

    2002-01-01

    The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that demonstrates severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.

  17. Optimal covariate designs theory and applications

    CERN Document Server

    Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar

    2015-01-01

    This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...

  18. Validation and application of Acoustic Mapping Velocimetry

    Science.gov (United States)

    Baranya, Sandor; Muste, Marian

    2016-04-01

    The goal of this paper is to introduce a novel methodology to estimate bedload transport in rivers based on an improved bedform tracking procedure. The measurement technique combines components and processing protocols from two contemporary nonintrusive instruments: acoustic and image-based. The bedform mapping is conducted with acoustic surveys while the estimation of the velocity of the bedforms is obtained with processing techniques pertaining to image-based velocimetry. The technique is therefore called Acoustic Mapping Velocimetry (AMV). The implement